220 points by mikece 29 days ago | 5 comments
jprjr_ 29 days ago
The thing I'm excited about is decoding chained Ogg FLAC files.

Some software wouldn't work correctly with FLAC-based Icecast streams if it used libFLAC/libFLAC++ for demuxing and decoding. Usually these streams mux into Ogg and send updated metadata by closing out the previous Ogg bitstream and starting a new one. If you were using libFLAC to demux and decode, then when the stream updated, it would just hang forever. Apps would have to do their own Ogg demuxing and reset the decoder between streams.

Chained Ogg FLAC allows having lossless internet radio streams with rich, in-band metadata instead of relying on out-of-band methods. So you could have in-band album art, artist info, links - anything you can cram into a Vorbis comment block.
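
To make "in-band" concrete: these are the same Vorbis comment and picture blocks you'd normally write to a file with metaflac, except each link in the chain carries a fresh set of them. A sketch (tag values and file names are placeholders):

  # Attach Vorbis comments and album art to a FLAC file's metadata.
  # In chained Ogg FLAC, starting a new chain link lets you replace
  # these blocks mid-stream, so players pick up new metadata in-band.
  $ metaflac --set-tag="ARTIST=Example Artist" \
      --set-tag="TITLE=Example Track" \
      --import-picture-from=cover.jpg \
      track.flac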

BoingBoomTschak 28 days ago
Massive waste to use FLAC for Internet streaming, though. Opus was made for this purpose (in part).
nullc 27 days ago
HTTP streaming is pretty much inherently high latency, or at least none of the software stack is particularly agreeable. I don't think it's fair to say that Opus was made for that purpose in any way that any other general purpose audio codec wasn't. (or even... vorbis really was made for that purpose, though sure you're better off using opus for it).

Lossless audio is unconditionally transparent-- you won't have coding artifacts, and you won't have issues with the codec accidentally increasing the crest factor of the audio and creating clipping where there was none. If you have the bandwidth for it, why not?

So many people are using streaming in lieu of radio-- a true broadcast medium. I think any high ground to argue efficiency was lost at that point. :) making the streams use 10x the bandwidth? meh. Maybe convincing video sites to provide an option to turn off the video would be a better use of complaint energy: it impacts more people than lossless streaming and wastes a lot more bandwidth :)

nullify88 29 days ago
Are there any public lossless radio streams out there?
longitudinal93 28 days ago
You can filter by "flac" on radio-browser:

https://www.radio-browser.info/search?page=1&order=clickcoun...

masklinn 29 days ago
That’s nice although probably not of much use to most people: iirc FLAC encoding was 60+x realtime on modern machines already so unless you need to transcode your entire library (which you could do in parallel anyway) odds are you spend more time on setting up the encoding than actually running it.
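
A sketch of that per-file parallelism, assuming GNU parallel is installed (paths are placeholders):

  # One single-threaded flac process per file, one file per core -
  # no -j flag needed for a whole-library transcode.
  $ find ~/music -name '*.wav' -print0 | parallel -0 flac -8 {}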
diggan 29 days ago
> That’s nice although probably not of much use to most people

Doesn't that depend on the hardware of "most people"? Even if you have a shit CPU, you probably have more than 1 core, so this will be at least a little bit helpful for those people, wouldn't it?

Edit: Just tried turning a ~5 minute .wav into .flac (without multithreading) on an Intel Celeron N4505 (worst CPU I have running atm) and it took around 20 seconds, FWIW

_flux 29 days ago
But an even smaller number of people have individual, very long raw audio files.

I've converted a bunch of sound packs (for music production) to flac and it really takes next to no time at all. I suppose those are quite short audio files, but there's a lot of them, 20 gigabytes in total (in flac).

Perhaps the person who wrote this improvement did have a use case for it, though :).

stonemetal12 29 days ago
FLAC is more than 20 years old at this point.

At least according to Wikipedia it doesn't look like they've changed the algorithm much in the meantime, so just about anything should be able to run it today.

johncolanduoni 29 days ago
In most situations you’d be encoding more than one song at a time, which would already parallelize enough unless you had a monster cpu and only one album.
diggan 29 days ago
I dunno, when I export a song I'm working on, it's just that one song. I think there are more use cases for .flac than just converting a big existing music collection.
masklinn 29 days ago
> this will be at least a little bit helpful for those people, wouldn't it

Probably not, because they're unlikely to have enough stuff to export that it's relevant.

> Edit: Just tried turning a ~5 minute .wav into .flac (without multithreading) on an Intel Celeron N4505 (worst CPU I have running atm) and it took around 20 seconds, FWIW

Which is basically nothing. It takes more time than that to fix the track's tagging.

diggan 29 days ago
I mean, you're again assuming the only use case is "encode and tag a huge music collection", encoding is used for more than that.

For example, I have a raspberry pi that is responsible for recording as soon as it powers up. Then I can press a button, and it grabs the last 60 recorded minutes, which happen to be saved as .wav right now. I'm fine with that; the rest of my pipeline works with .wav.

But if my pipeline required flac, I would need to turn the wav into flac on the raspberry pi at this point, for 60 minutes of audio, and of course I'd like that to be faster if possible, so I can start editing it right away.
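
With this release that conversion step could be a single multithreaded invocation - a sketch, with the thread count and file names made up for a 4-core Pi:

  # Compress the captured hour using all four cores (-j is new in 1.5.0).
  $ flac -j4 -5 -o last-hour.flac last-hour.wav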

dijital 29 days ago
For folks working in bioacoustics I think it might be pretty relevant. I'm working on a project with large batches of high fidelity, ultrasonic bioacoustic recordings that need to be in WAV format for species analysis but, at the data sizes involved, FLAC is a good archive format (~60% smaller).

This release will probably be worth a look to speed the archiving/restoring jobs up.

Doohickey-d 28 days ago
There's also a use case of using flac for non-audio data: anything that looks like a waveform compresses quite well - e.g. analog signals, data logging, etc. One example is https://github.com/oyvindln/vhs-decode - capturing high-bandwidth signals directly from analog tape with SDR-like hardware, for later software demodulation.

Currently there's flacCL, which compresses on the GPU, so this is one more option.
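
flac can ingest that kind of capture directly as headerless raw samples - a sketch, with the signal parameters below as placeholders rather than vhs-decode's actual format:

  # Treat a raw signed 16-bit mono capture as "audio" and compress it.
  $ flac --force-raw-format --endian=little --sign=signed \
      --channels=1 --bps=16 --sample-rate=48000 \
      -8 -o capture.flac capture.s16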

dale_glass 29 days ago
I have a possible use for FLAC for realtime audio.

We (Overte, an open source VR environment) have a need for fast audio encoding, and sometimes audio encoding CPU time is the main bottleneck. For this purpose we support multiple codecs, and FLAC is actually of interest because it turns out that the niche of "compressing audio really fast but still in good quality" is a rare one.

We mainly use Opus, which is great, but it's fairly CPU heavy, so there can be cases where one might want to sacrifice some bandwidth in exchange for less CPU time.

Lockal 29 days ago
It could be useful for audio editors, like here: https://manual.audacityteam.org/man/undo_redo_and_history.ht... - many steps require a full save of the tracks (potentially dozens of them). It is possible to compress the history retrospectively, but why, when it can be done in parallel?
CyberDildonics 29 days ago
If you have multiple tracks you would just put different tracks on different threads anyway and parallelization is trivial.
flounder3 29 days ago
Was about to say the same thing. It was already blazingly fast, with a typical album only taking seconds.
2OEH8eoCRo0 29 days ago
Even still, it was no issue saturating all CPU cores since each core could transcode a track at a time.
lazka 29 days ago
On Windows (so libwinpthread), 8C/16T machine:

  $ flac --version
  flac 1.5.0
  $ hyperfine -r5 "flac -f -8 a.wav a.flac" "flac -j16 -f -8 a.wav a.flac"
  Benchmark 1: flac -f -8 a.wav a.flac
    Time (mean ± σ):     13.148 s ±  0.194 s    [User: 12.758 s, System: 0.361 s]
    Range (min … max):   12.934 s … 13.318 s    5 runs

  Benchmark 2: flac -j16 -f -8 a.wav a.flac
    Time (mean ± σ):      2.404 s ±  0.012 s    [User: 14.126 s, System: 1.355 s]
    Range (min … max):    2.395 s …  2.425 s    5 runs

  Summary
    flac -j16 -f -8 a.wav a.flac ran
      5.47 ± 0.09 times faster than flac -f -8 a.wav a.flac
jiehong 29 days ago
Interestingly, FLAC is now published as RFC 9639 [0].

[0]: https://www.rfc-editor.org/rfc/rfc9639.html

macawfish 29 days ago
Will this translate to low latency FLAC streaming?
jprjr_ 29 days ago
For FLAC, latency is primarily driven by block size and the library's own internal buffering.

Using the API you can set the blocksize, but there's no manual flush function. The only way to flush output is to call the "finish" function, which, as its name implies, marks the stream as finished.

I actually wrote my own FLAC encoder for this exact reason - https://github.com/jprjr/tflac - focused on latency over efficient compression.
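
At the CLI level, the blocksize knob looks like this - a sketch, with the capture tool and numbers purely illustrative, and note the buffering caveat above still applies since the flac binary sits on libFLAC:

  # Encode a live capture with small blocks: 512 samples at 48 kHz
  # is ~10.7 ms per block, versus the default 4096 (~85 ms).
  $ arecord -f S16_LE -r 48000 -c 2 -t wav - | flac -b 512 -c - > live.flac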

vodou 29 days ago
Probably not. It only mentions multi-threaded encoding, not decoding. But for streaming it shouldn't matter a lot, since you only decode smaller chunks at a time; latency should be good. At least that is my experience, and 95% of my music listening is FLAC files.
ksec 29 days ago
People looking for low latency lossless streaming may want to take a look at WavPack.

https://www.wavpack.com

hart_russell 28 days ago
Would this be ideal for streaming from my navidrome server? Currently I stream FLAC on the local network and it converts it to opus on the fly when I'm on mobile.
shawabawa3 29 days ago
probably not as FLAC is basically only useful for archival purposes

for streaming you are better off with an optimised lossy codec

iamacyborg 29 days ago
I can understand why a big streaming provider might want to use a lossy codec from a bandwidth cost perspective but what about in the context of streaming in your own network (eg through Roon or similar)?
masklinn 29 days ago
Why would you transcode to FLAC when streaming? And transcode from what?
VyseofArcadia 29 days ago
Off the top of my head, let's say the file you want to stream is Ogg Opus, but the device you're streaming to only supports FLAC and MP3. You could transcode to MP3 and get all the artifacts that come with a double lossy encode, or you could transcode to FLAC, which doesn't buy you any bandwidth savings but does avoid double-lossy artifacts.
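
A sketch of that with ffmpeg (file names are placeholders):

  # Lossy-to-lossless transcode: no new artifacts are introduced,
  # the decoded Opus audio is just stored exactly as-is.
  $ ffmpeg -i input.opus -c:a flac output.flac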
jprjr_ 29 days ago
Chained Ogg FLAC works really well as an intermediary/internal streaming format.

In my case - I have an internet radio station available in a few different codec/bitrate combinations. I generate a chained Ogg FLAC stream so I have in-band metadata with lossless audio.

The stream gets piped into a process that encodes the lossy versions and updates metadata the correct way per stream (HLS with timed ID3, Icecast with chained Ogg Opus, Icecast with AAC + Shoutcast-style metadata).
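
One hypothetical shape for the lossy leg of such a pipeline (host, mount, and password are made up, and whether a given ffmpeg build handles chained Ogg input cleanly is worth verifying):

  # Read the internal chained Ogg FLAC stream on stdin, encode one
  # lossy rendition, and push it to an Icecast mount.
  $ ffmpeg -i pipe:0 -c:a libopus -b:a 96k \
      -content_type application/ogg -f ogg \
      icecast://source:hackme@localhost:8000/stream.opus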

bojanvidanovic 29 days ago
Out of curiosity, can you provide a link to your station? I have created a website for listening to lossless internet radio stations: https://audiophile.fm
jprjr_ 29 days ago
Well, I only use FLAC internally - none of the public streams are FLAC
theandrewbailey 29 days ago
> for streaming you are better off with an optimised lossy codec

If you are Spotify, that probably makes sense. But if you are someone with a homelab, you probably have enough bandwidth and then some, so streaming FLAC to a home theater (your own or your friend's) makes sense.

PaulDavisThe1st 29 days ago
It is what (originally) SlimDevices (now Logitech Media Server) does, for example.
Night_Thastus 29 days ago
Audio files are tiny, itty bitty things - even uncompressed. If you have the ability to use a lossless file at 0 extra cost... why not? Massive streaming services like Spotify obviously don't; the economics are way different.
she46BiOmUerPVj 29 days ago
I have a flac collection that I was streaming, and I ended up writing some software to encode the entire library to opus because when you are driving around you never know how good your bandwidth will be. Since moving to opus I never have my music cut off anymore. Even with the nice stereo in my car I don't notice any quality problems. There are definitely reasons to not stream wav or flac around all the time.
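
A sketch of that kind of batch job (paths and bitrate are placeholders; opusenc reads FLAC directly):

  # Mirror a FLAC library to Opus for bandwidth-constrained listening.
  $ find ~/music -name '*.flac' | while read -r f; do
      opusenc --bitrate 128 "$f" "${f%.flac}.opus"
    done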
givinguflac 29 days ago
Why re-encode to a crap codec when you could just use plex with adaptive bitrate streaming?
amlib 29 days ago
Could you elaborate on Opus being a crap codec? AFAIK it's a state of the art lossy codec for high quality sound/music (and some other applications)
givinguflac 29 days ago
Because it’s lossy, period. You may not notice it if you’re not looking hard enough, but you wouldn’t accept a .zip file of a Word doc that was missing letters or words. You’d use lossless compression.

I’m not saying there’s no use for opus- just that if your goal is a high quality listening experience, that ain’t it.

https://www.ecstuff4u.com/2023/03/opus-vs-flac-difference-co...

amlib 28 days ago
That's like saying cars are crap because they're not as powerful as trucks. Both are completely different classes of vehicles optimizing for different use cases. So are lossy vs lossless codecs; you can't just say one is superior to the other without specifying the use case.

For instance, I've got a navidrome instance with my whole music library accessible from anywhere in the world through my phone. However, there are situations where I may not have any internet connection, so I use the app on the phone (Tempo) to mark the songs I want downloaded and available even when offline. But my phone storage wouldn't hold even a quarter of my playlists if I went with the original encode of the songs (mostly lossless flacs), so I instead set it to download a transcoded Opus 128kbps version of it all, and it fits on my phone with room to spare. It sounds pretty damn good through my admittedly average IEMs, and I get the benefit of offline playback. Even if you somehow had the absolute best playback system connected to my phone and could tell the difference, the tradeoff beats relying on internet connectivity.

izackp 29 days ago
That's a bad illustration. The letters are there - they're just slightly lower rez, like going from a 256x256 space per letter to 128x128. Is there a difference? Sure. Can you read it perfectly fine? Of course.

You could probably argue that these are handwritten letters but the argument still stands.

aeroevan 29 days ago
I doubt whatever plex would do could beat opus (unless it's already transcoding to opus)
pimeys 29 days ago
If you decide to stream with a lower bitrate in Plexamp, it transcodes to Opus.

You don't need to re-encode the files; just use Plex or Jellyfin and choose a lower bitrate when playing on your phone. Jellyfin uses AAC and Plexamp uses Opus.

pimeys 29 days ago
Tell that to some of my 24bit/192kHz flac files. About 300 megabytes each. Not nice to stream with plexamp using my 40 Mbps upstream... Easy to encode in opus though.
Marsymars 28 days ago
Even uncompressed, 24-bit/192 kHz stereo is <10 Mbps: 24 bits × 192,000 samples/s × 2 channels ≈ 9.2 Mbps.
epcoa 29 days ago
The major streaming platforms except Spotify have offered lossless streaming for years, as an upgrade or an included benefit (Apple Music), and even Spotify, the holdout, is releasing “Super Premium” soon. Opinion aside, lossless streaming is a big deal.
timcobb 29 days ago
why is that?
shawabawa3 29 days ago
the human ear just isn't good enough at processing sound to need lossless codecs

a modern audio codec at 320kbps bitrate is more than good enough.

Lossless is useful for recompressing stuff when new codecs come out or for different purposes without introducing artefacts, not really for listening (in before angry audiophiles attack me)

arp242 29 days ago
> a modern audio codec at 320kbps bitrate is more than good enough.

MP3 V0 should already be, and is typically smaller.

That said, it does depend on the quality of the encoder; back in the day a lot of MP3 encoders were not very good, even at high quality settings. These days LAME is the de-facto standard and it's pretty good, but maybe some others aren't(?)

elabajaba 28 days ago
Hell, modern audio codecs (opus and AAC, but not the ffmpeg opus/AAC encoders) are transparent at ~160-192k. MP3 is a legacy codec these days, and generally needs ~30% more bitrate for similar quality.
givinguflac 29 days ago
The human ear is absolutely good enough to hear the difference. However the vast majority of the population has not had listening training. I’ve done double blind tests repeatedly and provided it’s not on crap audio gear I can absolutely tell the difference, as can most of my golden ears trained acquaintances.
PaulDavisThe1st 29 days ago
This conflicts with every published double blind study that was not in a context that had the word "audiophile" in its name.

When you say you "can absolutely tell the difference", what score are you getting that proves you are doing better than guessing? And with what type of lossy encoding?

hackingonempty 29 days ago
People who make such claims either repeat the tests until they get one with a perfect score or otherwise don't count every trial; do a poor job of conversion, so there is clipping or other artifacts that break blinding; compare different sources, like the standard version that accompanies the "high rez" version on an SACD but may be from a different mastering; don't level match; don't actually do a real double blind test; don't do enough trials; or are just lying.
givinguflac 29 days ago
I’ve done multiple online tests and always scored at least in the 70s. Using foobar and my own CDs or hi-res downloads, I’ve encoded the same exact wav file to flac, mp3, and ogg. Flac wins. Using monitor speakers (Mackie MR5s) and a high-end DAC, it was not at all difficult for me.

I truly appreciate you calling me a liar though; really adds to the conversation.

BoingBoomTschak 28 days ago
> Mackie MR5

Don't want to sound snarky, but these are only "decent" (the succeeding MRx24 add an actually designed waveguide) and you'll never hear the sound of a DAC unless it's very badly implemented or has SNR troubles that show with ultra sensitive IEMs.

Anyone who has read enough of the research on the matter will tell you that the codec itself is an improbable culprit. The way it's encoded and the test setup are usually "at fault" in this situation.

snvzz 28 days ago
Indeed, it is quite possible to tell mp3 from flac in double blind test, with a good pair of ears and cheap (but well-measuring) audio equipment.
BoingBoomTschak 28 days ago
I give the benefit of the doubt for mp3, since the format is simply flawed even with all the LAME magic (e.g. https://wiki.hydrogenaud.io/index.php?title=LAME_Y_switch). Though the differences mostly remain in the realm of ABX competition, especially with VBR (always use VBR).

But for Vorbis, AAC and Opus with a decent encoder/bitrate, I doubt it. Barring killer samples like castanets showing pre-echo issues inherent to some MDCT-based codecs.

snvzz 28 days ago
Can't say about AAC, have never tried.

Vorbis fooled me at 160-192 territory... over a decade ago.

Opus is very hard, even at 128kbps, tested relatively recently (3-5yr ago)

hackingonempty 28 days ago
I don't know why you're hearing a difference; I'm just pointing out that there are many reasons why you could be hearing a difference that are not a specific effect of the codec itself.

You're right, I shouldn't be making crappy posts and will try to do better in the future.

snvzz 28 days ago
There's a simple tool to figure it out. It's called double blind test.

I went through it, and could tell mp3 (lame) from lossless, with high confidence. Many people I know and trust not to make shit up have done the same. Parent likely did the same as well.

However, I cannot ABX Opus from flac. It becomes impossible for me somewhere between 100 and 160 kbit/s, depending on song, weather and luck.

mp3 is quite good for a codec from 1991. But it is deeply flawed, and compares quite poorly to what we have today.

hylaride 29 days ago
> However the vast majority of the population has not had listening training.

This pretty much means they don't need it. And even if they were all trained, there's still very much a "good enough" for many situations. I don't need to waste data on lossless if I'm streaming to my phone in a noisy environment, even with noise cancelling. Add the fact that 99% of Bluetooth headphones are lossy anyway and you're left overoptimizing.

Sitting on a beanbag at home with a pair of Hifiman Susvaras or some other overpriced headset, that's maybe another story...

seba_dos1 29 days ago
> Add the fact that 99% of Bluetooth headphones are lossy anyway and you're left overoptimizing.

Perhaps somewhat counterintuitively, Bluetooth headphones are actually a use case where lossless audio helps the most, as you're avoiding double-lossy compression. SBC XQ isn't that bad, but it gets much worse when what you feed it is already lossy.

Marsymars 28 days ago
I really like my 2.4 GHz RF headphones. Not portable outside of the house, but optical input to the base, lossless wireless transmission, and compared to Bluetooth, better range/interference/pairing/obsolescence. I like them so much I bought a second pair that I have as a backup for when the first breaks.
givinguflac 29 days ago
Right, and children shouldn’t be taught to name distinct colors and therefore they would not need it. Hot take there bud.
hylaride 28 days ago
I didn't say they shouldn't be, just that they aren't.
timrichard 29 days ago
Possibly, depending on the listener.

But why would I bother recompressing when the various media players in the house can deal with the FLAC files just fine? On a typical home wifi network, a track probably transfers in about a second.

shawabawa3 29 days ago
right, but I was talking about the context of low-latency streaming, where the costs of sending FLAC over <insert modern audio codec here> are considerably higher (in terms of latency, bandwidth, etc)