Some software wouldn't work correctly with FLAC-based Icecast streams if it used libFLAC/libFLAC++ for demuxing and decoding. These streams usually mux into Ogg and send updated metadata by closing out the previous Ogg bitstream and starting a new one. If you were using libFLAC to demux and decode, the decoder would just hang forever when the stream updated. Apps had to do their own Ogg demuxing and reset the decoder between streams.
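The workaround looks roughly like this - a hedged sketch against libFLAC's stream decoder API, where the callbacks and the chain-boundary helpers (next_link_started, stream_ended) are placeholders you'd implement on top of your own Ogg demuxing:

#include <FLAC/stream_decoder.h>

/* Assumed user-supplied callbacks and helpers - not part of libFLAC. */
extern FLAC__StreamDecoderReadStatus read_cb(const FLAC__StreamDecoder *dec,
        FLAC__byte buffer[], size_t *bytes, void *ctx);
extern FLAC__StreamDecoderWriteStatus write_cb(const FLAC__StreamDecoder *dec,
        const FLAC__Frame *frame, const FLAC__int32 *const buffer[], void *ctx);
extern void error_cb(const FLAC__StreamDecoder *dec,
        FLAC__StreamDecoderErrorStatus status, void *ctx);
extern int next_link_started(void *ctx); /* saw an Ogg BOS page with a new serial */
extern int stream_ended(void *ctx);

void play_chained_stream(void *ctx)
{
    FLAC__StreamDecoder *dec = FLAC__stream_decoder_new();
    do {
        /* Re-initialize the decoder for every link in the chain. */
        if (FLAC__stream_decoder_init_ogg_stream(dec, read_cb,
                NULL, NULL, NULL, NULL, /* seek/tell/length/eof not needed */
                write_cb, NULL, error_cb, ctx)
                != FLAC__STREAM_DECODER_INIT_STATUS_OK)
            break;
        while (!next_link_started(ctx) && !stream_ended(ctx))
            if (!FLAC__stream_decoder_process_single(dec))
                break;
        FLAC__stream_decoder_finish(dec); /* required before re-initializing */
    } while (!stream_ended(ctx));
    FLAC__stream_decoder_delete(dec);
}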
Chained Ogg FLAC makes it possible to have lossless internet radio streams with rich, in-band metadata instead of relying on out-of-band methods. So you could have in-band album art, artist info, links - anything you can cram into a Vorbis comment block.
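For illustration, attaching that kind of metadata on the encoder side is a few calls into libFLAC's metadata API - a minimal sketch with made-up field names and values, error checking elided:

#include <stdbool.h>
#include <FLAC/metadata.h>
#include <FLAC/stream_encoder.h>

/* Build a VORBIS_COMMENT block and hand it to a not-yet-initialized encoder.
   The array is static because the encoder keeps the pointer until init. */
void add_stream_metadata(FLAC__StreamEncoder *enc)
{
    static FLAC__StreamMetadata *blocks[1];
    FLAC__StreamMetadata *vc = FLAC__metadata_object_new(FLAC__METADATA_TYPE_VORBIS_COMMENT);
    FLAC__StreamMetadata_VorbisComment_Entry entry;

    FLAC__metadata_object_vorbiscomment_entry_from_name_value_pair(&entry, "ARTIST", "Example Artist");
    FLAC__metadata_object_vorbiscomment_append_comment(vc, entry, /*copy=*/false);
    FLAC__metadata_object_vorbiscomment_entry_from_name_value_pair(&entry, "WEBSITE", "https://example.com");
    FLAC__metadata_object_vorbiscomment_append_comment(vc, entry, /*copy=*/false);

    blocks[0] = vc;
    FLAC__stream_encoder_set_metadata(enc, blocks, 1);
}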
Lossless audio is unconditionally transparent: you won't have coding artifacts, and you won't have issues with the codec accidentally increasing the crest factor of the audio and creating clipping where there was none. If you have the bandwidth for it, why not?
So many people are using streaming in lieu of radio - a true broadcast medium. I think any high ground to argue efficiency was lost at that point. :) Making the streams use 10x the bandwidth? Meh. Maybe convincing video sites to provide an option to turn off the video would be a better use of complaint energy: it impacts more people than lossless streaming and wastes a lot more bandwidth :)
https://www.radio-browser.info/search?page=1&order=clickcoun...
Doesn't that depend on the hardware of "most people"? Even if you have a shit CPU, you probably have more than one core, so this will be at least a little bit helpful for those people, wouldn't it?
Edit: Just tried turning a ~5 minute .wav into .flac (without multithreading) on an Intel Celeron N4505 (worst CPU I have running atm) and it took around 20 seconds, FWIW
I've converted a bunch of sound packs (for music production) to flac and it really takes next to no time at all. I suppose those are quite short audio files, but there's a lot of them, 20 gigabytes in total (in flac).
Perhaps the person who wrote this improvement did have a use case for it, though :).
At least according to Wikipedia, it doesn't look like they've changed the algorithm too much in the meantime, so just about anything should be able to run it today.
Probably not, because they're unlikely to have enough stuff to export that it's relevant.
> Edit: Just tried turning a ~5 minute .wav into .flac (without multithreading) on an Intel Celeron N4505 (worst CPU I have running atm) and it took around 20 seconds, FWIW
Which is basically nothing. It takes more time than that to fix the track's tagging.
For example, I have a Raspberry Pi that is responsible for recording as soon as it powers up. Then I can press a button and it grabs the last 60 recorded minutes, which happen to be saved as .wav right now. I'm fine with that; the rest of my pipeline works with .wav.
But if my pipeline required flac, I would need to turn the wav into flac on the Raspberry Pi at that point, for 60 minutes of audio, and of course I'd like that to be as fast as possible so I can start editing right away.
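For the curious, the new 1.5.0 knob is a single call on the stream encoder. A minimal sketch, assuming the audio is already in memory as interleaved 16-bit stereo samples (the thread count and file name are made up, not the poster's actual setup):

#include <FLAC/stream_encoder.h>

/* Encode interleaved 16-bit stereo PCM with 4 worker threads. */
int encode_threaded(const FLAC__int32 *samples, unsigned samples_per_channel)
{
    FLAC__StreamEncoder *enc = FLAC__stream_encoder_new();
    FLAC__stream_encoder_set_channels(enc, 2);
    FLAC__stream_encoder_set_bits_per_sample(enc, 16);
    FLAC__stream_encoder_set_sample_rate(enc, 44100);
    FLAC__stream_encoder_set_compression_level(enc, 8);
    /* If libFLAC was built without threading this call fails and the
       encoder simply stays single-threaded - the output is the same. */
    FLAC__stream_encoder_set_num_threads(enc, 4);
    int ok = FLAC__stream_encoder_init_file(enc, "out.flac", NULL, NULL)
                 == FLAC__STREAM_ENCODER_INIT_STATUS_OK
             && FLAC__stream_encoder_process_interleaved(enc, samples, samples_per_channel)
             && FLAC__stream_encoder_finish(enc);
    FLAC__stream_encoder_delete(enc);
    return ok;
}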
This release will probably be worth a look to speed up the archiving/restoring jobs.
Currently there's flacCL, which compresses on the GPU, so this is one more option.
We (Overte, an open source VR environment) have a need for fast audio encoding, and sometimes audio encoding CPU time is the main bottleneck. For this purpose we support multiple codecs, and FLAC is actually of interest because it turns out that the niche of "compressing audio really fast but still at good quality" is a rare one.
We mainly use Opus, which is great, but it's fairly CPU-heavy, so there can be cases where one might want to sacrifice some bandwidth in exchange for less CPU time.
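In libFLAC terms that trade-off is mostly the compression level - a hedged snippet, assuming an encoder handle enc as in the sketches above (level 0 is the fastest preset; a few percent of extra bandwidth is the price for the lower CPU cost):

/* Fastest preset: far less CPU per frame, slightly larger output. */
FLAC__stream_encoder_set_compression_level(enc, 0);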
$ flac --version
flac 1.5.0
$ hyperfine -r5 "flac -f -8 a.wav a.flac" "flac -j16 -f -8 a.wav a.flac"
Benchmark 1: flac -f -8 a.wav a.flac
Time (mean ± σ): 13.148 s ± 0.194 s [User: 12.758 s, System: 0.361 s]
Range (min … max): 12.934 s … 13.318 s 5 runs
Benchmark 2: flac -j16 -f -8 a.wav a.flac
Time (mean ± σ): 2.404 s ± 0.012 s [User: 14.126 s, System: 1.355 s]
Range (min … max): 2.395 s … 2.425 s 5 runs
Summary
flac -j16 -f -8 a.wav a.flac ran
5.47 ± 0.09 times faster than flac -f -8 a.wav a.flac
Using the API you can set the blocksize, but there's no manual flush function. The only way to flush output is to call the "finish" function, which, as its name implies, marks the stream as finished.
I actually wrote my own FLAC encoder for this exact reason - https://github.com/jprjr/tflac - focused on latency over efficient compression.
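With stock libFLAC, the closest thing to a flush is ending the stream and immediately starting a new one - a hedged sketch, where write_cb and ctx stand in for the same write callback and client data used for the original init:

#include <FLAC/stream_encoder.h>

extern FLAC__StreamEncoderWriteStatus write_cb(const FLAC__StreamEncoder *enc,
        const FLAC__byte buffer[], size_t bytes, uint32_t samples,
        uint32_t current_frame, void *ctx);

/* Pseudo-flush: finish() drains the buffered frames and marks end-of-stream,
   then re-initializing emits a fresh fLaC header + STREAMINFO. That per-
   "flush" overhead is why a latency-focused encoder is attractive. */
void pseudo_flush(FLAC__StreamEncoder *enc, void *ctx)
{
    FLAC__stream_encoder_finish(enc);
    FLAC__stream_encoder_init_stream(enc, write_cb, NULL, NULL, NULL, ctx);
}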
For streaming you are better off with an optimised lossy codec.
In my case - I have an internet radio station available in a few different codec/bitrate combinations. I generate a chained Ogg FLAC stream so I have in-band metadata with lossless audio.
The stream gets piped into a process that encodes the lossy versions and updates metadata the correct way for each stream type (HLS with timed ID3, Icecast with chained Ogg Opus, Icecast with AAC + Shoutcast-style metadata).
If you are Spotify, that probably makes sense. But if you are someone with a homelab, you probably have enough bandwidth and then some, so streaming FLAC to a home theater (your own or your friend's) makes sense.
I'm not saying there's no use for Opus - just that if your goal is a high-quality listening experience, that ain't it.
https://www.ecstuff4u.com/2023/03/opus-vs-flac-difference-co...
For instance, I've got a Navidrome instance with all my music library accessible from anywhere in the world through my phone. However, there are situations where I may not have any internet connection, so I use the app on the phone (Tempo) to mark the songs I want downloaded and available even when offline. But my phone storage wouldn't hold even a quarter of my playlists if I went with the original encode of the songs (mostly lossless flacs), so I instead set it to download a transcoded Opus 128kbps version of it all and it fits on my phone with room to spare. It sounds pretty damn good through my admittedly average IEMs and I get the benefit of offline playback. Even if you somehow had the absolute best playback system connected to my phone and could tell the difference, it still beats having to rely on internet connectivity.
You could probably argue that these are handwritten letters, but the argument still stands.
You should not encode the files; just use Plex or Jellyfin and choose a lower bitrate when playing on your phone. Jellyfin uses AAC and Plexamp uses Opus.
A modern audio codec at a 320kbps bitrate is more than good enough.
Lossless is useful for recompressing stuff when new codecs come out, or for other purposes, without introducing artefacts - not really for listening (in before angry audiophiles attack me).
MP3 V0 should already be good enough, and is typically smaller.
That said, it does depend on the quality of the encoder; back in the day a lot of MP3 encoders were not very good, even at high quality settings. These days LAME is the de-facto standard and it's pretty good, but maybe some others aren't(?)
When you say you "can absolutely tell the difference", what score are you getting that proves you are doing better than guessing? And with what type of lossy encoding?
I truly appreciate you calling me a liar though; really adds to the conversation.
Don't want to sound snarky, but these are only "decent" (the succeeding MRx24 add an actually designed waveguide), and you'll never hear the sound of a DAC unless it's very badly implemented or has SNR troubles that show with ultra-sensitive IEMs.
Anyone who has read enough of the research in the matter will tell you that the codec itself is an improbable culprit. The way it's encoded and the test setup usually are "at fault" in this situation.
But for Vorbis, AAC and Opus with a decent encoder/bitrate, I doubt it - barring killer samples like castanets that show pre-echo issues inherent to some MDCT-based codecs.
Vorbis fooled me in the 160-192kbps territory... over a decade ago.
Opus is very hard to tell apart, even at 128kbps, tested relatively recently (3-5 years ago).
You're right, I shouldn't be making crappy posts and will try to do better in the future.
I went through it, and could tell mp3 (lame) from lossless, with high confidence. Many people I know and trust not to make shit up have done the same. Parent likely did the same as well.
However, I cannot ABX Opus from flac. It becomes impossible for me somewhere between 100 and 160kbit/s, depending on song, weather and luck.
mp3 is quite good for a codec from 1991. But it is deeply flawed and compares quite poorly to what we have today.
This pretty much means they don't need it. And even if they're all trained, there's still very much a "good enough" for many situations. I don't need to waste data on lossless if I'm streaming to my phone in a noisy environment, even with noise cancelling. Add to that the fact that 99% of Bluetooth headphones are lossy anyway, and you're left overoptimizing.
Sitting on a beanbag at home with a pair of Hifiman Susvaras or some other overpriced headset, that's maybe another story...
Perhaps somewhat counterintuitively, Bluetooth headphones are actually a use case where lossless audio helps the most, as you're avoiding double-lossy compression. SBC XQ isn't that bad, but it gets much worse when what you feed it is already lossy.
But why would I bother recompressing when the various media players in the house can deal with the FLAC files just fine? On a typical home wifi network, a track probably transfers in about a second.