Ripping CDs worth it?

No offense, but this is bullshit: the bits are transmitted over TCP/IP, which guarantees lossless, in-order delivery.
The computer then buffers the data, so there should be no additional jitter compared to using your computer to play a FLAC or just a CD.
Now, you can make the argument that a good CD transport provides a lower-jitter signal than a computer, but that's about it. If it's coming from your computer and going over USB to the DAC, there is literally no difference other than the quality of the recording.

I would say it's more about jitter and clocking as to why playing a local file makes some difference. You also have to go through whatever service for playback and can't always use your own player. Realistically, lossless streaming vs. a local file sounds the same (assuming the same master/copy), except for slight differences on super high-end setups.

Actually, it’s science. There is no such thing as perfect transmission. It isn’t physically possible. At the atomic level you have to push electrons around and other particles simply get in the way and knock a few off course. I do agree though, that the audible difference is slim-to-zero even on the highest-end systems.

I’ll agree the player could make a difference. As can exclusive mode.

I can literally unplug my network cable while “streaming” from Amazon Music, and it’ll keep playing for tens of seconds before it runs out of data.

All the audio call does is hand off a bunch of bytes, which the OS then deals with. As long as you’re ahead of what’s being played, it doesn’t matter what’s pushing data along the API; nothing in the player is dealing with clocks or timing.
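That buffer-ahead behaviour can be sketched in a few lines. This is a minimal illustration with made-up names (`PlaybackBuffer`, `feed`, `next_chunk`), not any real player's internals:

```python
import collections

# Hypothetical sketch: the network side feeds a buffer, and the audio API
# drains it. Once enough is queued, playback no longer depends on the network.
class PlaybackBuffer:
    def __init__(self):
        self.chunks = collections.deque()

    def feed(self, chunk: bytes):        # called as network data arrives
        self.chunks.append(chunk)

    def next_chunk(self):                # called by the audio API's callback
        return self.chunks.popleft() if self.chunks else None

buf = PlaybackBuffer()
for i in range(30):                      # say, ~30 seconds buffered ahead
    buf.feed(f"second-{i}".encode())

# "Unplug the cable": feed() is never called again, yet playback continues
# until the buffer runs dry.
played = []
while (chunk := buf.next_chunk()) is not None:
    played.append(chunk)

print(len(played))                       # all 30 buffered chunks still play
```

The audio callback never knows or cares whether the bytes originally came from a disk, a CD drive, or a socket.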

Isn’t the file encoded differently for streaming, though? Wouldn’t it be served in chunks and then reconstructed, whereas on a PC you would just buffer the entire file?

I’m not even going to have this argument. The data is sent in packets and checksummed; if a checksum doesn’t match, the packet gets re-sent. The application will never see anything but a continuous, in-order stream of bytes that matches what was sent.
If the underlying network can’t deliver the data correctly, then the application never sees it.
Yes, it’s theoretically possible for exactly the right set of bits to flip so that the checksum still matches; practically, it isn’t going to happen.
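To make the detect-and-resend point concrete, here is the one's-complement checksum algorithm from RFC 1071, which is what TCP's checksum uses (a sketch over raw payload bytes only; real TCP also covers a pseudo-header):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """One's-complement 16-bit checksum (RFC 1071), the algorithm TCP uses."""
    if len(data) % 2:                    # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                   # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

payload = b"a few audio bytes!"
sent_sum = inet_checksum(payload)

# Flip a single bit "in transit": the receiver's checksum no longer matches,
# so the segment is discarded and TCP retransmits it.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert inet_checksum(corrupted) != sent_sum

# The retransmission carries the original bytes; the app only ever sees these.
assert inet_checksum(payload) == sent_sum
```

A single-bit flip always changes this checksum, which is why the corrupted segment can never be mistaken for the original.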

TCP has error detection and retransmission: a faulty packet is re-requested when its checksum fails. (UDP only detects errors and drops the packet; it doesn’t retransmit.) There are instances where a checksum won’t catch the corruption, but they’re pretty rare.

Jinx.

It’s served in chunks, but in practice you’d be seconds ahead pulling over the network, and you just queue the chunks to the audio API, which is exactly what a player playing a local file does.
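The claim that chunked delivery reconstructs the exact same bytes is easy to demonstrate (an illustrative sketch; the chunk size and the stand-in "file" are made up):

```python
import hashlib
import io

original = bytes(range(256)) * 1000      # stand-in for a local FLAC file

def stream_in_chunks(data: bytes, chunk_size: int = 4096):
    """Yield the file the way a streaming service might: fixed-size chunks."""
    stream = io.BytesIO(data)
    while chunk := stream.read(chunk_size):
        yield chunk

# Reassembling the chunks yields a bit-identical copy of the original.
reassembled = b"".join(stream_in_chunks(original))
assert hashlib.sha256(reassembled).digest() == hashlib.sha256(original).digest()
```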

Underneath there is going to be even more buffering, both in the USB layer and most likely in the DAC’s USB interface chip.

You need to buy this, otherwise TCP and UDP can break and make your $25,000 speaker cables sound like $500 speaker cables. You don’t want that.

Infiniband Master Race!

I’m just wondering if, because it’s delivered in chunks, that’s what makes the sonic difference (assuming no). I’m pretty sure it has to do with clocking and jitter during playback, based on how the device handles it, but whatevs; not really trying to advocate that someone switch from lossless streaming to files.

The audio APIs basically take chunks either way; there should be no difference.

It wouldn’t. DSD is packed into PCM frames and reassembled inside the DAC (DoP) because the USB-IF couldn’t be bothered to add dedicated support to the USB standard. It happens all the time.

Gotcha, so it’s a final playback thing then

Packets is the better word. If a packet goes missing you get silence or that broken, warbly digital sound like on a cell phone. Streaming over Ethernet, though, works with error correction and a lot of buffering, so there’s plenty of time to re-send and reorder the data.

I get how networking works; I just didn’t know if Tidal did something different or out of the ordinary to optimize their service.

At the app API level they can do stuff, but in the lower layers they have to stick to the standards.

This may be conspiratorial, but I have noticed slight differences in waveforms when playing a small section of a song. It is minute though.

It would make sense if they started with more heavily compressed chunks while the buffer fills with less-compressed or uncompressed chunks.
(I should note that I have zero idea whether that is what happens.)

I think we’re just talking about different grain sizes here. Yes, there is checksumming, packet transmission, and error correction, and it’s all really, really damn good at what it does. But it’s still electronic transmission, subject to the laws of physics, and TCP/UDP happen at an “engineering level” that builds in some tolerance.

My point was a very simplistic way of saying there’s a lot going on at the atomic and sub-atomic level that impacts signal transmission, and the longer and more complex the chain gets, the more chances there are for things to go wrong. In fact, given the probabilistic statistical models of the behavior of the particles that “carry” a signal, errors are expected, which is why we had to develop those error correction methods in the first place. Ultimately, anything we humans do is going to be imperfect.

But that’s all losing the central point I was trying to make: it’s really, really hard to tell the difference between local files and streaming files sourced from the same master.
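For a sense of scale, here is a back-of-envelope estimate of an error slipping past both Ethernet's CRC-32 and TCP's 16-bit checksum. It rests on a big simplifying assumption (a corrupted frame looks uniformly random to both checks), so treat it as arithmetic, not a rigorous channel model:

```python
# Illustrative arithmetic only: assume corruption is uniformly random
# from the point of view of both integrity checks.
p_pass_crc32 = 2.0 ** -32    # chance random corruption matches the Ethernet CRC-32
p_pass_tcp16 = 2.0 ** -16    # ...and also matches TCP's 16-bit checksum
p_undetected = p_pass_crc32 * p_pass_tcp16

print(f"{p_undetected:.2e} per corrupted segment")   # ~3.55e-15
```

So errors are expected and do happen on the wire; what the layered checks do is make an *undetected* error vanishingly unlikely.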
