Digital audio myths in this day and age? Hopefully I can squash a few of them.

Everyone here has heard these, and some of you may even be keen believers. Ranging from “magical coldness” added to the sound (compared to fully analog gear), to “stair-step audio”, and by extension to “hi-res” audio being better because “it has more resolution, so it has to be better”, digital audio has been plagued by myths since its dawn in the last century.

Many people imagine audio like an image, which is made of pixels. The more pixels it has, the better it looks. So, it seems natural that “resolution” in digital audio works the same, right? Wrong.
To start with the conclusion: standard lossless CD quality (FLAC, WAV) at 16/44.1 covers more than any human is able to hear. It is a PERFECT representation of the original recording band-limited to 22 kHz, since sound waves don’t need an infinite number of points to be perfectly represented.
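If you want to see the math in action, here is a minimal Python/numpy sketch (my own illustration, not from any source in this thread) of how a finite run of samples pins down the one band-limited waveform that passes through them, via Whittaker-Shannon (sinc) interpolation:

```python
import numpy as np

# Sample a 17 kHz sine (well below the 22.05 kHz Nyquist limit) at 44.1 kHz.
fs = 44100
f = 17000
n = np.arange(400)                       # 400 consecutive samples
samples = np.sin(2 * np.pi * f * n / fs)

def reconstruct(t_sec):
    """Whittaker-Shannon (sinc) interpolation: the band-limited signal's
    value at an arbitrary time, computed purely from the samples."""
    return float(np.sum(samples * np.sinc(t_sec * fs - n)))

# Evaluate halfway BETWEEN two samples: the reconstruction still matches
# the original sine, up to tiny truncation error from the finite window.
t = 200.5 / fs
print(reconstruct(t), np.sin(2 * np.pi * f * t))
```

The two printed values agree closely; with an infinite sample window the match would be mathematically exact, which is the whole point of the sampling theorem.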

This has been known since around 1950, so why is it still not common knowledge today?

This is the most comprehensive and trustworthy single piece of information about digital audio that I know of. He will thoroughly walk you through how digital audio works. Anyone who wants to “know” rather than “believe” their very placebo-prone ears will want to watch this video.

To summarize a few key points:

  • Standard CD quality of 16/44.1 is a perfect representation of any recorded audio, band-limited to 22 kHz. I am in my twenties and can hear at most 17 kHz. I don’t go to concerts and don’t listen to music loudly, so other people my age may hear even less. This means that CD quality is already more than enough for pretty much anyone old enough to actually enjoy music.

  • “Hi-res” just raises that limit to 44 kHz or 88 kHz, THAT’S IT. Everything below 22 kHz is identical to standard CD lossless;

  • No lossless format is more lossless than another (duh). FLAC, WAV, etc. just differ in how much file size they waste;

  • Audio is not an image; you don’t hear more detail because it has more “resolution”. 44,100 samples per second are all you need to perfectly represent audio up to 22 kHz.

  • The purpose of bit depth (16, 24, 32) is to reduce the quantization error from conversion (the resulting noise floor). 16 bits already puts the noise floor at -96 dB, which means you would need to listen at a volume of at least 96 dB before you could hear any of it. Needless to say, your hearing is in immediate danger at that level, so 16 bits is all the bit depth anyone sane needs.
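The noise-floor figure in the last bullet is easy to sanity-check yourself. Here is a rough Python sketch (my own; it measures the RMS quantization error of a 16-bit full-scale sine, and the exact number depends slightly on how you measure):

```python
import numpy as np

# One second of a full-scale 1 kHz sine at 44.1 kHz
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)

# Round to 16-bit integer steps and back, as a 16-bit file would store it
x16 = np.round(x * 32767) / 32767

# Level of the quantization error relative to digital full scale
err = x - x16
noise_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
print(noise_db)   # about -101 dBFS, the same ballpark as the quoted -96 dB
```

The commonly quoted -96 dB comes from the 6.02 dB-per-bit dynamic-range rule of thumb; either way, the noise is absurdly far below the music.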

Practical test: the actual waveform difference between hi-res and standard CD, made in Audacity

Hi-res 24/88.2 (4x the file size)


16/44.1

The sample-by-sample difference between the two

Hear for yourself, this is the difference file. It’s the real deal; you can check the spectrum for yourself. It contains audio information, but nothing audible. Maybe your dog will be able to enjoy it :stuck_out_tongue:

Even if it were in the audible frequency range, the difference is so faint that it is almost non-existent. And this was with an actual hi-res file. The usual case is that “hi-res” files have absolutely no information above 22 kHz (they are upscales), which makes them a total scam and a waste of space.
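If you don’t want to fire up Audacity, the same null test is a few lines of numpy. This is a synthetic stand-in (made-up tones, not real files) just to show the method: subtract the band-limited version from the “hi-res” one and look at what is left.

```python
import numpy as np

fs = 88200                                    # "hi-res" sample rate
t = np.arange(fs) / fs                        # one second
audible = np.sin(2 * np.pi * 1000 * t)        # 1 kHz tone: audible content
ultra = 0.01 * np.sin(2 * np.pi * 30000 * t)  # 30 kHz tone: ultrasonic only

hires = audible + ultra  # stand-in for the 24/88.2 master
cd = audible             # stand-in for the same audio band-limited to 22 kHz

# The sample-by-sample difference contains ONLY the ultrasonic part
diff = hires - cd
spectrum = np.abs(np.fft.rfft(diff))
peak_hz = int(np.argmax(spectrum))            # bin width is 1 Hz here
print(peak_hz)                                # 30000: nothing audible remains
```

All of the difference energy sits at 30 kHz, far beyond human hearing, which is exactly what the Audacity experiment above showed with real files.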

The main point I am trying to prove here is that CD (16/44.1) is already perfect quality within the audible spectrum. Contrary to what analogue heads proclaim, having a finite number of samples doesn’t make the audio imperfect. Sound waves aren’t drawings; they can be represented exactly with mathematical functions.
And as you can see, doubling the number of samples does not affect the shared frequency range between the two at all. The only things hi-res can do are lower the noise floor (which is already stupidly low) and extend the frequency range. These formats were originally intended for studio use, but made their way into consumers’ hands through greedy marketing.

7 Likes

There is one occasion when the hi-res version of a track can be noticeably better than the standard version: when it happens to be mastered differently. There was a good post about the merits of hi-res audio on Audio Science Review:

High Resolution Audio: Does It Matter?

Also, here’s a post I think you’ll all find really amusing.
RCA Cable Burn In

4 Likes

The audio myth that personally annoys me the most is that USB cables have a notable effect on sound. I once met a guy who had a $200 3 ft USB cable because he wanted to reduce jitter. You’d be much better off buying a higher-end DAC with asynchronous USB support.

I really wish there were some well-funded organization that could scientifically put many of these debates to rest. But even if there were, there’s no amount of evidence that can convince people like this guy…
SOtM sNH-10G Audiophile Ethernet Switch Review

4 Likes

Agreed. I even saw a guy doing a double-blind test on YouTube, confirming he heard a difference between a standard USB cable and a $200 cable… poor him… The only reason you’d hear a difference is that your DAC is shit.

Also, burning in DACs… yeah right… how about good soldering instead :stuck_out_tongue:

Tbh some of the default USB cables… can be improved measurement-wise with the $50 AudioQuest Green… or the Cinnamon or whatever bullshit name they tag onto the actual cable… like my SMSL SU-8’s default cable… doesn’t sound the same as the green AudioQuest… but I can also use that AudioQuest with all my DACs… so $50… eh, who cares (in my position)…

I would say try it if you can… if you don’t like it… send it back to Amazon… no harm, no foul.

Pretty sure that’s because the SU-8’s cable was defective. An “AudioQuest” or “audiophile” or “pure audio” or “hi-res audio” or whatever USB cable should sound the same as an “Amazon Basics” USB cable; otherwise one of them is simply defective.

1 Like

There are even “audiophile” power cables available for hundreds of dollars on Amazon. Zeos actually bought one a while ago just to see what one felt like. I wonder what actually happened to it? :face_with_raised_eyebrow:

A fair point. Naturally, different masters will sound different, but I was talking more along the lines of the same source file: taking an actual “hi-res” file, downsampling it to 16/44.1, and then comparing the two.

Speaking of which, I underlined “actual” hi-res since most of the hi-res files I have seen are not even hi-res. Many of them don’t have a single bit of information above 22 kHz, according to simple spectral analysis. Lots of them are just upscales from standard quality sold for more money, like converting a 128 kbps MP3 to FLAC. But people still rave about how those sound night-and-day better than standard quality :joy:

I have learned the hard way that there are people in this hobby that are simply overwhelmed by their placebo :neutral_face:

It is of course the same with cables. Hearing is one of the most unreliable senses a human has. Your emotions, for example, will quickly change how good or bad something sounds. If you listen to a soundtrack from a movie you liked, you will immediately find it far more beautiful.
The opposite also happens to me all the time: after a tiring day at work, my music sounds worse than usual.

Trusting one’s ears seems very unproductive in the long run. That is why it has become a firm habit of mine to always analyze the spectrum and waveform of the music I add to my collection, with Spek and Audacity. A lot of the time, my findings were quite shocking, to say the least.

EDIT: updated my post with a practical test of an actual sample-by-sample difference between hi-res and CD quality. This method can be used to see the true difference between any two audio files.

1 Like

Yeah, I did understand the point you were making. My (tangentially related) point was that you could still have specific situations where it’s actually worth paying extra for the hi-res version. Adele: Live at the Royal Albert Hall is a good example, according to Amirm. But I admit it may be a moot point if the number of real-world examples is next to nothing.

Also, I’d be curious to know how that RCA cable manufacturer arrived at 175 hours of burn-in being optimal. It seems rather specific. XD

2 Likes

I’m a software engineer, not some uneducated loony. Initially I was skeptical of most audiophile things. I used to think DACs all sounded the same because they just converted digital to analogue. I used to think amps all sounded the same since they just made the music louder. I used to think MP3 sounded just like FLAC since the difference was supposedly inaudible. I used to think cables made no difference since they just carried the signal from A to B.
Gradually, as I upgraded my system, after using one of the aforementioned things for a few weeks and switching back, I would be blown away by how bad my music used to sound.
Until you actually try something, you don’t really know whether it will make a difference for you. Every system is different and everyone’s music is different.
A good example: I recently bought a $150 AudioQuest HDMI cable, and after using it for a week I can proudly say it made absolutely no difference when switching back. But, again, you don’t know until you try.
You can try to justify things with numbers, but for the same reason ECC memory exists, nothing in the world is perfect, nothing is without errors, and the goal for someone who is passionate about music is to get as close as they reasonably can.

1 Like

I think the true culprits of the degradation in sound quality are Noise and Jitter.

Somehow you sound like my pointing out pure fantasy and myths like this bothers you…
There is no way to romanticize this: digital audio is just a long string of numbers, but it is also complete as a format; there is no need to upgrade it further.

While it’s true that an audio chain is complicated, the amount of money worth spending on each part of it differs greatly.

  • Even cheap DACs these days have THD values of around 0.00001%. So if you can actually hear how bad a DAC is, that DAC belongs two decades in the past.
  • The amplifier is important, but its purpose is indeed nothing more than to amplify the sound. As long as it outputs a straight line for a straight-line input, everything is perfect. If I wanted a different sound signature, I would rather spend the money on different headphones or speakers.
  • By far the most important component in your audio chain is the output device. Here, the sky is literally the limit. If you have too much money and want to upgrade something, this is the place to dump at least 70% of it: https://www.upscaleaudio.com/products/focal-grand-utopia-em-evo-loudspeakers-pair
  • And then there are the other popular places to upgrade: cables and hi-res files (expensive AF for mostly useless upscales from 16/44.1)… which also make (very) arguably zero difference :wink:
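For scale, here is the 0.00001% THD figure from the first bullet converted into decibels (pure arithmetic, not a measurement of any particular DAC):

```python
import math

# 0.00001% total harmonic distortion, expressed as a plain ratio
thd_ratio = 0.00001 / 100        # = 1e-7
thd_db = 20 * math.log10(thd_ratio)
print(thd_db)                    # -140.0 dB relative to the signal
```

That is more than 40 dB below even the 16-bit noise floor, i.e. nowhere near audible.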

And here comes the purpose of all of this: which of these components would you be willing to spend your hard-earned money on? In my case, I would spend the bare minimum on cables and be happy forever with standard lossless, but if I had $100,000-200,000 to spend on audio, I would seriously eye those speakers :heart_eyes:

As pointed out in the video and in my post, a noise floor is always present in digital or analog audio. But even at just 16-bit depth, the noise floor sits at -96 dB, which means you need at least 96 dB of volume (ear-bursting loud) for it to even begin to appear.

The noise floor of the digital format is not a problem at all; there is no need to make one out of it. On vinyl it would be far higher, though.
Recording noise, however, is an ever-present problem, and the format won’t help with that if the noise is already baked into the recording.

I am not familiar with jitter, though. And I can’t say I detect anything wrong with even my best recordings…

Sorry, I was referring to EMF noise introduced through equipment, not the recording. Jitter reduction is where I think most DAC manufacturers differentiate themselves. There’s a handful of OEM DAC chips that most brands use; some implement proprietary clocking to further reduce jitter.

I guess I will know when I hear it, or when I stop hearing it, if it’s present in my current equipment. And even though DACs are a pretty self-sufficient part of the audio chain, there is still some sense in upgrading here. But considering what I know, there won’t be any audible benefit from cables.

1 Like

While I agree with you about higher-than-CD-quality audio and cables/USB cables etc., I disagree that amps and DACs only sound different if they are over two decades old. There are clearly audible differences between them. It’s not just whether a DAC stays below a certain amount of distortion, or whether an amp has enough power to make your headphones loud enough. I could easily pick them out with my eyes closed; the difference is audible. I have 4 amp/DAC combo units, plus a desktop DAC connected to a desktop amp. I also have 2 phones and a PC that can output audio. Some of them sound bassier, some sound more clinical. There are clear eyes-closed A/B differences when switching between them, even though they can all make my headphones “loud enough”.

2 Likes

I never said amps don’t sound different (DACs are another matter). I did say that I personally think an amp should not color the sound in any way, since that is the job of the output device.
Pretty much no output device is perfect, since they are built around a physical membrane that always has imperfections. But an amp can be made practically perfect quite easily, so why would I want two separate things coloring the sound at the same time? I would certainly lose track of what sounds like what.

However, even if you can argue that there is still a point to amps coloring the sound in certain ways (like tubes), there is no place for a DAC that colors the sound. At its core, a DAC is a computing device whose only purpose is to transform the string of numbers into the UNIQUE SOLUTION that is the output analog signal. If it sounds different from a perfectly flat reference DAC, then it serves no real purpose. I wouldn’t even know what I am listening to.

For example, a 1 kHz sine wave as digital input always has to come out as a 1 kHz sine wave, like in the experiment in the video I provided.

1 Like

It has nothing to do with the ability to hear frequencies higher than 20kHz and everything to do with distortion.

I would highly recommend reading up on Aliasing Distortion (AD), Intermodulation Distortion (IMD) and Aliasing Intermodulation Distortion (AID).

2 Likes

I know about that. It’s a problem with the band-limiting filters of older DACs, not with the Nyquist theorem itself. Current DACs remove this problem completely by oversampling during conversion; it was solved literally 20 years ago. There is no need for 4x or 8x larger files to get rid of a non-existent problem.
The Nyquist theorem defines quite clearly the sample rate required to perfectly reproduce a band-limited signal, and that will not change just because some equipment has issues.
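To make the aliasing point concrete, here is a tiny numpy sketch (my own, not from the thread): sampling a 30 kHz tone at 44.1 kHz with no band-limiting filter folds it down into the audible range, which is exactly what the anti-aliasing/oversampling stage in a modern converter prevents.

```python
import numpy as np

fs = 44100
n = np.arange(fs)                        # one second of samples
# A 30 kHz tone sampled at 44.1 kHz with NO anti-aliasing filter:
x = np.sin(2 * np.pi * 30000 * n / fs)

# Its spectrum shows a single peak at 44100 - 30000 = 14100 Hz,
# an audible "ghost" tone that was never in the original signal.
spectrum = np.abs(np.fft.rfft(x))
alias_hz = int(np.argmax(spectrum))      # bin width is 1 Hz here
print(alias_hz)                          # 14100
```

The fix is to filter out everything above Nyquist before (or while) sampling, which every modern converter does, so the problem never reaches your files.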

So would you rather spend a lot on “hi-res” (or just up-scales) files or just get a better DAC and not worry about things you may or may not be able to hear anyway?

OFF-TOPIC: But man, the answers I got in this thread keep hovering around the same core issue: people don’t understand how digital audio works, and they refuse to learn. And when I really try to spread the theory behind it, everyone acts like I am public enemy number one. The purpose of all this is to educate people who don’t know better, but if you really want to spend extra money and a lot more storage space on this, then don’t let me stop you… :disappointed:

2 Likes