If there is any sort of DSP, then there is always a filter applied, and filtering implies potential phase issues.
That’s why linear phase exists.
Basically, it’s a trade-off between phase shift and pre-ringing. Minimum-phase filters don’t have pre-ringing, but they shift the phase.
Linear-phase filters don’t change the phase, but they produce pre-ringing.
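If anyone wants to see the trade-off instead of taking it on faith, here’s a minimal Python sketch (assuming NumPy and SciPy are installed; the filter length and cutoff are just illustrative values I picked) that builds a linear-phase FIR lowpass, converts it to a minimum-phase one with roughly the same magnitude response, and checks where the impulse-response peak lands:

```python
import numpy as np
from scipy.signal import firwin, minimum_phase

# Linear-phase (Type I) FIR lowpass: the taps are symmetric, so half the
# impulse response -- the "pre-ringing" -- arrives BEFORE the main peak.
lin = firwin(127, 0.25)            # 127 taps, cutoff at 0.25 * Nyquist
peak_lin = int(np.argmax(np.abs(lin)))

# Minimum-phase filter with (approximately) the same magnitude response:
# the energy is pushed to the front, so nothing rings before the peak,
# at the cost of a frequency-dependent phase shift.
minp = minimum_phase(lin)
peak_min = int(np.argmax(np.abs(minp)))

print(peak_lin)  # 63: dead center of the 127-tap filter
print(peak_min)  # near 0: the energy is front-loaded, no pre-ringing
```

Plot `lin` and `minp` and the difference is obvious: the linear-phase response is a symmetric sinc-like shape ringing on both sides of the peak, while the minimum-phase one only rings afterwards.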
People think minimum-phase is evil because the phase shift supposedly destroys the song. But those phase changes are applied to every channel of the song equally.
Linear phase should only be needed if we want to change the channels differently, and no one does that for something like a lowpass filter.
IDK, maybe I’m wrong, but minimum-phase should be the default for all DACs and lowpass filters.
Neutron’s EQ is IIR, which means minimum-phase. There’s no problem as long as the EQ changes all the channels equally.
To bring this back on-topic for a bit, I hope everyone here realizes that listening to 6 samples and making 6 choices doesn’t come close to proving anything conclusive about our ability to hear the difference between 320 kbps MP3 and the lossless original. We’d need at least 20 trials (say, 4 music samples, 5 tests each), and ideally 100+, to get a reliable result.
Something like what is done here with a corrected version of an (originally bullshit) test published by Tidal: http://abx.digitalfeed.net
I did not pick a single one of the 128 kbps samples; the 2 I got wrong were still 320 kbps.
Can’t be that sad if it’s true. But I did notice the “bad” ones through similar kind of thinking: this one kinda sounds nicer… so it can’t be this one. It’s among the “cleaner”, more similar ones.
But I also thought Coldplay would be as “easy” as the rest, having listened to their music before.
Aaaand on the test… I could not spot the difference between 2 similar tracks, just could not.
Perry had a similar thing going on. One was different from the other two, so it can’t be that one. The 2 were so, so, so similar and clean.