OTOH if you don’t understand very well what hypothesis you’re testing, what each testing method can and can’t prove etc., you can just as easily get fooled by tests. And that tends to be even worse, because then you’re running around saying “I’ve got science on my side” and refusing to hear any counter-arguments, when you really don’t have science on your side, you’ve just misused or misunderstood the science. Kinda like the “vaccines cause autism” crowd and their bogus and eventually discredited Wakefield study. Or kinda like vegans. (Had to say it, sorry.)
For example, in the scenario above I have to wonder how wide a variety of music they used in that blind test, because music masks distortion and different tracks have different masking properties. Or even more to the point: did they use specially chosen “worst offender” tracks that are especially likely to reveal distortion when it’s present? And then I could go on to point out that just because 2% “across the board” THD wasn’t audible in one blind test, it doesn’t mean you can take that exact number and apply it directly to the specs of any amp you’re looking to buy. Different amps have different distortion profiles, some more audible than others. It’s not always linear.
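To make that last point concrete, here’s a quick Python sketch using the standard THD formula (ratio of total harmonic RMS to the fundamental). The amp names and harmonic amplitudes are made up for illustration: two hypothetical amps measure an identical 2% THD, but one concentrates its distortion in benign low-order harmonics while the other dumps the same energy into high-order harmonics, which the ear masks far less effectively.

```python
import math

def thd(fundamental, harmonics):
    """THD: total harmonic RMS divided by the fundamental amplitude."""
    return math.sqrt(sum(a * a for a in harmonics)) / fundamental

# Hypothetical harmonic amplitudes relative to a fundamental of 1.0.
# List index 0 is the 2nd harmonic, index 1 the 3rd, and so on.

# “Amp A”: mostly 2nd/3rd harmonic, the kind the ear tolerates well.
amp_a = [0.016, 0.012, 0.0, 0.0, 0.0, 0.0, 0.0]

# “Amp B”: same total harmonic energy pushed up to the 7th/8th
# harmonics, which the fundamental masks much less.
amp_b = [0.0, 0.0, 0.0, 0.0, 0.0, 0.012, 0.016]

print(f"Amp A THD: {thd(1.0, amp_a):.2%}")
print(f"Amp B THD: {thd(1.0, amp_b):.2%}")
```

Both print 2.00%, so the spec sheet can’t tell them apart, even though they’d likely fare very differently in a listening test.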
Objectivism is hard. There’s always that one more variable you haven’t thought of that can invalidate most or all of your conclusions. That’s why science can only be done by multiple independent teams, checking and re-checking each other’s work.