This hobby is subjective. No one is going to agree with any reviewer 100% of the time. There are people out there who bought the Pontiac Aztek, after all; they looked at that monstrosity and said “that’s the vehicle for me.” Reviewers are going to rank however they are going to rank.
Ranking is a silly exercise in itself when you have to decide what metrics and weights to use: Comfort, Tone, Technicalities, Build Quality, Looks, Bass, Treble, Mids, Price, Value, Preference. How much weight is too much? The rest of the reviewer’s day can affect what they say about a product, or how much time they spend with it: don’t expect a lengthy review of something they don’t like. How hot and humid was it that day, how congested were they, how tired were their ears, were they having a good day? There are various subconscious biases at play.
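To see how arbitrary the weighting alone is, here is a minimal sketch; the IEM names, scores, and weights below are all invented purely for illustration, not taken from any real review:

```python
# Hypothetical scores for two imaginary IEMs (all numbers made up for illustration).
scores = {
    "IEM A": {"tone": 9, "technicalities": 6, "comfort": 8},
    "IEM B": {"tone": 7, "technicalities": 9, "comfort": 6},
}

def rank(weights):
    # Weighted average of each IEM's scores under a given set of weights.
    totals = {
        name: sum(weights[k] * v for k, v in s.items()) / sum(weights.values())
        for name, s in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# A tone-first reviewer and a technicalities-first reviewer flip the order
# of the exact same two sets, with the exact same scores.
print(rank({"tone": 3, "technicalities": 1, "comfort": 1}))  # IEM A on top (8.2 vs 7.2)
print(rank({"tone": 1, "technicalities": 3, "comfort": 1}))  # IEM B on top (8.0 vs 7.0)
```

Same ears, same impressions, different letter grades, just from choosing how much each category counts.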
A reviewer isn’t going to spend all month pulling out 100 IEMs just to precisely place the right letter grade next to their review. They would have to reflow their entire ranking every other month if that were the case.
Due to unit variation, reviewers may not get the same unit that we eventually get. Some might get great channel balance, for example. I would think manufacturer-provided review samples are binned a tad higher than any random pick off the shelf.
Ultimately, don’t judge a reviewer on how they rank something; instead, judge their review by what they say about it, how they qualify and assess, the meat and potatoes of what they are trying to get across based on their unique perceptions. Don’t get upset because their two-character summation wasn’t what you think it should be.
Make your own list:
I happen to think the NiceHCK EB2S (flathead earbud) is better than a pair of Vidos, even though I had to reglue the brass accent that fell off my EB2S. But perhaps I will think differently if I drill a bass port into the Vidos, micropore-tape one side, and toss on a Y2 filter.