I’ve been working in my friend’s studio for about two months now. He listens to everything (mostly ’70s stuff: Bowie, art rock, international music, plus modern singer-songwriter material) through a Bose SoundLink Mini II via Spotify. The first few days there, I was pleasantly surprised by the sound quality. I’m roughly familiar with the general “Bose style,” which seems to deviate from an ideal/flat frequency response to make vocals pop a bit, so I wasn’t expecting much, but his music, mostly unfamiliar to me, sounded fresh…
…But then I started playing my own music (mostly ’90s alt-rock, 320 kbps MP3s) through it, and the difference between his system and the numerous others I’d heard those songs on really began to bother me. Songs sound much worse on the SoundLink Mini than they do even in my Toyota Camry. Yes, vocals pop a little more, but any instrument in the vocal range comes out “choked.” It’s just awful what these speakers do to my familiar songs. Guitar chords don’t ring out, drums and basslines just kind of disappear, and four-chord progressions don’t even seem to resolve properly because so many partials are missing… Of course the space isn’t doing it any favors: 3000 square feet, 12-foot ceilings, mostly concrete, with lots of random art objects and tools diffracting sound but little “soft” attenuation.
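Just to put a rough number on how live that space is, here’s a back-of-envelope Sabine estimate. Every figure below is assumed (a square footprint, textbook absorption coefficients), so treat it as a sanity check, not a measurement:

```python
# Sabine reverb-time estimate: RT60 = 0.049 * V / A (imperial units,
# V in cubic feet, A = total absorption in sabins). All inputs are
# guesses based on my description of the room.
import math

floor_ft2 = 3000.0                      # floor area from the post
ceiling_ft = 12.0                       # ceiling height from the post
side_ft = math.sqrt(floor_ft2)          # pretend the floor is square
volume_ft3 = floor_ft2 * ceiling_ft     # ~36,000 ft^3

# Floor + ceiling + four walls.
surface_ft2 = 2 * floor_ft2 + 4 * side_ft * ceiling_ft

# Average absorption coefficients are invented: ~0.02 for bare
# concrete, somewhat higher once scattered objects are counted.
for label, alpha in [("bare concrete", 0.02), ("concrete + clutter", 0.06)]:
    sabins = surface_ft2 * alpha
    rt60 = 0.049 * volume_ft3 / sabins
    print(f"{label}: RT60 ~ {rt60:.1f} s")
```

That lands somewhere between roughly 3 and 10 seconds of reverb tail, which is enormous, so anything the speaker does to quiet passages gets magnified by the room.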
Can anyone describe, technically, what Bose is doing to my beloved songs? Are there notch filters strewn about? Is there some kind of multiband compressor? Are they doing advanced DSP tricks? How the heck are they processing the signal?! Transients/peaks seem flattened or erased somehow. It’s shocking… it’s genuinely straining to listen to… so maybe if I knew technically what was happening, it would give me something to think about instead of stressing me out so much. It seems like the presence of vocals ducks everything else somehow?! It also seems like near-quiet passages get “gated out,” so you can’t hear notes trail off into the reverb.
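To make my guess concrete, here’s a toy sketch of the kind of chain I’m imagining: a vocal-band emphasis, crude per-band compression, sidechain ducking keyed off the vocal band, and a downward expander on near-silence. To be clear, this is not Bose’s actual DSP; every crossover, threshold, and ratio here is invented:

```python
# Toy "small-speaker voicing" chain in NumPy/SciPy. Purely speculative:
# it just makes the artifacts I described (flattened transients, ducked
# bass, chopped reverb tails) reproducible, not what Bose actually ships.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # sample rate, Hz

def envelope(x, attack_ms=5.0, release_ms=80.0):
    """One-pole peak follower: fast attack, slower release."""
    a = np.exp(-1.0 / (FS * attack_ms / 1000.0))
    r = np.exp(-1.0 / (FS * release_ms / 1000.0))
    env = np.empty_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coef = a if s > level else r
        level = coef * level + (1.0 - coef) * s
        env[i] = level
    return env

def compress(x, threshold=0.2, ratio=4.0):
    """Downward compression above threshold (this flattens transients)."""
    env = envelope(x)
    gain = np.ones_like(x)
    over = env > threshold
    gain[over] = (threshold + (env[over] - threshold) / ratio) / env[over]
    return x * gain

def expand(x, threshold=0.02):
    """Downward expander: pushes near-silence toward zero, eating tails."""
    env = envelope(x, attack_ms=1.0, release_ms=200.0)
    return x * np.clip(env / threshold, 0.0, 1.0) ** 2

def toy_voicing(x):
    # Split into lows / vocal mids / highs with Butterworth crossovers.
    lo = sosfilt(butter(4, 250, btype="lowpass", fs=FS, output="sos"), x)
    mid = sosfilt(butter(4, [250, 4000], btype="bandpass", fs=FS,
                         output="sos"), x)
    hi = sosfilt(butter(4, 4000, btype="highpass", fs=FS, output="sos"), x)

    mid = compress(mid, threshold=0.15, ratio=6.0) * 1.5  # vocals "pop"
    lo = compress(lo, threshold=0.25, ratio=8.0)          # basslines shrink
    hi = compress(hi, threshold=0.25, ratio=8.0)

    # Sidechain ducking: when the vocal band is hot, pull the rest down.
    duck = 1.0 - 0.5 * np.clip(envelope(mid) / 0.2, 0.0, 1.0)
    return expand(mid + duck * (lo + hi))
```

A chain even this crude reproduces the symptoms above: squashed peaks, bass and cymbals pumping against the vocal, and decays that cut off instead of trailing away. Is something in this family what’s actually happening?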
What the heck is going on?!