miuywu
New Head-Fier
I've been passively absorbing music listening and hi-fi gear info for just over a few months: YouTube reviews playing in the background, discussions, and reading reviews and the comments on them.
Some enthusiasts like to reference science to justify their experience; frequency response graphs are the most common. Source and amp electrical measurements come up less often but are talked about as well.
In contrast to this trend of using measurements and math to make the 'best' (at least on paper) decision, the more experienced listeners fall back to a word that means nothing to new people:
***Synergy***
The whole chain creates the sound. Makes sense, but the way the word synergy is used makes it a mystery how to achieve it reliably without much experience.
It seems like the goal is to find a chain that matches your taste in:
- sound signature
- sound stage
- environment / airy-ness
- natural sounding - timbre
- resolution - microdetails / texture
- separation - no clue how this is quantified atm
Because each of those stages is physically different, I am keen to understand why similar types of distortion can arise at all 3 stages. I want to know whether there's a best use case for each component, and why. I'd love to see the math / science.
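On the math / science side, one concrete place to start is how distortion is actually quantified: feed a pure sine tone through a stage (DAC, amp, or driver) and measure how much energy shows up at harmonics of the tone that weren't in the input. Below is a minimal sketch in Python with NumPy; `thd_percent` and the 1 kHz example are my own illustration of the idea, not any official measurement procedure.

```python
import numpy as np

def thd_percent(signal, fs, f0):
    """Total harmonic distortion of a sine test tone, as a percentage.

    Takes the FFT of the (windowed) signal, reads the amplitude at the
    fundamental f0 and at its harmonics 2*f0, 3*f0, ..., and returns
    sqrt(sum(harmonic amplitudes squared)) / fundamental amplitude.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)

    def amp_at(f):
        # amplitude at the FFT bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = amp_at(f0)
    harmonics = [amp_at(k * f0) for k in range(2, 6) if k * f0 < fs / 2]
    return 100 * np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# a clean 1 kHz tone, and the same tone with 1% 2nd-harmonic distortion added
fs, f0 = 48_000, 1_000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * f0 * t)
dirty = clean + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)

print(round(thd_percent(clean, fs, f0), 3))  # ≈ 0.0
print(round(thd_percent(dirty, fs, f0), 3))  # ≈ 1.0
```

The same number means different things at different stages: a DAC or amp might measure in the 0.001% range while a driver sits orders of magnitude higher, which is part of why the transducer usually dominates the audible character of a chain.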
My guesses for the best use case of each (the middle and last groups are my reading of which stage they belong to) are:
- DAC creates the base, the map of a song: resolution, timbre, separation, environment
- Amp: sound stage, sound signature
- Headphone / IEM:
  - timbre - you don't want a nice signal turned into an unrealistic / unpleasant sound
  - sound signature - taste
  - resolution - you don't want to bottleneck your chain with slow drivers (or maybe you do)
Maybe this is all different for speakers. My only hi-fi experience so far is with IEMs and headphones. For example, a sound stage that's decent on speakers can sound too small on headphones.