Generally, there are two types of degradation with compressed audio. The first is high-end roll-off. At lower data rates, the encoder applies a low-pass filter, because ultra-high frequencies eat up bandwidth that's better spent elsewhere. The higher the data rate, the higher the filter's cutoff, until at a certain point you're getting every frequency humans can hear.

The other kind of degradation is artifacting. Certain sounds are difficult to encode: massed voices, applause, complex orchestral string textures, etc. If there isn't enough bandwidth to render these sounds, the encoder produces a digital splat or gurgle. At very low rates, the distortion is easy to hear; as the rate goes up, artifacts become less and less frequent until the track achieves complete transparency. That happens at different bitrates with different codecs.
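If you want to actually see the roll-off rather than guess at it, here's a rough Python sketch that estimates the cutoff from the average spectrum of a decoded file. The filename and the -60 dB threshold are just my assumptions; decode the lossy file to WAV first (ffmpeg can do that).

```python
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("decoded.wav")  # placeholder path, decode your file first
samples = samples.astype(np.float64)
if samples.ndim > 1:
    samples = samples.mean(axis=1)  # mix stereo down to mono

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

# dB relative to the loudest bin; treat anything within 60 dB of the peak as real content
db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
print(f"Approximate cutoff: {freqs[db > -60.0].max() / 1000:.1f} kHz")
```

A 128 kbps MP3 will typically report a cutoff somewhere around 16 kHz, while a high-rate encode should run out near the top of the audible range.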
Generally, I've found that people who claim to hear differences in soundstage or clarity, as opposed to outright artifacting, haven't tested blind. I think those two descriptions are pretty safe to chalk up to bias.
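If you want to test yourself honestly, an ABX trial is easy to rig up. Here's a minimal sketch; the afplay command is an assumption (it's the macOS built-in player), so swap in whatever command-line player you have.

```python
import random
import subprocess

def play(path):
    subprocess.run(["afplay", path])  # assumed player command, substitute your own

def abx(file_a, file_b, trials=10):
    correct = 0
    for i in range(trials):
        x = random.choice([file_a, file_b])  # X is secretly A or B
        print(f"Trial {i + 1}: playing A, B, then X")
        for clip in (file_a, file_b, x):
            play(clip)
        guess = input("Is X a or b? ").strip().lower()
        if (guess == "a") == (x == file_a):
            correct += 1
    print(f"{correct}/{trials} correct")  # around 5/10 means you're guessing

abx("original.wav", "encoded.wav")  # hypothetical filenames
```

If you can't beat chance over a decent number of trials, the difference you thought you heard was bias.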
With downloads from file-sharing sites, it's impossible to know the data rate or encoder. Some people are idiots who take 128 kbps MP3s, transcode them to FLAC, and upload that; the file is lossless now, but the damage is baked in. The download stores state what they use: Apple is AAC 256 VBR and Amazon is MP3 (LAME) 256 VBR, I believe.
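For a downloaded file you can at least read what the container claims about itself, though metadata won't expose a transcode; for that, check the spectrum as in the first sketch (a FLAC that brick-walls around 16 kHz almost certainly started life as a ~128 kbps MP3). Quick sketch using mutagen, with a placeholder filename:

```python
from mutagen import File  # pip install mutagen

track = File("download.m4a")  # placeholder path
print(track.info.bitrate // 1000, "kbps,", track.info.sample_rate, "Hz")
```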
Hope this helps!