[1] If you guys are curious about the differences between theory/simulations and reality, and you end up doing some research, you are bound to find information that confirms the existence of that invisible barrier between mathematically ideal things that you can design on a piece of paper (for example, sphere radius 2cm) and what we can build in real life.
[2] The important takeaway that you need to keep in mind is that, unlike mathematical models, real machines do not exhibit infinite precision.
[2a] For our discussion, this is where our perception of detail comes in: how close are those compressions and rarefactions at the transducer output to an ideal sine wave.
[3] I guess I would need to do more research on the sources of error in the sine wave. The only thing I know for a fact is that the sine waves that are output from the audio chain are not ideal (since no machine can be made to perfect mathematical precision), and I would be interested in learning what that could be...
1. Yes, we are curious about the differences between theory and reality (the practical implementation of the theory) and we have done some research on the matter. For example, the Nyquist/Shannon Sampling Theorem states that any given sine wave (or combination of sine waves) can be reproduced perfectly, provided the sample rate is more than double the highest audio frequency we want to reproduce. It's impractical to implement the theory perfectly in reality, although we can get very close. On the analogue side it's also possible to get extremely close to the "theory" in practice, although the big difference (compared to digital theory) is that analogue theory does not indicate/predict perfect reproduction to start with, provided you take into account all the pertinent theory/theories of course, such as Johnson/Nyquist Noise for example.
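The sampling theorem claim above can be sketched numerically. This is purely an illustrative Python snippet of my own construction (not from any audio library): it samples a 1 kHz tone at 44.1 kHz and then estimates the signal's value *between* two samples using the textbook Whittaker/Shannon sinc interpolation. The only error left is from truncating the theoretically infinite sinc sum to 200 samples, and it is tiny:

```python
import numpy as np

fs = 44100.0          # CD sample rate (Hz); Nyquist frequency is fs/2 = 22050 Hz
f = 1000.0            # test tone, well below Nyquist
n = np.arange(200)    # 200 samples of the tone
samples = np.sin(2 * np.pi * f * n / fs)

# Whittaker/Shannon reconstruction at an instant halfway between two samples
# (np.sinc is the normalised sinc, sin(pi*x)/(pi*x)):
t = 100.5 / fs
reconstructed = np.sum(samples * np.sinc(fs * t - n))
exact = np.sin(2 * np.pi * f * t)

print(abs(reconstructed - exact))  # small residual, purely from truncating the sum
```

With more samples either side of the reconstruction point, the residual shrinks further; in the limit the theorem's "perfect reproduction" holds exactly, which is precisely the part that's impractical to implement.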
2. This statement is certainly true on the analogue side of audio but is not really true on the digital side, as the problem of infinite precision is bypassed in the digital/binary realm (as it is with the telegraph system). However, what we digitise is an analogue signal and what the digital audio process reproduces is an analogue signal, and therefore the ultimate limits of a digital audio system are determined by the theories/practicalities of analogue signals. Of course, we cannot hear/perceive an analogue signal; it has to be further converted into an acoustic signal, and this is where the theory/theories predict the greatest signal loss/distortion. In a competently designed audio system, it's therefore the transducers which determine the ultimate limits of precision/accuracy.
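The "bypassing infinite precision" point (and the telegraph analogy) can be illustrated with a toy sketch. The voltage levels and noise figures here are invented for the illustration: the key idea is that each digital stage re-decides the bit against a threshold, so analogue noise is wiped out at every hop instead of accumulating the way it does in a purely analogue chain.

```python
import random

random.seed(0)

def noisy_wire(voltage, noise=0.3):
    # Hypothetical analogue stage: adds up to +/-0.3 V of noise
    return voltage + random.uniform(-noise, noise)

def regenerate(voltage, threshold=0.5):
    # Digital regeneration: re-decide the bit against a threshold
    return 1.0 if voltage > threshold else 0.0

bit = 1.0
for _ in range(1000):                 # 1000 hops through noisy stages
    bit = regenerate(noisy_wire(bit))

print(bit)  # still 1.0: per-hop noise never reaches the decision threshold
```

An analogue signal passed through the same 1000 noisy stages without regeneration would accumulate noise at every hop; the binary decision is what makes "infinite precision" unnecessary in the digital realm.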
2a. This statement represents the hole in your reasoning. This would be "where our perception of detail comes in" IF our hearing/perception had infinite precision, but of course it doesn't. The issue/question is therefore: is the precision of the system less than or greater than the precision of our hearing/perception? Splitting an audio system into its constituent parts, the answer is that the precision of the digital domain and of the conversion to analogue is many times greater than the precision of our hearing (in some respects 1,000 times more precise), and even after amplification it would typically be at least 10 times greater, even with modestly priced equipment. However, this is not the case with transducers, and many transducers aren't designed for ultimate precision in the first place; they're typically designed to produce a subjectively pleasing output rather than an accurate one.
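To put a rough number on that precision comparison: the standard SNR formula for an ideal N-bit quantizer with a full-scale sine is 6.02N + 1.76 dB, and a 60 dB advantage corresponds to a factor of 1,000 in amplitude, which is the sort of margin referred to above. A quick sketch (the function name is mine, the formula is the textbook one):

```python
def dynamic_range_db(bits):
    # Theoretical SNR of an ideal N-bit quantizer for a full-scale sine:
    # the standard 6.02*N + 1.76 dB quantization-noise formula.
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
# 16-bit: 98.1 dB, 24-bit: 146.2 dB
```

Compare those figures with the transducer end of the chain, where even excellent loudspeakers struggle to keep distortion 60 dB below the signal, and it's clear where the precision bottleneck sits.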
3. Taking the above into account, "researching the sources of error in the sine wave" isn't really going to help, because "infinite precision" is irrelevant. The obvious areas to research are not in the digital realm but in the transducers and in perception itself.
A digital file does not have to be bit perfect to prevent a computer from crashing. There are data correction schemes so that bit errors cause no perceptible difference in audio/video quality and the player can continue reading data. Another example is a corrupted image file: the computer won't suddenly crash when you open it; you'll just see portions of the image look garbled.
In the digital domain we don't just have a digital data file, though; we also have digital code containing algorithms/instructions, more than a billion transistors to execute those instructions, and we've got to move all those bits of data (both the data bits and the bits representing the instructions) to/from various different locations and keep track of them all with other bits of information. It's only the initial reading of the data file itself from long-term storage to RAM that has a data correction scheme, but of course data correction is itself a process (a series of "instructions") that requires the movement and processing of bits of data which, if not perfect, could cause the error-correction code itself to crash. An "instruction" is a cycle that requires fetching the instruction from memory, decoding it, reading the address (of the data) from memory and then executing the instruction. So, while a corrupted image file itself probably wouldn't cause a crash (only a corrupted/garbled image), a single bit error anywhere in any of the instruction cycles carried out on each of those image file bits probably would, and reportedly the latest generation iPhone is capable of 600 billion instruction cycles per second!
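The asymmetry between a bit error in *data* and a bit error in an *instruction* can be demonstrated with a toy fetch-decode-execute machine. This is purely illustrative (the opcodes are invented for the sketch and bear no resemblance to a real CPU's instruction set): flipping a bit in a data byte merely changes the result, while flipping the same bit in an opcode makes the machine hit an instruction it cannot decode.

```python
# Hypothetical opcodes for this sketch only
PUSH, ADD, HALT = 0x01, 0x02, 0xFF

def run(program):
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]; pc += 1          # fetch
        if op == PUSH:                     # decode + execute
            stack.append(program[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == HALT:
            return stack[-1]
        else:
            raise RuntimeError(f"illegal opcode {op:#x}: machine crashes")

prog = [PUSH, 40, PUSH, 2, ADD, HALT]
print(run(prog))                               # 42

corrupt_data = list(prog)
corrupt_data[1] ^= 0b100                       # flip a bit in a *data* byte
print(run(corrupt_data))                       # 46: wrong answer, no crash

corrupt_code = list(prog)
corrupt_code[4] ^= 0b100                       # flip the same bit in the ADD opcode
try:
    run(corrupt_code)
except RuntimeError as e:
    print(e)                                   # the "crash"
```

Scale that up from six bytes to billions of instruction cycles per second, and it's clear why a system that routinely corrupted bits in the instruction path couldn't stay up, even though a corrupted data file is survivable.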