Chord 2Go & 2Yu Wired/Wireless Network streamer and S/PDIF adaptor - Official thread
Feb 29, 2020 at 2:10 AM Post #436 of 6,290
Exactly, but at the end of the day it is outputting a digital signal. So if it cannot transmit electrical noise, and is not doing any sample processing (no upsampling, no crossfeed, no volume adjustment - i.e., bit perfect), what is the DAC getting that is different from one source to another?
Here’s a quote from a website:


What does "bit perfect" mean?
Before going any further, I need to define the term "bit perfect" as it is used in the blog so as to not confuse my readers. The term "bit perfect" is a technical term that is used to describe any form of digital communication that involves a series of checks and error correction (i.e., checksum), ensuring the data that arrives at the receiver is identical to the data that was transmitted from the source. This is what allows you to download a file from a server halfway around the world and know that it will arrive at your computer identical in every way to the original.

Of course unlike most digital data transfer, music is played in real-time, so even if you are using digital communication devices (i.e. streamers, modems, and routers) that can potentially correct corrupted data, there is often no time to do this, and therefore the corrupted data is passed on to the next component.

When the term "bit perfect" is used in regards to player software, it can be somewhat misleading, since it implies that what is output from the computer has not been altered in any way from the original music data file. This is not the case. All bit perfect means in regards to player software is that the player software doesn't intentionally alter the music data files before decoding and/or streaming them.

If bit perfect player software did in fact assure the music data leaving the computer had no bit errors, then all so-called bit perfect players would sound identical, and this is certainly not the case. What would be more accurate would be to say that a specific music player software can be operated in "bit perfect mode," in which no algorithms were purposely used to alter the original data file.

This is a perfect example of why I sincerely recommend you view the claims of companies that sell music player software (and anything else in the audiophile industry) as "marketing language" as opposed to quantifiable facts.

https://www.mojo-audio.com/blog/computer-audio-misconceptions/
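To make the checksum idea from that quote concrete, here’s a rough Python sketch of my own (not from the article, and the file names are made up) showing how you’d verify that a copied or downloaded file is bit identical to the original:

```python
# Rough sketch: checking that two files are bit identical by hashing them,
# the same idea a download manager or checksum tool uses.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names, just for illustration.
if sha256_of("track_on_server.flac") == sha256_of("track_downloaded.flac"):
    print("bit perfect copy")
else:
    print("files differ")
```

That sort of after-the-fact check is what the article means by bit perfect for file transfers; as it points out, a real-time audio stream doesn’t always get that luxury.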

bobfa has a very informative blog with links to other pages that discuss the USB audio protocol, written by the people who created the protocol, so they should know. It’s worth reading if you assume that, because it’s a computer sending the data and the data is digital, it must arrive at the receiver identical.

https://audiophilestyle.com/blogs/entry/752-usb-universal-serial-bus/

https://www.edn.com/fundamentals-of-usb-audio/

Then an interesting quote from the above link:

“In order to keep a short feedback loop, the trick is to not buffer audio packets and feedback packets unnecessarily. Any additional buffering creates latency in the reporting, and this latency makes it more difficult to keep a smooth flow of traffic. This means that the low-level USB stack and the USB Audio stack should be tightly integrated, without buffering in between. Although this is hard to achieve on an application processor, this is quite easy to achieve if the software is implemented on an embedded processor that has a predictable execution time.”
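To illustrate what that tight feedback loop is doing, here’s a toy simulation I put together (my own sketch, not from the article, with purely illustrative numbers): the device watches its buffer fill level and asks the host for slightly more or fewer samples per packet, so the buffer never runs dry or overflows.

```python
# Toy simulation of asynchronous feedback: the device nudges the host's
# packet size to keep its playback buffer near a target fill level.
import random

NOMINAL = 48          # samples per 1 ms packet at 48 kHz
BUFFER_TARGET = 480   # aim to keep roughly 10 ms of audio buffered
buffer_fill = BUFFER_TARGET

for packet in range(1000):
    # The device's clock drifts slightly relative to the host's.
    played = NOMINAL + random.uniform(-0.05, 0.05)
    buffer_fill -= played

    # Feedback: request a little more or less in the next packet,
    # depending on how far the buffer has strayed from the target.
    error = BUFFER_TARGET - buffer_fill
    requested = NOMINAL + 0.01 * error

    # The host honours the request in the next packet it sends.
    buffer_fill += requested

# The fill level stays bounded near the target instead of drifting
# into an underrun or an overflow.
print(f"buffer fill after 1000 packets: {buffer_fill:.1f} samples")
```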
 
Feb 29, 2020 at 2:35 AM Post #437 of 6,290
Here’s a quote from a website:
I snipped your quote shorter; what was your point, exactly? :)

PCs can output a lot of noise superimposed all over the digital signal, and I bet that noise can affect the signal if the Hugo 2’s input filtering can’t remove it all, but I’m also willing to bet the noise has to be very, very bad to make an audible difference to the output.

The 2Go is a computer, remember: a very expensive portable computer. It should have good output noise reduction, but how do we know? Do we take Chord’s word for it, or do we measure its output against other sources first to make an informed decision?

This is my last post on the subject, because it’s an argument nobody can win, and it’s off topic here anyway, I suspect.
 
Feb 29, 2020 at 2:38 AM Post #438 of 6,290
I snipped your quote shorter; what was your point, exactly? :)

PCs can output a lot of noise superimposed all over the digital signal, and I bet that noise can affect the signal if the Hugo 2’s input filtering can’t remove it all, but I’m also willing to bet the noise has to be very, very bad to make an audible difference to the output.

The 2Go is a computer, remember: a very expensive portable computer. It should have good output noise reduction, but how do we know? Do we take Chord’s word for it, or do we measure its output against other sources first to make an informed decision?

This is my last post on the subject, because it’s an argument nobody can win, and it’s off topic here anyway, I suspect.
I’m just trying to say it’s complicated, and the implementation of everything in the source can have a large bearing, so sources are not all the same.

Hopefully, as the 2Go is a custom motherboard design and is minimal in a good sense, it will be a good source.
 
Feb 29, 2020 at 3:25 AM Post #439 of 6,290
Since when has buffering made it “difficult to keep a smooth flow of traffic”?
And if latency is bad then an M-Scaler/DAVE is the last thing you should use. It has way higher latency than any other DAC out there.
 
Feb 29, 2020 at 3:43 AM Post #440 of 6,290
Why didn’t you use BLE? Was it unusable at Bristol?
Will the 2Go be used mainly with the SD card in it?

With the time I had, I didn’t try BLE to see if it was usable at Brizzle.

Yes, I think SD card internal storage will be used lots more when the 2Go is available. Sadly I couldn’t do a comparison with the internal cards, as I didn’t have any of the songs on my local device to switch and compare between the two. The music I did listen to, however, sounded very good over the SD card.
 
Feb 29, 2020 at 4:45 AM Post #442 of 6,290
Since when has buffering made it “difficult to keep a smooth flow of traffic”?
And if latency is bad then an M-Scaler/DAVE is the last thing you should use. It has way higher latency than any other DAC out there.
I understood it to relate to error correction. If the data is sent as, say, 88,000 packets a second (made-up numbers) and every sixth packet has an error, then the asynchronous receiver needs to pin the current position and ask the source to please resend packet 6. You can see how that quickly gets out of hand and packets are just dropped.
But if you maintain low latency, to a fraction of a second, then it’s manageable to orchestrate the resending and so on, as you’re only dealing with a fraction of the packets at any one time.
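Just to show the kind of arithmetic I have in mind (still made-up but plausible numbers, my own sketch), the amount of in-flight data you’d have to keep track of scales directly with the latency you allow:

```python
# Back-of-envelope: how many packets are "in flight" (and would need
# tracking) for a given amount of buffering. Numbers are illustrative only.
PACKETS_PER_SECOND = 8000   # e.g. one isochronous packet every 125 us microframe

for latency_ms in (1, 10, 100, 1000):
    in_flight = PACKETS_PER_SECOND * latency_ms / 1000
    print(f"{latency_ms:>5} ms of buffering -> {in_flight:>6.0f} packets to keep track of")
```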
 
Feb 29, 2020 at 5:06 AM Post #443 of 6,290
I understood it to relate to error correction. If the data is sent as, say, 88,000 packets a second (made-up numbers) and every sixth packet has an error, then the asynchronous receiver needs to pin the current position and ask the source to please resend packet 6. You can see how that quickly gets out of hand and packets are just dropped.
But if you maintain low latency, to a fraction of a second, then it’s manageable to orchestrate the resending and so on, as you’re only dealing with a fraction of the packets at any one time.
Data transmission is normally unbelievably robust. No way will every sixth packet have an uncorrectable error. The acceptable Bit Error Rate for USB 3.2, for example, is 1 bit in 10^12 bits, i.e. 1 bit every 1,000,000,000,000 bits - one bit in a million million. That is pretty robust! And that bit will likely be correctable. For Ethernet the acceptable BER is 1 in 10^10 bits, and again it is likely recoverable. It is just nonsense to talk about errors in every sixth packet. Low latency is important in a real-time context - if you are playing a digital piano, for example, you want to hear the sound as soon as possible after you hit the key - but latency matters not a jot if you are playing music back. Decent-sized buffers improve stability and resilience. And there is massive latency in an M Scaler/DAVE.

References:
USB BER https://blogs.synopsys.com/tousbornottousb/2017/11/28/bit-error-rates-for-usb-3-2/
Ethernet BER https://netcraftsmen.com/understanding-interface-errors-and-tcp-performance/
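To put those spec figures in perspective, here is the back-of-envelope arithmetic (my own sketch, assuming an hour of 16-bit/44.1 kHz stereo):

```python
# Expected bit errors while streaming an hour of CD-quality audio at the
# worst-case BER each spec allows. Assumes 16-bit/44.1 kHz stereo PCM.
ALBUM_MINUTES = 60
SAMPLE_RATE = 44_100
BITS_PER_SAMPLE = 16
CHANNELS = 2

bits_in_album = ALBUM_MINUTES * 60 * SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS
for name, ber in (("USB 3.2 spec", 1e-12), ("Ethernet spec", 1e-10)):
    expected_errors = bits_in_album * ber
    print(f"{name}: ~{expected_errors:.4f} expected bit errors in "
          f"{bits_in_album / 1e9:.1f} gigabits of audio")
```

In other words, even at the worst case the specs allow, you would expect well under one raw bit error per album before any correction is applied at all.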
 
Feb 29, 2020 at 5:29 AM Post #445 of 6,290
Data transmission is normally unbelievably robust. No way will every sixth packet have an uncorrectable error. The acceptable Bit Error Rate for USB 3.2, for example, is 1 bit in 10^12 bits, i.e. 1 bit every 1,000,000,000,000 bits - one bit in a million million. That is pretty robust! And that bit will likely be correctable. For Ethernet the acceptable BER is 1 in 10^10 bits, and again it is likely recoverable. It is just nonsense to talk about errors in every sixth packet. Low latency is important in a real-time context - if you are playing a digital piano, for example, you want to hear the sound as soon as possible after you hit the key - but latency matters not a jot if you are playing music back. Decent-sized buffers improve stability and resilience. And there is massive latency in an M Scaler/DAVE.

References:
USB BER https://blogs.synopsys.com/tousbornottousb/2017/11/28/bit-error-rates-for-usb-3-2/
Ethernet BER https://netcraftsmen.com/understanding-interface-errors-and-tcp-performance/
“Made-up numbers” - that’s what I said. I’m just thinking through this stuff and trying to increase my understanding.
You have a very definite opinion; maybe you need to allow some room for doubt or experimentation?
I know what you mean about buffers, for sure, but why then is low latency a desirable property in audio playback?
Any decent audio configuration tool will let you choose your desired latency, which may be something you’d decide based on the size of the buffers in your receiving component.
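For what it’s worth, the latency figure in those tools maps more or less directly onto a buffer size. A rough sketch of that mapping (my own numbers, assuming 16-bit/44.1 kHz stereo):

```python
# Rough, illustrative mapping from a requested latency to the buffer it implies.
SAMPLE_RATE = 44_100
CHANNELS = 2
BYTES_PER_SAMPLE = 2   # 16-bit PCM

for latency_ms in (5, 20, 100, 500):
    frames = int(SAMPLE_RATE * latency_ms / 1000)
    size_bytes = frames * CHANNELS * BYTES_PER_SAMPLE
    print(f"{latency_ms:>4} ms -> buffer of {frames:>6} frames (~{size_bytes / 1024:.0f} KiB)")
```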
 
Feb 29, 2020 at 5:32 AM Post #446 of 6,290
Data transmission is normally unbelievably robust. No way will every sixth packet have an uncorrectable error. The acceptable Bit Error Rate for USB 3.2, for example, is 1 bit in 10^12 bits, i.e. 1 bit every 1,000,000,000,000 bits - one bit in a million million. That is pretty robust! And that bit will likely be correctable. For Ethernet the acceptable BER is 1 in 10^10 bits, and again it is likely recoverable. It is just nonsense to talk about errors in every sixth packet. Low latency is important in a real-time context - if you are playing a digital piano, for example, you want to hear the sound as soon as possible after you hit the key - but latency matters not a jot if you are playing music back. Decent-sized buffers improve stability and resilience. And there is massive latency in an M Scaler/DAVE.

References:
USB BER https://blogs.synopsys.com/tousbornottousb/2017/11/28/bit-error-rates-for-usb-3-2/
Ethernet BER https://netcraftsmen.com/understanding-interface-errors-and-tcp-performance/

I'm with you on this and the robustness of data transmission.

I thought @Rob Watts had posted that he had logged huge audio data transfers over long time periods without a single data error. The discussion of various power supplies promoting data errors also seems to me to be completely spurious. Again, I recollect Rob saying that even a single error would be audible as a pop or click, and I personally cannot recollect ever having heard a single pop or click.

Rather than data corruption, I thought the issue of power supplies was to do with possible analogue noise from the power supply getting overlaid on top of the digital signal and then eventually possibly getting into the analogue stage of a DAC where it can cause audible distortion.
 
Feb 29, 2020 at 5:40 AM Post #447 of 6,290
Maybe I should have been much more specific about the setup, to avoid any ambiguity. I used the two usual methods people would use to connect to the Hugo, both times playing the same song, with the same bit depth and sample rate, stored locally on my phone. First I used BubbleUPnP connected directly to the Poly over WiFi. The second time I played the exact same song from my local phone storage using UAPP and a cable.

I should also add that it ruined the song, "comparatively" speaking, after listening to it using WiFi and BubbleUPnP.

I didn’t take it as literal, but as a common phrase of comparison in audio, as well as in other fields of aesthetics.
 
Feb 29, 2020 at 5:45 AM Post #448 of 6,290
“Made-up numbers” - that’s what I said. I’m just thinking through this stuff and trying to increase my understanding.
You have a very definite opinion; maybe you need to allow some room for doubt or experimentation?
I know what you mean about buffers, for sure, but why then is low latency a desirable property in audio playback?
Any decent audio configuration tool will let you choose your desired latency, which may be something you’d decide based on the size of the buffers in your receiving component.
I am trying to increase your understanding by giving you links to standards and technical documents written by people and standards bodies who know what they are talking about. These are not personal opinions. Your understanding is way, way off. Why use “made-up numbers” that are totally and utterly wrong? Who says low latency is a desirable property in audio playback? It is surely important in a recording context - if you are singing and your voice is being fed back to you through headphones, any kind of delay will be unsettling. But what difference can it make in a music playback context? In fact you could argue that long latency is better - long latency means the M Scaler can have a million-tap filter, and long latency means you could have a huge buffer and play back an entire album from memory...
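To put a number on that last point, here is the rough arithmetic (my own assumption: an hour-long album of 16-bit/44.1 kHz stereo PCM):

```python
# RAM needed to hold an entire hour-long album, fully decoded, in memory.
ALBUM_MINUTES = 60
SAMPLE_RATE = 44_100
CHANNELS = 2
BYTES_PER_SAMPLE = 2   # 16-bit PCM

bytes_needed = ALBUM_MINUTES * 60 * SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
print(f"~{bytes_needed / 2**20:.0f} MiB to buffer the whole album")   # roughly 600 MiB
```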
 
Feb 29, 2020 at 5:57 AM Post #449 of 6,290
I am trying to increase your understanding by giving you links to standards and technical documents written by people and standards bodies who know what they are talking about. These are not personal opinions. Your understanding is way, way off. Why use “made-up numbers” that are totally and utterly wrong? Who says low latency is a desirable property in audio playback? It is surely important in a recording context - if you are singing and your voice is being fed back to you through headphones, any kind of delay will be unsettling. But what difference can it make in a music playback context? In fact you could argue that long latency is better - long latency means the M Scaler can have a million-tap filter, and long latency means you could have a huge buffer and play back an entire album from memory...
Made-up numbers, because I’m just having a casual conversation. Andrew, you’ve upset me, lol; I thought we were on the same team.
 
Feb 29, 2020 at 9:38 AM Post #450 of 6,290
Made-up numbers, because I’m just having a casual conversation. Andrew, you’ve upset me, lol; I thought we were on the same team.

I agree with Nick above. My opinion is that it’s the overlaid electrical noise and RFI that can be transmitted from component to component that we need to be concerned with. The data files are perfectly (bit-perfectly) fine. If real-time processing of stored files were an issue for music or anything else, we would have random errors occurring every time we opened an Excel or Word doc, which we do not.
 
