debunking myths -- objective and subjective observations from a Sunday afternoon

admin_exported

New member
Aug 10, 2019
So in another thread, talking about differences in digital sources (and specifically brands of USB sticks vs one another), I was challenged to present reasoned evidence for my 'scepticism'. I foolishly promised I would, so here goes. Starting with the basics, working my way up to issues that are more directly relevant.

My background, by the way, is undergraduate and postgraduate engineering and mathematics from university, and a professional career in the digital domain (although not audio). Let's just say my current job involves number crunching and fast computers (lots of them).

I'll concentrate on the following:

Myth #1: digital HiFi is at the forefront of digital technology and science doesn't understand and can't explain many of the subtleties involved

Myth #2: many of the limitations from the analog world can be translated and applied to digital audio; specifically, every component in the chain matters, down to USB sticks and hard drives

Obviously if you want to validate my claims and/or read more, please Google the keywords below -- I will try and stick to universally accepted terminology, and it should be reasonably clear when I'm quoting facts vs stating subjective opinion.



So what is digital audio?


As you know, microphones use membranes which vibrate in response to sound and translate the oscillations into an electrical signal, ie variations in voltage, down a cable. This signal can be amplified and sent to a speaker, which turns the electrical signal back into sound waves by moving its membrane according to the incoming signal.

The electrical signals to/from microphones and speakers are said to be an "analog" of the sound, and are examples of "analog" signals.

There are several benefits in converting this information into digital format. I will explain some of these benefits further below, but let's look at how it all works first.

The standard representation of digital audio in computers, video and on CDs is called PCM (pulse-code modulation). The basic idea is simple. You feed the analog signal into an analog-to-digital converter (ADC). The ADC samples the analog signal at regular intervals and outputs a series of digital readings ("samples") of the signal. "Digital" here means discrete (discontinuous): each sample is a numeric representation of the strength (amplitude) of the signal at that point in time. This series of digital samples constitutes the digital audio data. A wav file on your computer is exactly this. And because the information is described as numbers, it can be stored on digital media (hard drives, USB sticks, servers, CDs, DVDs, etc) and sent over digital communication channels (eg the Internet, your home wireless network, or the cable between your digital source and your DAC).
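If it helps to see the idea in code, here is a minimal sketch of the sampling step (Python; the function name and figures are mine, purely for illustration):

```python
import math

SAMPLE_RATE = 44_100                      # samples per second (the CD standard)
BIT_DEPTH = 16                            # bits per sample
FULL_SCALE = 2 ** (BIT_DEPTH - 1) - 1     # 32767, the largest positive 16-bit value

def sample_tone(freq_hz, duration_s):
    """Pretend-ADC: read an ideal sine wave at regular intervals and quantise
    each reading to a signed 16-bit integer -- a list of PCM samples."""
    n_samples = int(SAMPLE_RATE * duration_s)
    return [
        int(round(FULL_SCALE * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)))
        for n in range(n_samples)
    ]

pcm = sample_tone(440, 0.01)   # 441 samples of a 440 Hz tone
print(pcm[:5])                 # the first few amplitude readings
```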

The functional opposite of the ADC is the digital-to-analog converter, aka the DAC, which takes linear PCM data and reproduces the original analog signal so that it can be fed to a pre-amplifier stage (and ultimately speakers).

There are 2 key factors to consider here:

a) the sampling period, ie the time spacing between samples. This is usually described in terms of frequency, in Hz (ie number of samples per second). The higher the sampling frequency, the more closely spaced the samples and the more detail you can capture from the original analog signal. There is an upper theoretical limit to the frequencies that can be represented through sampling, the so-called Nyquist frequency, which is 0.5 times the sampling frequency. So for example, with 44.1kHz sampling, which is the frequency used on CDs, you can theoretically capture and represent anything up to 22kHz. For perfect reconstruction, however, you need an ideal filter that passes some frequencies unchanged while suppressing all others completely (commonly called a brickwall filter). Unfortunately such a filter is unattainable both in practice and in theory, so in reality 18-20kHz is a more realistic cut-off, and of course it's no coincidence that 44.1kHz was chosen to cover roughly the frequency band that humans can hear. DACMagic owners will be familiar with these types of filters by the way, as they have a couple to choose from. Usually, the filter and its parameterization is a fixed part of the design.

b) the resolution (bit depth) of the individual samples. You will have heard CD being described as 44.1kHz / 16-bit -- the first is the sampling frequency, the second is a measure of the number of possible values each sample can take. 16 bits give you 65,536 possible combinations, so the sample values in that example range from -32768 to +32767 (minimum amplitude to maximum amplitude).
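Both figures are easy to play with in code -- a quick sketch (the function is mine, not from any audio library):

```python
def pcm_properties(sample_rate_hz, bit_depth):
    """Key figures for a PCM format, eg CD audio at 44.1 kHz / 16-bit."""
    return {
        "nyquist_hz": sample_rate_hz / 2,            # highest representable frequency
        "levels": 2 ** bit_depth,                    # number of distinct sample values
        "min_sample": -(2 ** (bit_depth - 1)),       # -32768 for 16-bit
        "max_sample": 2 ** (bit_depth - 1) - 1,      # +32767 for 16-bit
    }

print(pcm_properties(44_100, 16))
# {'nyquist_hz': 22050.0, 'levels': 65536, 'min_sample': -32768, 'max_sample': 32767}
```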

So what are the key differences between analog and digital representation? First of all, an analog signal has theoretically infinite resolution; the digital representation does not. But in practice, analog signals are subject to noise and are sensitive to small fluctuations in the signal. It is difficult to detect when such degradation occurs, and impossible to correct it when it does. A comparably performing digital system is more complex -- but with the benefit that degradation can not only be detected but corrected as well. Examples of this below.

So how do you store the data and what are the challenges?

Formats

There are many audio file formats, generally in 3 categories: uncompressed (eg WAV or raw PCM), losslessly compressed (eg FLAC or Apple Lossless), and lossily compressed (eg MP3, AAC, Vorbis). The format describes how the audio is organised in the audio file.

The most 'vanilla' format is raw PCM or WAV, which is not much more than the PCM data, sample by sample, organised in a long sequence.
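To get a feel for the numbers, here is a back-of-envelope sketch of how big raw PCM is for a typical 4-minute track (the figures are assumptions):

```python
# Size of 4 minutes of raw CD-quality PCM: stereo, 16-bit, 44.1 kHz
sample_rate = 44_100            # samples per second, per channel
channels = 2
bytes_per_sample = 2            # 16 bits
duration_s = 4 * 60

size_bytes = sample_rate * channels * bytes_per_sample * duration_s
print(f"{size_bytes / 1_000_000:.0f} MB")   # roughly 42 MB before any compression
```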

FLAC is getting a lot of attention right now. It uses linear prediction to convert the audio samples to a series of small, uncorrelated numbers which are stored efficiently, and usually reduces the overall size by about 50% (compared to raw PCM data). The encoding is completely reversible, ie the data can be decompressed into an identical copy of the original, hence the term lossless.
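Purely as a toy illustration of the prediction idea (this is not FLAC's actual algorithm, which fits higher-order predictors and uses Rice coding; it just shows the principle that prediction errors are small and the process is perfectly reversible):

```python
def encode_residuals(samples):
    """Predict each sample as the previous one and store only the prediction error."""
    prev, residuals = 0, []
    for s in samples:
        residuals.append(s - prev)   # small numbers for slowly varying audio
        prev = s
    return residuals

def decode_residuals(residuals):
    """Exactly reverses the encoding -- hence 'lossless'."""
    prev, samples = 0, []
    for r in residuals:
        prev += r
        samples.append(prev)
    return samples

original = [100, 103, 105, 104, 101]
assert decode_residuals(encode_residuals(original)) == original
```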

MP3 and other lossy compression methods generally use principles that were discovered by a French mathematician called Fourier in the 19th century. He proved that any periodic signal can be broken down into a sum of simple oscillating functions, and he provided methods and equations for doing so. Although it wasn't his original intent, his theories apply very well to audio -- think of it in terms of sound being broken down into frequencies. As it happens, if you store the audio signal as scaling factors of the frequency components, as opposed to sample by sample as described above, the size can be reduced dramatically (this is a little simplified but you get the idea). The 'lossy' part primarily comes from the fact that these formats also tend to use psychoacoustic methods to throw away sound components that (allegedly) can't be heard. This type of processing is easier to do once you have the sound wave broken down into its components.
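Again as a toy sketch only (real codecs use filter banks and psychoacoustic models, not a bare FFT): transform to the frequency domain, discard the weak components, transform back.

```python
import numpy as np

def lossy_roundtrip(samples, keep_fraction=0.1):
    """Keep only the strongest frequency components and reconstruct the signal."""
    spectrum = np.fft.rfft(samples)
    threshold = np.quantile(np.abs(spectrum), 1 - keep_fraction)
    spectrum[np.abs(spectrum) < threshold] = 0        # throw the weak components away
    return np.fft.irfft(spectrum, n=len(samples))     # close to, but not exactly, the input
```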

Any digital audio format which will be sent through a DAC has to be converted back into raw PCM first. This is usually done in real time by the playback device and software en route to the DAC. The DAC does not know whether the audio was previously compressed or not -- and with lossless compression methods it should make no difference anyway -- unless the software or music server is doing something strange in the process of decoding the samples and streaming them to the DAC.

Storage and transmission, S/PDIF and USB

As touched upon earlier, one of the clever aspects of digital systems is the ability to detect and correct errors in stored and transmitted data. There are a multitude of techniques to ensure that data is transmitted without errors, even across unreliable media or networks. The scientific discipline which concerns itself with these problems is called 'information theory' -- its father is an American mathematician called Shannon, who published a landmark paper in the 1940s which established the discipline and brought it to worldwide attention.

Even today, there are two basic ways to design an error-correcting system:

* Automatic repeat-request (ARQ): The transmitter sends the data and also an error detection code, which the receiver uses to check for errors, and requests retransmission of erroneous data.
* Forward error correction (FEC): The transmitter encodes the data with an error-correcting code (ECC) and sends the coded message. The receiver never sends any messages back to the transmitter. The receiver decodes what it receives into the "most likely" data. The codes are designed so that it would take an "unreasonable" amount of noise to trick the receiver into misinterpreting the data.

It is possible to combine the two, so that minor errors are corrected without retransmission, and major errors are detected and a retransmission requested. Incidentally most wireless communication is built like this, because without FEC it would often suffer from packet-error rates close to 100%, and with ARQ on its own it would generate very low goodput.

All error detection codes transmit more bits than were in the original data. Typically the transmitter sends a fixed number of original data bits, followed by a fixed number of check bits which are derived from the data by some deterministic algorithm. The receiver applies the same algorithm to the received data bits and compares its output to the received check bits; if the values do not match, an error has occurred at some point during the transmission.
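A minimal sketch of the check-bit idea, using CRC-32 as the deterministic algorithm (the framing here is made up; every real protocol defines its own):

```python
import zlib

def frame_with_check(data: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect transmission errors."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def verify_frame(frame: bytes) -> bool:
    """Recompute the CRC over the received data and compare with the received check bits."""
    data, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(data) == received_crc

frame = frame_with_check(b"pcm audio payload")
assert verify_frame(frame)                          # intact frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]    # flip one bit in transit
assert not verify_frame(corrupted)                  # ...and the error is detected
```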

Error-correcting codes are redundant data that is added to the message on the sender side. If the number of errors is within the capability of the code being used, the receiver can use the extra information to discover the locations of the errors and correct them.
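And the simplest possible error-correcting code, a toy 'send everything three times' scheme -- real codes such as Hamming or Reed-Solomon are far more efficient, but the principle is the same:

```python
def encode_repetition(bits):
    """Forward error correction at its crudest: transmit every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_repetition(coded):
    """Majority vote per group of three -- corrects any single flipped bit in a group."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = encode_repetition(message)
sent[4] ^= 1                                 # noise flips one transmitted bit
assert decode_repetition(sent) == message    # the receiver still recovers the data
```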

Error-correcting codes are suitable for simplex communication, ie communication that occurs in one direction only, for example broadcasting. They are also used in computer data storage, for example on CDs, DVDs and hard drives.

S/PDIF for digital audio is simplex (one-way). As far as I know it does not have error correction and only very rudimentary error detection, but I couldn't find much information on this when I searched the net. Maybe someone can fill in? Certainly it becomes obvious that the standard was designed nearly 30 years ago, and in a way that would make it easy (read: cheap) for manufacturers to implement. It's a shame that it is still the prevailing standard, because its weaknesses, although relatively easy to overcome today, will continue to fuel marketing hype and digital audio cable debates...

So let me try to put this into perspective. The Ethernet standard for computer networks has been around since the 70s. The original standard ran at twice the data rate of CDs, but today's Ethernet is capable of nearly 700 times the speed needed to stream CD audio. The standard includes error detection and is duplex, ie data flows in both directions, so the receiver (via the protocols that normally run on top, such as TCP/IP) can ask for data to be re-sent when an error is detected. Your computer almost certainly contains an Ethernet chip already, but otherwise an Ethernet card can be bought for about £5. An Ethernet cable is a couple of £ and there is no need to pay any more, because errors will be detected and corrected if/when they occur, and even so, the error rates on a cheapo cable are very low. To my mind, this puts significant doubt around the value of spending tens or even hundreds of £ on a digital cable for S/PDIF, but I am aware this is a highly controversial topic...

So then along came DACs with USB interfaces. I for one was excited about this, because the USB protocol contains robust error detection and error correction mechanisms, so I was hoping it would be the end of over-priced digital audio components and cables. I was equally disappointed to learn that the USB standard offers an 'isochronous' transfer mode which is simplex (one-way) and still offers error detection, but no retry or guarantee of delivery, because no packet responses are sent back as part of an isochronous transfer. This is the transfer mode that DACs typically use. Again, disappointingly, it simplifies the design of the DAC, but has inherent weaknesses in that data integrity is not guaranteed, very much like S/PDIF.

Fortunately, new DACs are starting to arrive on the market which use the standard 'Bulk' USB transfer mode. Implemented properly, this should eliminate any theoretical chance of transfer errors, jitter, etc. Specifically, the DAC can ask the host computer to re-send any packets that arrived with errors, and separately, use its own internal clock to drive the digital-to-analog conversion, which means no jitter (or, at least, no jitter which has anything to do with the way the data is streamed into the DAC).
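A toy model of the behavioural difference (nothing to do with the real USB packet format, just the two delivery styles):

```python
import random

def transfer(packets, error_rate, retransmit):
    """Isochronous-style delivery (retransmit=False) carries on past a damaged packet;
    bulk-style delivery (retransmit=True) repeats it until it arrives intact."""
    received = []
    for packet in packets:
        while True:
            if random.random() >= error_rate:    # packet arrived undamaged
                received.append(packet)
                break
            if not retransmit:                   # damaged and no retry: data is simply lost
                received.append(None)
                break
    return received

packets = list(range(10))
print(transfer(packets, 0.2, retransmit=False))  # gaps (None) where errors struck
print(transfer(packets, 0.2, retransmit=True))   # always bit-perfect, just slightly later
```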

You can compare this with a USB printer. Clearly you don't expect your printed document to come out with garbled words or spelling mistakes created by USB transfer errors. Well, you won't, because the protocol corrects them. Also, the printer prints at a certain rate, but the computer is not aware of (or doesn't care) exactly how quickly the ink is sprayed onto the paper -- instead, the printer asks the computer for data when it needs it, and loads ahead into local memory so that it always has enough data available to drive the printing function.

Wavelength, Ayre and dCS make such DAC products today, but I'm hoping this will become standard practice. There is no reason such a DAC has to cost £1000 or more -- remember that you can buy a USB printer for £25 which already implements all of this.

So finally -- do USB sticks and hard drives make a difference?


I doubt it, and there are several reasons.

First of all, they both use error detecting and error correcting codes, so there is no loss of data unless there is a catastrophic failure on the device (which would very much be noticed).

Second, hard drives and USB sticks use filesystems for storing and organizing the computer files and data that they contain. There are many flavours of filesystem around, but typically it will be one of NTFS (Windows), HFS (Mac), Ext (Linux) or FAT/FAT32 (MS-DOS, but still common on USB sticks and portable hard drives because of its simplicity and portability). The filesystem adds another layer of control and integrity on top of the raw data.

Thirdly, hard drives and USB sticks read and write data in 'sectors'. For a given file on your device, the filesystem contains tables and references which tell you what sectors you need to read to fetch your data -- the file may be split into many parts in various locations on the drive. There is no concept of 'streaming' data at a fixed rate from a hard drive or a USB stick -- it will simply read and return the sector(s) that you ask it for, and the speed of this will vary depending on several factors, including the capabilities of the drive, whether the data is at the beginning or the end of the disk, etc.

In order to 'stream' audio data at a fixed rate from a hard drive or a USB stick, you must load the required data into a buffer, and stream data out of the same buffer at a fixed rate. As far as the hard drive or the USB stick is concerned, all that matters is that it can read data quickly enough for the buffer never to become empty. If that did occur, there would be a drop-out in the sound -- not subtle differences.
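A sketch of the buffering arrangement (the class and its figures are mine, for illustration only; in a real player the fill side is throttled so the buffer stays a few seconds ahead):

```python
from collections import deque

class PlaybackBuffer:
    """The drive side fills the buffer in irregular bursts; the playback side drains it
    at a fixed rate. The only audible failure mode is an empty buffer (a drop-out)."""

    def __init__(self):
        self._buf = deque()

    def fill(self, samples):
        """Called whenever the drive returns another sector's worth of data."""
        self._buf.extend(samples)

    def next_sample(self):
        """Called by the output clock, 44,100 times per second for CD audio."""
        return self._buf.popleft() if self._buf else None   # None == drop-out, not 'worse sound'
```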

In light of all this, it is hard to see how a USB stick from Sony can 'sound' different to one from Kingston.

Sorry for the long post, but I hope that addresses some of the issues...
 

Messiah

Well-known member
Excellent post! - Very interesting
 

shado

New member
Aug 22, 2008
Thanks very much for this; you have explained a complex subject in simple terms, which is especially useful as my next HiFi will be media based.
 

idc

Well-known member
Brilliant post storsvante. Can you comment on my summation?

Digital storage and transfer of music files, because of checking, error correction and the sheer nature of the information being transferred, means that no matter the device, the end product is exactly the same for the DAC. So from that, it is not unreasonable to conclude that there can be no difference in the end sound of the file.
 

Anonymous

Guest
Thanks! I was also surprised to find out that the prevailing USB method did not offer error recovery and two-way communication. And all the trouble engineers had to go through to fix this and keep the non-communicating clocks in sync, as mentioned in one of my previous posts. (BTW the same can be said about CD technology, also an ancient relic from the past without error correction and checksums).

Two remarks:

The first part you write is about the streaming route storage (ssd, hdd, usb stick...) > computer > usb or s/pdif > dac > analog. I have no comments on this, but there are also usb sticks that you can plug directly into an amplifier rather than using them as computer storage. In that case the interface is not the usb-audio spec but the normal usb mass-storage spec (hence, indeed, FAT etc again comes into play), ie the stick functions as storage and the amp is the boss, reading the files and processing them (without S/PDIF or USB audio protocols). This is different (but again it is doubtful whether there is any difference in SQ between sticks).

Second, what you do not consider are the non-digital aspects of a connection between two components (for instance a ground signal that causes interference, hums or what not). With a passive USB stick in an amp I do not see much danger, but with a computer > usb dac > analog signal path this is perhaps different -- what do you think?
 

PJPro

New member
Jan 21, 2008
A great read. You've really risen to the challenge and then some.

Agree with Pete10: there are other aspects to consider when connecting a computer-based source to a DAC, and there are sometimes benefits to be had from electrical isolation.

I think you're a little off target when it comes to DACs. Even within the various DAC chips themselves there appears to be general agreement that they are not all equal. Clearly, as complete units, these differences can be even more marked.
 

Anonymous

Guest
Pete10:

The first part you write is about the streaming route storage (ssd, hdd, usb stick...) > computer > usb or s/pdif > dac > analog. I have no comments on this, but there are also usb sticks that you can plug directly into an amplifier rather than using them as computer storage. In that case the interface is not the usb-audio spec but the normal usb mass-storage spec (hence, indeed, FAT etc again comes into play), ie the stick functions as storage and the amp is the boss, reading the files and processing them (without S/PDIF or USB audio protocols). This is different (but again it is doubtful whether there is any difference in SQ between sticks).

Yes, agreed. Plugging the USB stick directly into the amp (which is then assumed to have an integrated DAC) ought to be preferable to having to route the data over something like S/PDIF as an intermediate step. As you say, the amp unit then has to implement the USB mass storage protocol and has to understand filesystems etc, so it makes it a little more complex -- although these things are bread and butter today and shouldn't cost much to implement.

Pete10:

Second, what you do not consider are the non-digital aspects of a connection between two components (for instance a ground signal that causes interference, hums or what not). With a passive USB stick in an amp I do not see much danger, but with a computer > usb dac > analog signal path this is perhaps different -- what do you think?

Right. I guess this will remain a potential problem with any analog components, irrespective of whether there is a digital signal path involved somewhere -- and sure, cables intended to carry digitally encoded information will carry these leakage currents just like any other physical connection. I guess optical cables are attractive from that standpoint... Or wireless...
 

Anonymous

Guest
PJPro: I think you're a little off target when it comes to DACs. Even within the various DAC chips themselves there appears to be general agreement that they are not all equal. Clearly, as complete units, these differences can be even more marked.

Yes, I agree. I didn't mean to say that all DAC chips and/or the way they are implemented as a unit will produce identical results. Half of the DAC is analog, for starters. ;-) And there are many techniques used to prevent aliasing (*), including but not limited to clever filter designs, oversampling, etc. So absolutely -- different DACs will produce different results -- and different ADCs will produce different results on the input end too.

(*) If a piece of music is sampled at 44.1kHz, any frequency components above 22,050 Hz (the Nyquist frequency, as you all know by now) will cause aliasing, meaning they will be incorrectly recorded as lower frequencies. So an analog low-pass filter is required before sending the signal through the ADC. For a similar reason, the output of a DAC needs a reconstruction filter to remove the spectral images that appear above the Nyquist frequency (hello DACMagic users).
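For the curious, the 'folding' is easy to compute (a sketch; the function is mine):

```python
SAMPLE_RATE = 44_100
NYQUIST = SAMPLE_RATE / 2

def aliased_frequency(true_freq_hz):
    """Where a too-high frequency lands if it reaches the ADC without being filtered out."""
    folded = true_freq_hz % SAMPLE_RATE
    return folded if folded <= NYQUIST else SAMPLE_RATE - folded

print(aliased_frequency(25_000))   # a 25 kHz component is recorded as 19,100 Hz
```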
 

Anonymous

Guest
idc:
Brilliant post storsvante. Can you comment on my summation.

Digital storage and transfer of music files, because of checking, error correction and the sheer nature of the information being transferred, means that no matter the device, the end product is exactly the same for the DAC. So from that, it is not unreasonable to conclude that there can be no difference in the end sound of the file.

Something like that. But equally frustrating is the fact that although it is perfectly possible to design a 100% bit-perfect digital audio system -- all the way from distribution into the converter chip of your DAC -- there are several sub-optimal but simple and (therefore?) well-established standards around (eg S/PDIF) which prevent such solutions from really hitting the mainstream. We're pretty close though.

It's great to read that Linn are starting to base their components around Ethernet. That's a big step forward and I hope others follow.
 

idc

Well-known member
What is the advantage of ethernet and what is going to stop an ethernet cable debate springing up?!

(Thanks again. You are the first 'sceptic' I can remember who has come up with simple, understandable and detailed responses.)
 

Messiah

Well-known member
storsvante:
So let me try to put this into perspective. The Ethernet standard for computer networks has been around since the 70s. The original standard ran at twice the data rate of CDs, but today's Ethernet is capable of nearly 700 times the speed needed to stream CD audio. The standard includes error detection and is duplex, ie data flows in both directions, so the receiver (via the protocols that normally run on top, such as TCP/IP) can ask for data to be re-sent when an error is detected. Your computer almost certainly contains an Ethernet chip already, but otherwise an Ethernet card can be bought for about £5. An Ethernet cable is a couple of £ and there is no need to pay any more, because errors will be detected and corrected if/when they occur, and even so, the error rates on a cheapo cable are very low. To my mind, this puts significant doubt around the value of spending tens or even hundreds of £ on a digital cable for S/PDIF, but I am aware this is a highly controversial topic...

I think this is why...
 

manicm

Well-known member
storsvante:

PJPro: I think you're a little off target when it comes to DACs. Even within the various DAC chips themselves there appears to be general agreement that they are not all equal. Clearly, as complete units, these differences can be even more marked.

Yes, I agree. I didn't mean to say that all DAC chips and/or the way they are implemented as a unit will produce identical results. Half of the DAC is analog, for starters. ;-) And there are many techniques used to prevent aliasing (*), including but not limited to clever filter designs, oversampling, etc. So absolutely -- different DACs will produce different results -- and different ADCs will produce different results on the input end too.

(*) If a piece of music is sampled at 44.1kHz, any frequency components above 22,050 Hz (the Nyquist frequency, as you all know by now) will cause aliasing, meaning they will be incorrectly recorded as lower frequencies. So an analog low-pass filter is required before sending the signal through the ADC. For a similar reason, the output of a DAC needs a reconstruction filter to remove the spectral images that appear above the Nyquist frequency (hello DACMagic users).

Another well known UK mag which reviewed the DACMagic in a group test thought its filters were, in a nutshell, worthless (not to take away anything from the DM overall - still got a very good rating). Indeed the Beresford and if I'm correct, even the pricier Benchmark do not have user-selectable filters.
 

Anonymous

Guest
idc:
What is the advantage of ethernet and what is going to stop an ethernet cable debate springing up?!

Ethernet is inherently two-way, and when used with standards like TCP/IP (which is what drives the Internet and most computer networks, and is also what Linn uses) there is robust error handling built in, which makes the cable pretty uninteresting. This would be a tough case for the high-end cable manufacturers to argue, because if we couldn't rely on TCP/IP over Ethernet over standard Cat5e cables to guarantee 100% error-free transmission every time, then the world would stop tomorrow. Corporations would cease to function, the internet would grind to a halt, and if you walked into a bank branch to withdraw money you'd better check your balance afterwards, as the Ethernet cable from the counter to the servers may have sneaked an extra zero into your withdrawal amount. You get the idea... ;--)
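If you want to convince yourself, here is a minimal loopback sketch using nothing but the Python standard library -- the point being that TCP hands the application exactly the bytes that were sent, or nothing at all:

```python
import hashlib, socket, threading

payload = bytes(range(256)) * 1000      # ~256 kB of arbitrary 'audio' data

def receive_all(listener, out):
    conn, _ = listener.accept()
    with conn:
        chunks = []
        while chunk := conn.recv(4096):
            chunks.append(chunk)
        out.append(b"".join(chunks))

listener = socket.create_server(("127.0.0.1", 0))
received = []
worker = threading.Thread(target=receive_all, args=(listener, received))
worker.start()
with socket.create_connection(listener.getsockname()) as sender:
    sender.sendall(payload)
worker.join()
assert hashlib.sha256(received[0]).digest() == hashlib.sha256(payload).digest()
print("bit-perfect")
```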

But of course this wouldn't necessarily stop another cable debate...
 

Anonymous

Guest
manicm:
Another well known UK mag which reviewed the DACMagic in a group test thought its filters were, in a nutshell, worthless (not to take away anything from the DM overall - still got a very good rating). Indeed the Beresford and if I'm correct, even the pricier Benchmark do not have user-selectable filters.

Worthless they are not, because they prevent aliasing. ;-) But I get your point -- perhaps having 3 different user-selectable implementations is a gimmick. I think you're right about the Beresford and the Benchmark. Usually the filter choice is a fixed part of the design.
 

Anonymous

Guest
AFAIK the Wolfson DAC has these options built into the silicon; most companies opt not to implement them. CA have decided they give the user some sense of control; they are a marketing feature, I think.

great post btw
 

professorhat

Well-known member
Dec 28, 2007
storsvante:idc:
What is the advantage of ethernet and what is going to stop an ethernet cable debate springing up?!

Ethernet is inherently two-way, and when used with standards like TCP/IP (which is what drives the Internet and most computer networks, and is also what Linn uses) there is robust error handling built in, which makes the cable pretty uninteresting. This would be a tough case for the high-end cable manufacturers to argue, because if we couldn't rely on TCP/IP over Ethernet over standard Cat5e cables to guarantee 100% error-free transmission every time, then the world would stop tomorrow. Corporations would cease to function, the internet would grind to a halt, and if you walked into a bank branch to withdraw money you'd better check your balance afterwards, as the Ethernet cable from the counter to the servers may have sneaked an extra zero into your withdrawal amount. You get the idea... ;--)

But of course this wouldn't necessarily stop another cable debate...

But then you couldn't use the likes of TCP/IP in a music device without some sort of buffer, resulting in a delay in playback from the point of pressing play -- something you might get away with in a streaming device, since people aren't used to them and how they work, but most people wouldn't be happy with one in the likes of a CD player. "My old CD player used to play immediately, why should I have to wait for this one?". And before you argue people would be happy with a 10-20 second delay for better quality, just look at how people react when their Blu-ray player takes an extra 20 seconds to load a disc...


Agree with others though, nice to see a post of this sort with reasoned arguments that you can actually understand - makes a nice change!
 

Anonymous

Guest
Given the potential data rates that Ethernet offers, I'd be surprised if the buffering takes longer than a CD takes to spin up.
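To put a rough number on it (a back-of-envelope sketch; all figures are assumptions):

```python
# How long to fill a generous 10-second CD-audio buffer over 100 Mbit/s Ethernet?
cd_bitrate = 44_100 * 16 * 2          # ~1.41 Mbit/s for stereo 16-bit PCM
buffer_seconds = 10
link_bitrate = 100_000_000            # 100 Mbit/s Fast Ethernet

fill_time = (cd_bitrate * buffer_seconds) / link_bitrate
print(f"{fill_time:.2f} s")           # about 0.14 s -- far less than a CD spin-up
```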

The AE has a lag, as the computer needs to start streaming Apple Lossless to the AE.

With the cheapness of Ethernet cards and cables, it is a good trade-off.
 

Anonymous

Guest
The problem is not speed, I think; it is that the manufacturers have to come up with a decent protocol for the exchange of the actual datastream. There are already protocols like RTSP for transfer over networks.
 

The_Lhc

Well-known member
Oct 16, 2008
professorhat:storsvante:idc:
What is the advantage of ethernet and what is going to stop an ethernet cable debate springing up?!

Ethernet is inherently two-way, and when used with standards like TCP/IP (which is what drives the Internet and most computer networks, and is also what Linn uses) there is robust error handling built in, which makes the cable pretty uninteresting. This would be a tough case for the high-end cable manufacturers to argue, because if we couldn't rely on TCP/IP over Ethernet over standard Cat5e cables to guarantee 100% error-free transmission every time, then the world would stop tomorrow. Corporations would cease to function, the internet would grind to a halt, and if you walked into a bank branch to withdraw money you'd better check your balance afterwards, as the Ethernet cable from the counter to the servers may have sneaked an extra zero into your withdrawal amount. You get the idea... ;--)

But of course this wouldn't necessarily stop another cable debate...

But then you couldn't use the likes of TCP/IP in a music device without some sort of buffer, resulting in a delay in playback from the point of pressing play -- something you might get away with in a streaming device, since people aren't used to them and how they work, but most people wouldn't be happy with one in the likes of a CD player. "My old CD player used to play immediately, why should I have to wait for this one?". And before you argue people would be happy with a 10-20 second delay for better quality, just look at how people react when their Blu-ray player takes an extra 20 seconds to load a disc...


10 to 20 seconds? How big do you think the buffer is and just how slow do you think TCP/IP is?

There are already devices doing this, <banging on> Sonos for example </banging on>. I select what I want to listen to, press play and it plays. If it isn't instant, it's so fast that I don't notice any delay.

It's faster than any CD player I've used; the one in my (new) car takes ages to start playing a CD.
 

professorhat

Well-known member
Dec 28, 2007
You're right, 10-20 seconds is excessive for a local Ethernet connection. I guess I'm just as confused as everyone else as to why it's not been implemented in digital audio devices from the start, and am trying to think of reasons why it hasn't -- surely there must be some technical limitation?

Or maybe I'm just being monumentally naive
 

The_Lhc

Well-known member
Oct 16, 2008
professorhat: You're right, 10-20 seconds is excessive for a local Ethernet connection. I guess I'm just as confused as everyone else as to why it's not been implemented in digital audio devices from the start, and am trying to think of reasons why it hasn't -- surely there must be some technical limitation?

If there is it's not a speed issue, as storsvante said Ethernet is now approx. 700 times faster than the speed required to stream CD data.

Or maybe I'm just being monumentally naive


That would be a novelty!
 

Anonymous

Guest
@professorhat I suspect it is innate conservatism on the part of hardware manufacturers versus the cost of doing something new.

CDP - ethernet - AMP means two devices with "computer" hardware inside them.

Fear of the general purpose device is quite a powerful thing

RTSP, a network-based streaming protocol, has been around for 10 years or more, but it was focused on streaming small video clips to other computers over the internet, not local device-to-device streaming. Blu-ray and AV systems got Ethernet earlier than hifi, as hifi was seen as CDs.

The rash of external DACs from the 90s could have provoked a wave of computers as digital sources, but the 100 meg hard drives of the time would not allow that to happen. Terabyte drives have made computers worth connecting to hifi as music sources, and that is a fairly recent thing.
 

professorhat

Well-known member
Dec 28, 2007
zzgavin: Blu-ray and AV systems got Ethernet earlier than hifi, as hifi was seen as CDs.

But then these are, in the main, used to update the software running within these devices, not for digital streaming (except in the case of a few AV amps of course which allow you to stream some media from your computer).

So it's still not particularly prevalent even today, and even then, it's limited entirely to computer based solutions - is this really just down to conservatism from manufacturers? I find that difficult to believe, but as I say, maybe I'm being naive.
 

Anonymous

Guest
I might be wrong, but from what I've read on innovation, people tend to get more of the same rather than repeated novel breakthroughs. Put another way, there needs to be enough of a market for your new idea, and an infrastructure to support it.

Computers as an increasingly mainstream way of listening to music have only become popular in the last 10 years or so, starting with dreadful 64k MP3s etc. Decent music came on CDs / LPs; iTunes arrived alongside reasonable hard disk storage, and you get what we have now. The 256k AAC is arguably as dominant as the CD.

I suspect that if you'd gone to Cyrus when they released the 3 series and said "how about making the link between the amp and CD over Ethernet, because it'll mean you can use a 5 quid cable", they might have laughed you out of the door, no matter how valid the argument. It would have been too novel for them, and probably too expensive to implement too. Interconnects did the job well enough and they fitted into the world view of how hifi worked.
 
