Anonymous (Guest)
Hi Monstrous - I am working on an answer to your question - however I have
kids and a lot on at the moment. I want to write with some clarity, as the
earthing/ground, screen and drain thing is easy to get right - and it's also
easy to overlook a ground loop, which does make a difference.
What I will say to people who doubt the sensitivity of the ear to anything
above 20 kHz, and the importance of maintaining the integrity of the timing of
the audio within microseconds (millionths of a second), is that there is a part
of the brain called the medial superior olivary nucleus - a nucleus with a mesh
of nerves whose nodes sit at very small offset differences from one another. It
is understood that this measures the very precise time difference between the
arrival of sound at the left and right ears - the Interaural Time Difference
(ITD - look it up). Measuring the offsets of the node positions indicates
sensitivity to full wavelengths with periods of 10 microseconds - that's 100 kHz.
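To put those numbers in perspective, here's a quick back-of-envelope sketch in Python (my own illustration - the only figures in it are the ones quoted above):

```python
# Back-of-envelope arithmetic: relate a ~10 microsecond timing
# sensitivity to frequency (f = 1/T), and compare it with the time
# between successive samples at common audio sample rates.

itd_sensitivity_s = 10e-6  # the ~10 us node-offset sensitivity quoted above

# A full cycle lasting 10 us corresponds to f = 1 / T
equivalent_freq_hz = 1.0 / itd_sensitivity_s
print(f"10 us period -> {equivalent_freq_hz / 1e3:.0f} kHz")  # 100 kHz

# Time between successive samples at common rates
for rate_hz in (44_100, 96_000, 192_000):
    sample_period_us = 1e6 / rate_hz
    print(f"{rate_hz:>7} Hz: one sample every {sample_period_us:5.1f} us")
```

Bear in mind a band-limited signal can still carry timing shifts finer than one sample period, so the comparison is only a rough one - but it shows the scale of timing the ITD machinery is claimed to work at.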
That nucleus exists alongside several others - all working coherently, but in
different realms and on different principles (the Interaural Phase Difference
(IPD) nucleus, pressure difference detection, and many other interlinked neural
'coding' systems for the brain to process). Bear with me on this - it's worth it…
The experience of listening correlates with the various systems identified
in the brain. Maybe an analogy with vision helps visualise (excuse the pun) the
mechanisms at play. Our nervous systems first identify light levels, then
monochromatic imagery, then colour, and finally depth of image. The clearer the
view, the more information we have, and the higher the function that comes into
play and takes dominance. The order of these sensitivities correlates with the
order in which they evolved (funny that - where did we come from?).
Aural sensitivity seems to have a similar order of sensitisation - first we
detect vibration, then sound, then differences in pressure at each ear and in
bone and facial tissue, then phase differences (IPD) for coarse directionality,
and finally fine interaural time differences (ITD) for precise positional
information. All these systems work together to form our perception of the
space around us. We need all the correct information for the brain to do the
processing in a complete way.
Back to the visual analogy. At the cinema we switch off our 3D judgement and
enjoy a film nevertheless, fooling ourselves that the action is really in front
of us. Totally enjoyable, but we 'know' the picture is a 2D image - we just
suspend that sensitivity. 3D (albeit in its post-primeval-bog infancy) adds
something and brings depth. It does not reduce our experience of the excellence
and 'realism' of 2D cinema, or even the monochromatic films of the past.
Audio is similar - good reproduction with the detailed timing thrown away can,
and generally will, sound good these days. It's a basic 2D image we get -
sometimes with a bit of fundamental depth separation and a basic, coarse 3D
imagery. At this stage the interaural phase difference part may be dominant.
With the detailed timing info lost or masked, whether through poor sampling,
jitter or the addition of noise, the ITD nucleus simply has nothing to work on,
and so our minds switch off that sensitivity - and we still enjoy the music,
listening to the dominant bass, the sweetness of the violins… whatever. But
with the timing maintained and reproduced faithfully (from well recorded and
mastered source material), our brain's ITD nucleus contributes to our sensing
and starts to be able to place the images in 3D as recorded. Simple really.
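If it helps to see the 'nothing to work on' point concretely, here's a toy Python sketch (entirely my own illustration, with made-up numbers) - a cross-correlator stands in for the ITD detector, and added broadband noise stands in for the masking:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 192_000                        # sample rate in Hz (assumed)
n = int(0.02 * fs)                  # 20 ms of signal

sig = rng.standard_normal(n)        # broadband "source"
true_delay = 24                     # samples: 24 / 192000 s = 125 us ITD

left = sig
right = np.roll(sig, true_delay)    # right ear hears it 125 us later

def estimate_delay(l, r):
    """Lag (in samples) that maximises the cross-correlation of r against l."""
    corr = np.correlate(r, l, mode="full")
    return int(np.argmax(corr)) - (l.size - 1)

print("clean estimate:", estimate_delay(left, right))   # recovers 24

# Bury the fine timing detail in broadband noise (a crude stand-in for
# hum, switch-mode hash or jitter). At this signal-to-noise ratio the
# correlation peak is swamped and the estimated delay becomes meaningless.
noise = 20.0
print("noisy estimate:",
      estimate_delay(left + noise * rng.standard_normal(n),
                     right + noise * rng.standard_normal(n)))
```

The point is only qualitative: the delay is still physically present in the noisy signals, but the detector can no longer pull it out - which is the sense in which masking 'switches off' the cue.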
What's interesting is that people with diminished hearing do not lose their
ability to locate the source of the sounds they can hear - the IPD and ITD
measures are still active and sensitive at low frequencies. In fact, in blind
people and people with a diminished hearing range this ability seems to be more
dominant. My first-hand, and somewhat shocking, experience of this was when I
was knocked sideways by a guy who walked into a room with his hearing aids
whistling with feedback because they had so much gain, and turned to me and
asked why I had changed the cable back (I had swapped the cables back while he
was out of the room, without him knowing)… I know it defies common sense… but
what do we know… I also used to work with Robin Millar, a very talented
producer. He went totally blind, but his sense of directionality with sound
became incredible.
When a record-replay chain has absolute timing integrity up to the high
frequencies, instruments can be placed, a separation comes into being, and
things take the positions they were recorded at... recording and microphone
technique allowing....
As mentioned earlier, it's not only the loss of timing info during the
recording that counts, but the masking of the detail. Noise and jitter are
typical masks. That's why, when you lower the noise floor, the separation comes
back to represent the recording environment as the mics, compounded together,
captured it. Mains hum, switch-mode HF, an unstable system zero-volts plane and
eddy currents all contribute as noise. Perhaps that explains your experience.
How it is recorded is a different matter - ever wondered why orchestras and the
classic 'greats' sound wonderful against some of the impeccably produced
blandness we have to put up with? Maybe it's something to do with the fact that
the perennial greats that keep popping up in remixes etc., and most classical
recordings, are done as a group, so the mics pick up ambient information from
all around the room - timing about the room and where the instruments are is
captured. The violin mic will pick up the clarinet a few milliseconds later,
and vice versa. Good mic technique creates wonderful recordings. Pre-1985 pop
recordings tended to be done this way (Queen, The Who, The Beatles, Elvis). The
basic tracks were laid down in one take, versus today's anodyne track-by-track
layering. Food for thought, all you music lovers out there....
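For a sense of scale, here's the arithmetic (my own rough numbers, assuming sound travels at about 343 m/s in air and some plausible mic spacings):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

for distance_m in (0.5, 2.0, 5.0):          # plausible mic-to-instrument spacings
    delay_s = distance_m / SPEED_OF_SOUND   # arrival delay at the "other" mic
    print(f"{distance_m} m -> {delay_s * 1e3:.2f} ms "
          f"({delay_s * 96_000:.0f} samples at 96 kHz)")
```

So the inter-mic delays themselves are milliseconds rather than microseconds - but the fine timing structure riding on top of them is what carries the room.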
Finally, for whoever posted a pic of the Loch Ness monster, I agree - there is
a similarity between the two. The similarity between the ITD and IPD nucleus
research and the Loch Ness Monster is that both have been difficult to tie down
and analyse. The difference is that the aural nervous system is being
researched and documented in university hospital archives and in reputable
medical journals such as The Lancet. But what do I know….