BigH
busb said:
davedotco said:
If you read my description of what and how ABX testing is actually designed to work, you can see that it is totally effective. It is simple scientific method reduced to its most basic; it is how the world works!
It is worth pointing out that if no difference can be heard, the concept of better or worse has no meaning.
However, tests of this type are time-consuming and expensive to conduct, and the hi-fi industry has no interest in setting them up, for obvious reasons. Companies like Harman are known to use them but do not often publish the results.
One of the most enlightening experiences, in my view, is taking part in a third-party blind test. There is no need for it to be rigorous enough to satisfy scientific scrutiny; just a simple, level-matched test where the specific components are not known to the listener.
I have been involved in a number of such tests, both as operator and as listener, and, to be honest, the results are startling pretty much every time!
Dave, I have taken part in fairly informal ABX tests myself. Due to the nature of what was being compared, I had expectation bias from the start. However, I got a free lunch out of WHF & a fascinating day in Teddington. If I believed that ABX testing wasn't pointless, I would suggest that subjects record their conclusions on paper; otherwise, as soon as others state they hear a difference, no one wants to feel they were cloth-eared by saying they couldn't, due to peer pressure.
My beef with ABX tests is the assumption that, just because they work very well for stuff like comparing camera lenses, they must work for audio. I dispute that is the case & want proof that it's effective. Am I being unreasonable?
Let's take the lens example, where subjects are asked if they can see any differences between photos taken on 2 different models where everything else is equal. Let's conjecture that the results were very inconclusive (statistically insignificant) & 2 possibilities existed. The 1st is that any differences were undetectable, but one manufacturer argued that the test method was flawed, so proposed a test for the test. That additional test involved degrading one of 2 otherwise identical photos, then repeating the test to see if the subjects could spot the differences. If they couldn't, the test method itself was dubious (this ain't no scientific paper, so we have to ignore the degree of degradation before a threshold is reached). Conversely, if the distribution of results was wide but random, identical photos could be slipped in to see if the distribution converged. These secondary sequences weed out erroneous answers & prove the method, or not.
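A minimal sketch of how those control sequences could be scored, with made-up trial counts purely for illustration: score the real comparisons with a one-sided binomial test against guessing (p = 0.5 per trial), check that the deliberately degraded trials come out clearly significant if the method works, & check that the identical-photo trials sit near a 50% hit rate.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the chance of scoring at least
    `correct` out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical session: 16 trials of each kind.
real_correct = 9       # hits on the actual comparison
degraded_correct = 15  # hits when one photo was deliberately degraded (positive control)
identical_hits = 7     # "hits" when the two photos were identical (negative control)

print(f"real comparison:   p = {abx_p_value(real_correct, 16):.3f}")      # ~0.402, no evidence of a difference
print(f"degraded control:  p = {abx_p_value(degraded_correct, 16):.4f}")  # ~0.0003, the method can catch a real difference
# With identical stimuli there is no right answer, so the hit rate
# should hover around 0.5; a big drift either way flags a biased method.
print(f"identical control: hit rate = {identical_hits / 16:.2f}")
```

If the degraded-photo control fails to reach significance, it is the method rather than the subjects that is suspect, which is exactly the "test for the test" being described.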
Properly conducted ABX testing can become extremely tedious, for sure. If we are to use science & good engineering practice, let's not assume that it must work for audio but test that assertion. ABX testing needs to be able to prove negative & positive results; otherwise it's like asking if God exists & drawing up a test where he/she is invited to reveal themself. If God shows up, it proves the positive, but if God doesn't show, does it prove non-existence? What if God always declines party invitations?
Please, someone point me to a paper where deliberately introduced distortions have been used & heard by test subjects & I'll shut the hell up about ABX, or else point out the flaws in my arguments.
I thought that with ABX testing you did not know what was being tested, and that the WHF tests were just blind, not ABX?
I don't think you can compare audio with lenses. I've seen many lens tests but never any ABX tests; what's the point when you can just compare the images and look at lots of measurements?