ABX Testing


BigH

Well-known member
busb said:
davedotco said:
If you read my description of what and how ABX testing is actually designed to work, you can see that it is totally effective. It is the scientific method reduced to its most basic; it is how the world works!

It is worth pointing out that if no difference can be heard, the concept of better or worse has no meaning.

However, tests of this type are time-consuming and expensive to conduct, and the hi-fi industry has no interest in setting them up, for obvious reasons. Companies like Harman are known to use them but do not often publish the results.

One of the most enlightening experiences is, in my view, taking part in a third-party blind test. There is no need for it to be rigorous enough to satisfy scientific scrutiny, just a simple, level-matched test where the specific components are not known to the listener.

I have been involved in a number of such tests, both as operator and listener, and, to be honest, the results are startling, pretty much every time!

Dave, I have taken part in fairly informal ABX tests myself. Due to the nature of what was being compared, I had expectation bias from the start. However, I got a free lunch out of WHF & a fascinating day in Teddington. If I didn't believe that ABX testing was pointless, I would suggest that the recording of subjects' conclusions be conducted on paper; otherwise, as soon as others state they hear a difference, no one wants to seem cloth-eared by saying they couldn't, due to peer pressure.

My beef with ABX tests is the assumption that just because the method works very well for stuff like comparing camera lenses, it must work for audio. I dispute that is the case & want proof that it's effective. Am I being unreasonable?

Let's take the lens example, where subjects are asked if they can see any differences between the photos taken on 2 different models where everything else is equal. Let's conjecture that the results were very inconclusive (statistically insignificant) & that 2 possibilities existed. The 1st was that any differences were undetectable, but one manufacturer argued that the test method was flawed, so proposed a test for the test. That additional test involved degrading one of 2 otherwise identical photos, then repeating the test to see if the subjects could spot the differences. If they couldn't, the test method itself was dubious (this ain't no scientific paper, so we have to ignore the degree of degradation before a threshold is reached). Conversely, if the distribution of results was wide but random, identical photos could be slipped in to see if the distribution converged. These secondary sequences weed out erroneous answers & prove the method, or not.

Properly conducted ABX testing can become extremely tedious, for sure. If we are to use science & good engineering practice, let's not assume that it must work for audio but test that assertion. ABX testing needs to be able to prove negative & positive results, otherwise it's like asking if God exists & drawing up a test where he/she is invited to reveal themselves. If God shows up it proves the positive, but if God doesn't show, does it prove non-existence? What if God always declines party invitations?

Please, someone point me to a paper where deliberately introduced distortions have been used & heard by test subjects, & I'll shut the hell up about ABX, or point out the flaws in my arguments.

I thought that with ABX testing you did not know what was being tested, and that the WHF tests were just blind, not ABX?

I don't think you can compare audio with lenses. I've seen many lens tests but never any ABX tests; what's the point when you can just compare images and look at lots of measurements?
 

davedotco

New member
BigH said:
busb said:
davedotco said:
If you read my description of what and how ABX testing is actually designed to work, you can see that it is totally effective. It is the scientific method reduced to its most basic; it is how the world works!

It is worth pointing out that if no difference can be heard, the concept of better or worse has no meaning.

However, tests of this type are time-consuming and expensive to conduct, and the hi-fi industry has no interest in setting them up, for obvious reasons. Companies like Harman are known to use them but do not often publish the results.

One of the most enlightening experiences is, in my view, taking part in a third-party blind test. There is no need for it to be rigorous enough to satisfy scientific scrutiny, just a simple, level-matched test where the specific components are not known to the listener.

I have been involved in a number of such tests, both as operator and listener, and, to be honest, the results are startling, pretty much every time!

Dave, I have taken part in fairly informal ABX tests myself. Due to the nature of what was being compared, I had expectation bias from the start. However, I got a free lunch out of WHF & a fascinating day in Teddington. If I didn't believe that ABX testing was pointless, I would suggest that the recording of subjects' conclusions be conducted on paper; otherwise, as soon as others state they hear a difference, no one wants to seem cloth-eared by saying they couldn't, due to peer pressure.

My beef with ABX tests is the assumption that just because the method works very well for stuff like comparing camera lenses, it must work for audio. I dispute that is the case & want proof that it's effective. Am I being unreasonable?

Let's take the lens example, where subjects are asked if they can see any differences between the photos taken on 2 different models where everything else is equal. Let's conjecture that the results were very inconclusive (statistically insignificant) & that 2 possibilities existed. The 1st was that any differences were undetectable, but one manufacturer argued that the test method was flawed, so proposed a test for the test. That additional test involved degrading one of 2 otherwise identical photos, then repeating the test to see if the subjects could spot the differences. If they couldn't, the test method itself was dubious (this ain't no scientific paper, so we have to ignore the degree of degradation before a threshold is reached). Conversely, if the distribution of results was wide but random, identical photos could be slipped in to see if the distribution converged. These secondary sequences weed out erroneous answers & prove the method, or not.

Properly conducted ABX testing can become extremely tedious, for sure. If we are to use science & good engineering practice, let's not assume that it must work for audio but test that assertion. ABX testing needs to be able to prove negative & positive results, otherwise it's like asking if God exists & drawing up a test where he/she is invited to reveal themselves. If God shows up it proves the positive, but if God doesn't show, does it prove non-existence? What if God always declines party invitations?

Please, someone point me to a paper where deliberately introduced distortions have been used & heard by test subjects, & I'll shut the hell up about ABX, or point out the flaws in my arguments.

I thought that with ABX testing you did not know what was being tested, and that the WHF tests were just blind, not ABX?

I don't think you can compare audio with lenses. I've seen many lens tests but never any ABX tests; what's the point when you can just compare images and look at lots of measurements?

ABX testing is quite specific and controlled, nothing remotely like the informal 'blind' testing carried out by WHF among others.

It is rather difficult to convince anyone of the validity of ABX testing when they actually have no idea what it is and have never taken part in one.

I actually went to the trouble, earlier in the thread, of explaining precisely what ABX testing is and how it works, yet people still do not understand it but feel free to comment on it.
 

BigH

Well-known member
drummerman said:
'AB ... ' tests: a favourite buzzword, and yet most have never participated in one.

I am sceptical as to their accuracy in certain circumstances.

Perhaps if done in familiar surroundings, on a familiar system, with familiar music, they have relevance.

In any other circumstances, expectation pressure, anxiety/stress and too many unknown factors will probably distort any result to unusable levels, especially if it concerns subtle differences such as cables and perhaps even digital sources.

I did one many moons ago, and the 'stress' of finding the place, the excitement of meeting certain people and the totally unfamiliar surroundings/system/music etc will probably have made a mockery of the result.

This is why reviewers have reference systems, treated rooms etc as known constants to assess products. It's easy to make fun of reviewers, but they have one up on most of us when it comes to finding differences, often 'hifiesque' language and personal preferences notwithstanding.

That's a fair point, but they are not blind tests: they know what they are testing, and the price?

Also, that room is nothing like a normal living room; no wonder they get different results from users. The problem is that most manufacturers design products for a living room, not a fully treated room. One was saying the other week that the WHF room may favour brighter-sounding speakers, so when they say a speaker sounds dull, that may be the reason.
 

The_Lhc

Well-known member
busb said:
davedotco said:
If you read my description of what and how ABX testing is actually designed to work, you can see that it is totally effective. It is the scientific method reduced to its most basic; it is how the world works!

It is worth pointing out that if no difference can be heard, the concept of better or worse has no meaning.

However, tests of this type are time-consuming and expensive to conduct, and the hi-fi industry has no interest in setting them up, for obvious reasons. Companies like Harman are known to use them but do not often publish the results.

One of the most enlightening experiences is, in my view, taking part in a third-party blind test. There is no need for it to be rigorous enough to satisfy scientific scrutiny, just a simple, level-matched test where the specific components are not known to the listener.

I have been involved in a number of such tests, both as operator and listener, and, to be honest, the results are startling, pretty much every time!

Dave, I have taken part in fairly informal ABX tests myself. Due to the nature of what was being compared, I had expectation bias from the start. However, I got a free lunch out of WHF & a fascinating day in Teddington. If I didn't believe that ABX testing was pointless, I would suggest that the recording of subjects' conclusions be conducted on paper; otherwise, as soon as others state they hear a difference, no one wants to seem cloth-eared by saying they couldn't, due to peer pressure.

My beef with ABX tests is the assumption that just because the method works very well for stuff like comparing camera lenses, it must work for audio. I dispute that is the case & want proof that it's effective. Am I being unreasonable?

Let's take the lens example, where subjects are asked if they can see any differences between the photos taken on 2 different models where everything else is equal. Let's conjecture that the results were very inconclusive (statistically insignificant) & that 2 possibilities existed. The 1st was that any differences were undetectable, but one manufacturer argued that the test method was flawed, so proposed a test for the test. That additional test involved degrading one of 2 otherwise identical photos, then repeating the test to see if the subjects could spot the differences. If they couldn't, the test method itself was dubious (this ain't no scientific paper, so we have to ignore the degree of degradation before a threshold is reached). Conversely, if the distribution of results was wide but random, identical photos could be slipped in to see if the distribution converged. These secondary sequences weed out erroneous answers & prove the method, or not.

Properly conducted ABX testing can become extremely tedious, for sure. If we are to use science & good engineering practice, let's not assume that it must work for audio but test that assertion. ABX testing needs to be able to prove negative & positive results, otherwise it's like asking if God exists & drawing up a test where he/she is invited to reveal themselves. If God shows up it proves the positive, but if God doesn't show, does it prove non-existence? What if God always declines party invitations?

Please, someone point me to a paper where deliberately introduced distortions have been used & heard by test subjects, & I'll shut the hell up about ABX, or point out the flaws in my arguments.

 

Your lens test argument is completely flawed: if all you're doing is looking at two different pictures, you don't even have proof that two different lenses were used to take them.

And I don't understand why you expect distortion to be deliberately added to an audio test. That doesn't make any sense.

Google the methodology for ABX testing and then explain why you think it wouldn't work. Only one parameter is being changed; if no difference is heard, that's because there is no difference. I don't understand why that's such a difficult concept to grasp.
 

manicm

Well-known member
pauln said:
Some claim that ABX testing is flawed because it puts people under 'stress' - the only stress I can see is that some people may be in fear of being found out. If differences are described as being like 'night and day' or 'a veil being lifted', then one would expect that those differences could easily be discerned in virtually any situation.

For the record, I could distinguish between a high-DR and a lower-DR remaster of the same track, but not between a FLAC and a 320kbps MP3 version of a track. (Ripped and compressed from CD on my laptop and listened to with Sennheiser HD650 headphones via an ODAC)

From the few times I've tried it, it's bloody exhausting, if not stressful. I lost patience after 10 minutes, which then begs the question: how useful is it for some people? For me personally it's not practical. I'd rather listen blind to a system for 10 minutes of uninterrupted music, followed by another system playing the same or different music for another 10 minutes. Yes, memory may be an issue, but I'd know when I was enjoying myself more.
 

busb

Well-known member
BigH said:
busb said:
davedotco said:
If you read my description of what and how ABX testing is actually designed to work, you can see that it is totally effective. It is the scientific method reduced to its most basic; it is how the world works!

It is worth pointing out that if no difference can be heard, the concept of better or worse has no meaning.

However, tests of this type are time-consuming and expensive to conduct, and the hi-fi industry has no interest in setting them up, for obvious reasons. Companies like Harman are known to use them but do not often publish the results.

One of the most enlightening experiences is, in my view, taking part in a third-party blind test. There is no need for it to be rigorous enough to satisfy scientific scrutiny, just a simple, level-matched test where the specific components are not known to the listener.

I have been involved in a number of such tests, both as operator and listener, and, to be honest, the results are startling, pretty much every time!

Dave, I have taken part in fairly informal ABX tests myself. Due to the nature of what was being compared, I had expectation bias from the start. However, I got a free lunch out of WHF & a fascinating day in Teddington. If I didn't believe that ABX testing was pointless, I would suggest that the recording of subjects' conclusions be conducted on paper; otherwise, as soon as others state they hear a difference, no one wants to seem cloth-eared by saying they couldn't, due to peer pressure.

My beef with ABX tests is the assumption that just because the method works very well for stuff like comparing camera lenses, it must work for audio. I dispute that is the case & want proof that it's effective. Am I being unreasonable?

Let's take the lens example, where subjects are asked if they can see any differences between the photos taken on 2 different models where everything else is equal. Let's conjecture that the results were very inconclusive (statistically insignificant) & that 2 possibilities existed. The 1st was that any differences were undetectable, but one manufacturer argued that the test method was flawed, so proposed a test for the test. That additional test involved degrading one of 2 otherwise identical photos, then repeating the test to see if the subjects could spot the differences. If they couldn't, the test method itself was dubious (this ain't no scientific paper, so we have to ignore the degree of degradation before a threshold is reached). Conversely, if the distribution of results was wide but random, identical photos could be slipped in to see if the distribution converged. These secondary sequences weed out erroneous answers & prove the method, or not.

Properly conducted ABX testing can become extremely tedious, for sure. If we are to use science & good engineering practice, let's not assume that it must work for audio but test that assertion. ABX testing needs to be able to prove negative & positive results, otherwise it's like asking if God exists & drawing up a test where he/she is invited to reveal themselves. If God shows up it proves the positive, but if God doesn't show, does it prove non-existence? What if God always declines party invitations?

Please, someone point me to a paper where deliberately introduced distortions have been used & heard by test subjects, & I'll shut the hell up about ABX, or point out the flaws in my arguments.

I thought that with ABX testing you did not know what was being tested, and that the WHF tests were just blind, not ABX?

I don't think you can compare audio with lenses. I've seen many lens tests but never any ABX tests; what's the point when you can just compare images and look at lots of measurements?

I think there are degrees of ABX testing, from the fairly informal to full-on, where neither the facilitator nor the subjects know what's being tested, to the extent that you may even have a quartet playing behind acoustically transparent curtains. The most important premise is that you play A, then B, then either one as X, without any participant knowing which. This could be entirely automated, with the facilitator pressing a button for the next sequence and any required switching being under computer control. Making the facilitator equally unaware eliminates any question of subliminal bias from him or her - hence double-blind.
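As an illustration only, the computer-controlled sequencing described above is easy to sketch in Python. This is a hypothetical outline, not anyone's actual rig: the play_a/play_b callables are stand-ins for whatever software or relay switching is really used, and the random module, not a person, decides what X is, so neither listener nor operator knows.

import random

def run_abx(play_a, play_b, trials=16):
    # Double-blind ABX session: A and B are always played as references,
    # then X (secretly A or B) is chosen by the program, not the operator.
    correct = 0
    for t in range(1, trials + 1):
        play_a()                            # reference A, identity known
        play_b()                            # reference B, identity known
        x_is_a = random.choice([True, False])
        (play_a if x_is_a else play_b)()    # X, identity hidden from everyone
        answer = input(f"Trial {t}: was X a or b? ").strip().lower()
        if answer == ("a" if x_is_a else "b"):
            correct += 1
    print(f"{correct}/{trials} correct")
    return correct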

The idea of using the analogy of lenses was to try to illustrate different approaches for different types of testing. In a strict sense we could show people photo A, then photo B, remove them, then show them photo X and ask whether what they'd seen was A or B. They can only see one photo at a time; this method would be a direct analogue of audio. However, it would be far more useful just to compare the 2 photos together, otherwise it becomes more a test of memory than visual acuity!

That highlights my point with audio ABX - it's a memory test!! Are other people not even slightly curious as to why nearly every formal audio ABX test has thrown up negative results & rarely positive ones? We know that audio memory isn't exactly great, yet we persist in proving it with what I suspect are erroneous tests.
 

busb

Well-known member
The_Lhc said:
busb said:
davedotco said:
If you read my description of what and how ABX testing is actually designed to work, you can see that it is totally effective. It is the scientific method reduced to its most basic; it is how the world works!

It is worth pointing out that if no difference can be heard, the concept of better or worse has no meaning.

However, tests of this type are time-consuming and expensive to conduct, and the hi-fi industry has no interest in setting them up, for obvious reasons. Companies like Harman are known to use them but do not often publish the results.

One of the most enlightening experiences is, in my view, taking part in a third-party blind test. There is no need for it to be rigorous enough to satisfy scientific scrutiny, just a simple, level-matched test where the specific components are not known to the listener.

I have been involved in a number of such tests, both as operator and listener, and, to be honest, the results are startling, pretty much every time!

Dave, I have taken part in fairly informal ABX tests myself. Due to the nature of what was being compared, I had expectation bias from the start. However, I got a free lunch out of WHF & a fascinating day in Teddington. If I didn't believe that ABX testing was pointless, I would suggest that the recording of subjects' conclusions be conducted on paper; otherwise, as soon as others state they hear a difference, no one wants to seem cloth-eared by saying they couldn't, due to peer pressure.

My beef with ABX tests is the assumption that just because the method works very well for stuff like comparing camera lenses, it must work for audio. I dispute that is the case & want proof that it's effective. Am I being unreasonable?

Let's take the lens example, where subjects are asked if they can see any differences between the photos taken on 2 different models where everything else is equal. Let's conjecture that the results were very inconclusive (statistically insignificant) & that 2 possibilities existed. The 1st was that any differences were undetectable, but one manufacturer argued that the test method was flawed, so proposed a test for the test. That additional test involved degrading one of 2 otherwise identical photos, then repeating the test to see if the subjects could spot the differences. If they couldn't, the test method itself was dubious (this ain't no scientific paper, so we have to ignore the degree of degradation before a threshold is reached). Conversely, if the distribution of results was wide but random, identical photos could be slipped in to see if the distribution converged. These secondary sequences weed out erroneous answers & prove the method, or not.

Properly conducted ABX testing can become extremely tedious, for sure. If we are to use science & good engineering practice, let's not assume that it must work for audio but test that assertion. ABX testing needs to be able to prove negative & positive results, otherwise it's like asking if God exists & drawing up a test where he/she is invited to reveal themselves. If God shows up it proves the positive, but if God doesn't show, does it prove non-existence? What if God always declines party invitations?

Please, someone point me to a paper where deliberately introduced distortions have been used & heard by test subjects, & I'll shut the hell up about ABX, or point out the flaws in my arguments.

Your lens test argument is completely flawed: if all you're doing is looking at two different pictures, you don't even have proof that two different lenses were used to take them.

And I don't understand why you expect distortion to be deliberately added to an audio test. That doesn't make any sense.

Google the methodology for ABX testing and then explain why you think it wouldn't work. Only one parameter is being changed; if no difference is heard, that's because there is no difference. I don't understand why that's such a difficult concept to grasp.

The problem with analogies is that they can & do break down. If I understand your lens test objection, you are saying that the subjects only have the word of those carrying out the tests rather than proof. The whole process of using the 2 lenses could be videoed for verification if challenged.

The closest approach for the lens analogy would be to show pic A, remove it, show pic B, remove it, then show pic X, which is either A or B, & ask which one it is. That would be nuts, because it has become a test of visual memory; we could just present both pics together & ask if anyone could tell them apart.

That's the crux of my objection to even scrupulously conducted DB ABX tests: they effectively become a damn memory test! Audio happens in time, photos in space. I'm willing to bet that visual memory is more persistent than audio memory. ABX testing for audio is asking the impossible.

My point in suggesting also adding forms of distortion is to weed out test subjects who are trying to spoil the results, either on purpose or subliminally. If you invited a bunch of sceptics to an audio test, do you think they would suddenly think "Hell, I can tell that there's a difference after all!"? Their results would probably have a random distribution. How about subjects who had impaired hearing? Their results would probably be pretty random as well; both groups would effectively be guessing. If we included sequences of A & A with quantifiable defects, the distribution of results should rise beyond random for those test sequences, because the defects are real & not imagined (& measurable, therefore repeatable). So, if it turned out that the distribution of results for the deliberately distorted sequences was the same as for, say, tests of cable A & cable B, it would prove that the whole method is flawed. I'm personally very sceptical of any test that can't distinguish between positive, negative & null results. The added distortions could cover a number of different aspects, such as channel imbalance, an increased noise floor, added harmonic distortion etc, in varying degrees. To recap: if test subjects can't tell when music has been doctored deliberately, what chance have they got of detecting fairly subtle differences in amplifiers or cables?
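To make the proposed 'test for the test' concrete, here is a rough, hypothetical sketch of how such hidden control trials might be scored; the trial counts are made up for illustration, and the only point is that known-distorted pairs should be detected at well above chance rate if the method works:

from math import comb

def p_at_least(correct, trials, p_guess=0.5):
    # Probability of getting at least `correct` answers right out of
    # `trials` by pure guessing (binomial tail).
    return sum(comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
               for k in range(correct, trials + 1))

# Suppose 20 hidden control trials (A versus a deliberately distorted A)
# are mixed into the session. If listeners get 17 of 20 right, guessing
# is an implausible explanation, so the method itself passes the check:
print(p_at_least(17, 20))   # ~0.0013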

I am suggesting ABX testing is fairly pointless, but am trying to offer a test for its effectiveness. If I'm right, we need to rethink how we could test. Now, if I (or someone else) can find proof that what I've suggested has already been done scientifically, I'll acknowledge that I've been fooled, not just some of the time but every time I thought I heard a difference, despite often NOT being able to tell a difference!
 

lpv

New member
davedotco said:
lpv said:
davedotco said:
If you read my description of what and how ABX testing is actually designed to work, you can see that it is totally effective. It is the scientific method reduced to its most basic; it is how the world works!

It is worth pointing out that if no difference can be heard, the concept of better or worse has no meaning.

However, tests of this type are time-consuming and expensive to conduct, and the hi-fi industry has no interest in setting them up, for obvious reasons. Companies like Harman are known to use them but do not often publish the results.

One of the most enlightening experiences is, in my view, taking part in a third-party blind test. There is no need for it to be rigorous enough to satisfy scientific scrutiny, just a simple, level-matched test where the specific components are not known to the listener.

I have been involved in a number of such tests, both as operator and listener, and, to be honest, the results are startling, pretty much every time!

Any chance you can describe the results of one or two of the most interesting/shocking/startling ones?

The first is quite famous; it took place in 1978. I was not involved, but several journalists and industry figures were. It involved a comparative test of 3 very different amplifiers, 'known' to have very different sounds and to have generated many articles and discussions about the merits of each. It is easily researched.

The first was a classic valve push-pull design delivering around 8-12 watts depending on the distortion level considered acceptable, the second a regular class AB solid-state design delivering up to 45 watts, and the third an innovative (for its day) current-dumping design with 100 watts.

A system was set up using revealing electrostatic loudspeakers and levels were carefully matched. At normal (non-overload) levels, no one could tell which amplifier was playing, despite pretty much everyone present being aware of the 'known' differences.

Secondly, around the same time, I was involved in a number of blind tests carried out for the early editions of Hi-Fi Choice. These were not scientifically rigorous tests; we listened and discussed as a group, and in most cases knew what items were being tested but not the order in which they were being played.

Once again the shocking result was just how difficult it was to hear differences between amplifiers, I mean really difficult, despite knowing that, in 'normal use', some of the amplifiers were as different as 'chalk and cheese', to use the terminology of the day.

What was even more startling was when the testing was of loudspeakers. Levels were matched using an SPL meter, with the result that several expensive, highly regarded models sounded no better than some much cheaper ones. This caused some consternation; everybody knew that speakers made a big difference, but the tests told us a very different story. Sure, there were a handful of designs that stood out, usually by being quite poor, but the difference between competent designs of a similar type was a lot less than you might think.

Again, these early Hi-Fi Choice 'Group Tests' are researchable online.

Cheers Dave... it's not difficult to do an ABX test at home.
 

davedotco

New member
You really are clutching at straws here.

First of all, an ABX test is not a memory test; in the case of comparing cables, the switching can be instantaneous and under the control of the listener.

Wilful 'sabotage' by 'sceptics' or otherwise is possible, but this is true of any comparative testing. If the test is for serious scientific purposes (rather than hi-fi), then the participants will be randomised.

The ABX is simply the best test available to determine whether a real difference exists or not. The methodology can be set up in various ways, but the simplest is usually the best. Adding distortion is another nonsense; ABX deals with one variable at a time, not two or more.

Suggesting that the tests are unreliable because they can be 'rigged' is irrelevant, possibly paranoid, and why would you conduct audio tests with subjects you know to be hearing-impaired? Ridiculous.

ABX is not a qualitative test; it is not asking 'which is better', so there are no 'positive' or 'negative' results as you suggest, simply yes or no. The statistical analysis is child's play.
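For what it's worth, the statistics really are elementary: under the null hypothesis that the listener is guessing, each trial is a fair coin flip, so the score follows a binomial distribution. A quick illustrative calculation (the 12-of-16 figure is just an example, not a quoted standard):

from math import comb

# Chance of scoring 12 or more out of 16 ABX trials by guessing alone:
p = sum(comb(16, k) for k in range(12, 17)) / 2**16
print(p)   # ~0.038, so 12/16 would usually be taken as significant at the 5% level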

You appear to believe that the whole scientific community is, in this instance, trying to con you. Why on earth would they do that? This is not the climate change lobby trying to justify its funding; there is no money in hi-fi testing.
 

RobinKidderminster

New member
Reviewers and manufacturers claim huge differences between equipment - deep bass, sweet treble, increased soundstage etc. Yet we are talking about stringent scientific testing to establish whether our own hearing can detect the smallest differences between pieces of equipment. There has to be some humour here?
 


BigH

Well-known member
What it does seem to show is that in blind tests no one can hear those night-and-day differences.

I do agree some tests are better than others; I prefer the ones where you can switch between the different samples. In some of the MP3 vs CD/FLAC tests the samples are too long and you can't just switch between them; why do I want to hear a minute of music over and over again when the critical part may be at 50 seconds?

The point is, if you have difficulty picking out a difference when listening side by side, could you pick out which is which in your own system?
 

BigH

Well-known member
RobinKidderminster said:
Reviewers and manufacturers claim huge differences between equipment - deep bass, sweet treble, increased soundstage etc. Yet we are talking about stringent scientific testing to establish whether our own hearing can detect the smallest differences between pieces of equipment. There has to be some humour here?

Of course they do, they are selling a product; they are hardly going to say it's almost the same as the last model. It's called marketing, I believe. They are all in it together; ever wondered why there are so few bad reviews? Nearly everything gets 4 or 5 stars when really a lot of those should be only 3 stars, but then 3 stars don't sell, and what would Richers do?
 

spiny norman

New member
BigH said:
Of course they do, they are selling a product; they are hardly going to say it's almost the same as the last model. It's called marketing, I believe. They are all in it together; ever wondered why there are so few bad reviews? Nearly everything gets 4 or 5 stars when really a lot of those should be only 3 stars, but then 3 stars don't sell, and what would Richers do?

Little grey men, lizard people, secret world government, second shooter, etc.
 

Craig M.

New member
busb said:
So, can someone please provide links to scientifically verifiable data showing that DB ABX testing for audible differences works, rather than merely assuming it does? Hopefully such proof will include raw data rather than just interpretation.

I can't provide a link because I'm not sure it's hosted any more, but I read a report (I found the link on Head-Fi, I think, a few years ago) that was aimed at testing the audibility threshold of jitter. IIRC the author was contacted by someone from an audio forum who suggested that ABX wasn't the best method and that he would get better results if the listeners had longer periods to get used to the sound, or something like that. So he decided to test whether ABX or longer listening sessions were better for detecting small differences. I can't remember how many people took part, but they included students, musicians and audiophiles, and they were divided into 2 groups: one group would do ABX and the other would listen for 2 weeks or so. The aim was to listen to their own systems at home with a 'black box' inserted somewhere (I can't remember where) that had a switch; one switch position did nothing, the other added a small amount of distortion, and the task was to determine which position was which. The ABX lot, with rapid switching every 10 seconds or so, were able to quickly determine which was which and could identify it blind; the long-term listeners scored no better than guessing.

You may be able to find copies of the report, as it was done (I think) for the Japanese Audio Engineering Society. I also recall a guy called Sergeauckland (on HiFi Wigwam) had copies of it that he was happy to email. The original link I followed from Head-Fi didn't work the last time I looked, which was probably a couple of years ago. I'm not sure it's any kind of definitive proof, but I did spend a while looking for tests of ABX itself for audio purposes and it was all I could find.
 

busb

Well-known member
Craig M. said:
busb said:
So, can someone please provide links to scientifically verifiable data showing that DB ABX testing for audible differences works, rather than merely assuming it does? Hopefully such proof will include raw data rather than just interpretation.

I can't provide a link because I'm not sure it's hosted any more, but I read a report (I found the link on Head-Fi, I think, a few years ago) that was aimed at testing the audibility threshold of jitter. IIRC the author was contacted by someone from an audio forum who suggested that ABX wasn't the best method and that he would get better results if the listeners had longer periods to get used to the sound, or something like that. So he decided to test whether ABX or longer listening sessions were better for detecting small differences. I can't remember how many people took part, but they included students, musicians and audiophiles, and they were divided into 2 groups: one group would do ABX and the other would listen for 2 weeks or so. The aim was to listen to their own systems at home with a 'black box' inserted somewhere (I can't remember where) that had a switch; one switch position did nothing, the other added a small amount of distortion, and the task was to determine which position was which. The ABX lot, with rapid switching every 10 seconds or so, were able to quickly determine which was which and could identify it blind; the long-term listeners scored no better than guessing.

You may be able to find copies of the report, as it was done (I think) for the Japanese Audio Engineering Society. I also recall a guy called Sergeauckland (on HiFi Wigwam) had copies of it that he was happy to email. The original link I followed from Head-Fi didn't work the last time I looked, which was probably a couple of years ago. I'm not sure it's any kind of definitive proof, but I did spend a while looking for tests of ABX itself for audio purposes and it was all I could find.

Thanks - some interesting points you raise. I've just had a quick look at the link in your sig & will return to it. This may be of interest to anyone not completely bored by the subject:

https://en.wikipedia.org/wiki/ABX_test

Being wiki, it has extensive links to statistical analysis as well as highlighting some of the flaws. The whole subject of ABX testing isn't as cut & dried as some here would have us believe.

While researching, I'll try to find references to threshold tests that may well throw doubt on the idea that ABX is ineffective. I'm quite aware that trusting my hearing is far from reliable, so I tend to be sceptical by default & repeat tests if interested enough, such as putting back an "inferior" cable (as an example) to find out if the SQ drops. If it doesn't, I was simply mistaken, as has been the case. I'm also quite aware of expectation bias, peer pressure etc. I also know there is a need for objective tests - as long as they are effective. Saying "Well, ABX tests are the best we have!" is hardly good science. I do agree with Dave when he says that switching needs to be near instantaneous, which invalidates most informal tests. As for testing at home, Foobar2000 can do that.

As an aside, I don't find every thread here of interest, so I simply ignore them.
 

Craig M.

New member
Personally I prefer A/X: if you take cables as an example, you listen to cable 'a' (it doesn't matter if you know which cable it is at this point) and then you listen to 'x' (which is heard blind and could be 'a' again or the new cable), and all you have to do is say whether it's the same or different. I find ABX tests boring and hard work, so anything that makes it quicker is OK by me; it cuts down on the memory aspect of it too.
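A same/different run like that is equally easy to automate. Again a purely illustrative Python sketch, with play_a/play_b standing in for whatever actual switching or playback is used:

import random

def run_ax(play_a, play_b, trials=12):
    # A/X (same-different) protocol: X is either A again or B, chosen at
    # random; the listener only ever reports "same" or "different".
    correct = 0
    for t in range(1, trials + 1):
        play_a()                            # known reference
        x_same = random.choice([True, False])
        (play_a if x_same else play_b)()    # X, identity hidden
        answer = input(f"Trial {t}: same or different? ").strip().lower()
        if answer == ("same" if x_same else "different"):
            correct += 1
    return correct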

As an aside, I was in the 'believer' camp until I made the mistake of trying to prove to someone I could hear a difference by doing so blind. *blush*
 

davedotco

New member
Craig M. said:
Personally I prefer A/X: if you take cables as an example, you listen to cable 'a' (it doesn't matter if you know which cable it is at this point) and then you listen to 'x' (which is heard blind and could be 'a' again or the new cable), and all you have to do is say whether it's the same or different. I find ABX tests boring and hard work, so anything that makes it quicker is OK by me; it cuts down on the memory aspect of it too.

As an aside, I was in the 'believer' camp until I made the mistake of trying to prove to someone I could hear a difference by doing so blind. *blush*

A Damascene moment there, one that many people have had and many more should.

Leaving the rigorous science and methodology aside, a third-party-operated blind test (just get someone else to make the changes for you) with the levels carefully matched will show just how tiny the differences between cables or hi-fi electronics really are.

It quickly teaches you that the differences you hear when auditioning are often not what you think they are; I find it brings a strong sense of reality to the proceedings.
 
