QuestForThe13thNote said:
But I'm not sure about patterns emerging beyond chance.
The chance of any one cable group being ranked last is 1/5 on a random basis, which isn't hugely improbable.
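To put rough numbers on that (a minimal sketch; the repeat counts below are my own, not from the article):

```python
# In one random ranking of 5 cables, a specific cable comes last
# with probability 1/5 -- and note that SOME cable must come last,
# so a single last place proves nothing on its own. It only gets
# improbable when the same cable repeats the result across
# independent rankings.
for k in (1, 2, 3, 4):
    print(f"same cable last in {k} independent rankings: p = {(1/5) ** k:.4f}")
```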
But in a proper scientific test, with such a small number of samples, they would also have run a control condition: speaker cables and interconnects all at the same low price but from different makes, to see whether people would mark them similarly or not. Either way, you could then compare those results against the results of the test they actually did, using a t-test, to work out whether the two sets of data are statistically significantly different.
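For illustration, that comparison might look something like this in Python; the scores below are made up purely to show the mechanics, not the magazine's data:

```python
from scipy import stats

# Hypothetical marks out of 10 -- NOT the article's data.
control_scores = [6.1, 5.8, 6.3, 5.9, 6.0]  # same-price budget cables, different makes
tested_scores  = [7.2, 5.1, 8.0, 6.6, 7.5]  # the cables actually reviewed

t_stat, p_value = stats.ttest_ind(control_scores, tested_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p (conventionally < 0.05) would suggest the two sets of
# marks genuinely differ; a large p means chance alone could explain it.
```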
The other way they could have done it is to use each cable in turn as the control. If one of the controls was left out, say the Siltech, and replaced by one of the other cables, you could see the effect on the rankings, as the Siltech could conceivably have scored higher; by fixing its mark at 10 rather than potentially more, the results are affected. With each cable serving as the control in turn, you'd have applied the same system across every cable. But they did say they didn't have time, which is why you need time and resources to do these tests properly.
There are also so many variables, like construction and the electrical properties of the wire, that would have to be similar or accounted for, either by testing within the test or by limiting the cables to certain properties. There are too many variables to draw reliable inferences and conclusions from the results they have. You might start with a test of copper purity and its effect on sound; then, if those results showed that cables of the same purity can still be discerned, you could reliably choose relevant cables like those in this magazine. But if you haven't excluded a variable, you can't draw a reliable conclusion. If, for argument's sake, we selected a cable with the same properties as the Valhalla, constructed similarly but much cheaper, could we then say the Valhalla rightly earns its place through expense if this similar cheaper cable ranked better? That is what the article tries to infer and seems to be its hypothesis. I don't think we can, but that comes back to not all expensive cables necessarily being best.
So I agree with you, subjectively, on the first bullet point, but I don't think you can place any reliance on anything else.
- I think the aim of the test was kept simple, i.e.
a) Do cables sound different, in a way that can be identified when listening blind?
b) Are more expensive cables preferable to cheaper cables?....as "different" may not be "preferable" or "better".
- Too many cables would make the test too cumbersome....so what they needed were cables spanning a big price range....and, to make it interesting, different constructions. Cable sceptics will tell you that fancy constructions are just snake oil to justify the high price....and to a degree, I'd agree that the profit margins on cables are huge.
- I don't agree with you when you say that no patterns emerged above pure chance.
- The Valhalla was a clear favourite, both in terms of points and ranking.
- The QED, both in score and ranking, was far enough below the Valhalla to be more than chance.
- If cables all sounded the same or very similar, the Siltech wouldn't have shown such a consistently abysmal score on the first system (the sketch below shows how unlikely that is by pure chance).
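A rough way to check that, assuming a hypothetical panel of six listeners (the article's actual panel size isn't repeated here, so six is my own assumption):

```python
import random

CABLES, LISTENERS, TRIALS = 5, 6, 100_000  # panel size of 6 is assumed

always_last = 0
for _ in range(TRIALS):
    # each listener produces an independent random ranking (best to worst);
    # take the cable each one placed last
    worst = [random.sample(range(CABLES), CABLES)[-1] for _ in range(LISTENERS)]
    if len(set(worst)) == 1:  # every listener put the same cable last
        always_last += 1

print(f"P(some cable ranked last by all {LISTENERS} listeners) ~= {always_last / TRIALS:.5f}")
# Analytically: 5 * (1/5)**6 = 0.00032. A cable that is consistently
# bottom across listeners is therefore very hard to explain by chance.
```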
What you are looking for is a much more complicated test, looking at the effects of construction, etc.
What I think you can take from the test is:
- There are identifiable differences.
- Expensive cables can sound better....but you can't rely on that being the case.
- Certain cables seem to perform above their price....and some aren't worth their money (compared to what's available for less).