
Just trying to clarify some general thoughts about the testing of equipment........

admin_exported

New member
Aug 10, 2019
2,556
2
0
I would like to know something from the WHF team, and others' opinions..........

I was wondering how similar evolutionary products from manufacturers get such different reviews and receptions.....?

The example that I will use is the difference between the Arcam A85 and the Arcam A90. To my mind (and I could be completely wrong) they are the same amp - all Arcam have done is slightly tweak the power output, by 5 watts - and yet when the A85 came out it was a 5-star product and, obviously, the A90 was not. Similar examples could be drawn from early NAD equipment (and not just in What Hi-Fi): the 3020 [I know I am always going on about that, but I read a lot about NAD when researching which amp to get!] was worshipped, while its derivatives generally received lukewarm reviews.

I know this begins to look like a doubting Thomas question, but it is not. I am just wondering about the factors that go into making reviews different (I think what I and others find hard is that whereas What Hi-Fi have a clear idea of the parameters of the yardsticks they use when testing, I do not - it might almost make a very interesting article to define them, not set in stone, but just a 'Here is what we do.....' article, or maybe 'a day in the life of a tester.....'). So for example, can the difference between the A85/A90 be put down to:

a) things had just moved on in the time between the issue of the A85 and the A90 and the amp sounded dated.

b) there was a difference in price range & that meant that it was pegged at a different class and did not cut the mustard.

c) the testers.... they were different people with different ears and different tastes, therefore there was a slightly different score.

d) Although the Arcam tweak appeared small, it created a different sound and that was that.

e) all of the above.

f) none of the above, you fool; we are sick of answering this type of question, why not go away? You imbecile, jeezus I hate my job because I have to deal with cretins like you and you make me sick (or something like that anyway).
 

Clare Newsome

New member
Jun 4, 2007
1,657
0
0
a) and b) - with possibly a touch of (d), I'd say.

The key thing you've got to keep in mind is that all our tests are comparative, and performance-per-pound. That means a single product - let alone a new release - can change in star rating (up or down) during its lifetime as newer rivals appear and prices rise/fall. It's all about putting products in context in their market.

Think of it in sporting terms - you can achieve a personal best but still be out of medals contention if your fellow competitors have surpassed their own PBs by even more!

And as for (c) - a remarkably consistent team (another important fact - it's never just one person's opinion) for many years. Many members of the team have done 10+ years' service on WHF, and the newest member of our test team has now been here three years!

There's more on how/where we test here.
 

chebby

Well-known member
Jun 2, 2008
1,232
4
19,195
Clare Newsome:There's more on how/where we test here.

Interesting link. Thanks.

Looks like the most significant room (for two channel and even many AV users) is the 'small' room (room 3) given that a significant percentage of your readership are probably enjoying their system in similarly sized rooms.

(Although it is difficult to see exactly how small or otherwise these rooms are without knowing if a wide angle lens was used! Even a cupboard looks big with a 16mm lens.)

A lot of systems (not just budget micro-systems) are capable of magic in a small-to-medium-sized room but have a scale of sound and imaging that retains the same smaller dimensions when placed in a much larger space.

I hope care is taken over the appropriateness of the testing room. (Even a £3000+ system may comprise expensive mini-monitor standmounts with relatively low wattage - but high quality - class A amps so I hope it is not all room allocation by price.)
 
A

Anonymous

Guest
Clare Newsome:
There's more on how/where we test here.

Well that about covers it...... I would be interested in understanding the reference system...... basically I only found out about this in the post where you all talked about your own systems....... is this a system which is considered, if not exactly supreme, a significant yardstick to which other equipment can be compared?

I think that WHF have the strongest reviewing system, but of course it does throw up its own interesting problems.... for example, does a five star winner from one category eclipse a three star winner from the category above.... but, of course, these can all be batted away easily by saying that the test scores are guides and we should all demo our own potential purchases.......
 

Clare Newsome

New member
Jun 4, 2007
1,657
0
0
As well as being a benchmark, the reference system is designed to be as transparent as possible - i.e. so we can clearly see/hear what a test product is capable of, rather than any character of the system itself. But, as mentioned, it's only one weapon in our testing armoury - a range of price-comparable products comes into play, too.

As for ratings - discussed to death here recently, but it's worth reiterating that our Best Buys and Awards are a clear indication of the step-up point for each category.

In other words, to get five stars at a higher price point, a product has to convince us it's worth the premium - that it offers more in performance-per-pound terms - over the best product in the price class below.
 

Clare Newsome

New member
Jun 4, 2007
1,657
0
0
chebby:
Looks like the most significant room (for two channel and even many AV users) is the 'small' room (room 3) given that a significant percentage of your readership are probably enjoying their system in similarly sized rooms.

(Although it is difficult to see exactly how small or otherwise these rooms are without knowing if a wide angle lens was used! Even a cupboard looks big with a 16mm lens.)

A lot of systems (not just budget micro-systems) are capable of magic in a small - medium sized room but have a sound and imaging that retains the same smaller dimensions when placed in a much larger space.

I hope care is taken over the appropriateness of the testing room. (Even a £3000+ system may comprise expensive mini-monitor standmounts with relatively low wattage - but high quality - class A amps so I hope it is not all room allocation by price.)

Yes, a wide-angle lens was used, and yes - of course! - we choose test rooms very carefully. Often a product will get tested in several rooms (acoustically treated and non-treated, too) before we're sure of its worth....
 
A

Anonymous

Guest
I'm a huge believer in Blind listening tests to remove any placebo effect.

After all, if you're a reviewer, let's say, and your favourite manufacturer brings out a "super new" amp, you're bound to get somewhat excited and perhaps already have excellent performance in mind before turning it on. This can surely influence a review to some degree? (Not criticising here - just discussing human nature!)

After all, speakers and amps haven't changed much in 25 years - I should know, both my hi-fi amps are over 20! They still sound fab.
I wonder what review my Audiolab would get if it was accidentally sent in as the new IAG 8000s?
:)

IMHO the sonic difference between 4- and 5-star reviewed products is small. These days there is just so much good equipment about, it's hard to buy a lemon. This is different to, say, the '80s, when there was a lot of toot around masquerading as hi-fi.
 

jaxwired

Well-known member
Feb 7, 2009
283
4
18,895
Clare,

Just read your link about your testing facilities. That's awesome. I have a question about your review process. It is well known that group bias impacts decision making. Do your individual team members write preliminary opinions in isolation prior to discussion?

For example, if I go to a meeting with all the management and the big boss says "I think project A is a great idea, what do you all think", surprisingly everyone agrees (LOL).

I'm just wondering when you all meet as a group to discuss your impressions of a piece of equipment if "group bias" impacts the reviews.

-Jax
 
A

Anonymous

Guest
Agreed about the blind listening, which I think WHF do for some of their tests - cables, I think? Be good if they did it for everything.

I also think that WHF test in specific areas for each product, though the results don't seem to make it into the mag except as general parts of the description. I'm talking about dynamics, bass, stereo separation, voicing, soundstage etc.

What concerns me is one of the photos, which shows three people ensconced on a comfy couch. Do reviewers note down their conclusions first, or is it straight into a general discussion? I'd be concerned if it wasn't the former.

I'm also a bit mystified by the demotion/promotion aspects of the star system - vfm might change, but surely not the sound? Yes, some products might come along later that sound better, but wasn't that what the various awards are meant to be for?
 

chebby

Well-known member
Jun 2, 2008
1,232
4
19,195
jaxwired:For example, if I go to a meeting with all the management and the big boss says "I think project A is a great idea, what do you all think", surprisingly everyone agrees (LOL).



"I didn't get where I was today without always thinking Project A was a great idea."
 

Clare Newsome

New member
Jun 4, 2007
1,657
0
0
jaxwired:
Clare,

Just read your link about your testing facilities. That's awesome. I have a question about your review process. It is well known that group bias impacts decision making. Do your individual team members write preliminary opinions in isolation prior to discussion?

For example, if I go to a meeting with all the management and the big boss says "I think project A is a great idea, what do you all think", surprisingly everyone agrees (LOL).

I'm just wondering when you all meet as a group to discuss your impressions of a piece of equipment if "group bias" impacts the reviews.

-Jax

You haven't met our testers, have you? Happy to argue anything with anyone, us lot! We certainly don't hire shrinking violets who are frightened of expressing an opinion - either individually or collectively.... (You haven't been in our management meetings, either - maybe it's a meeejah thing, but disagreeing with the boss is far more likely than acquiescence.)

But in more direct answer... Yes, reviewers spend time with products individually, as well as in groups, and everyone has time to note their own conclusions before discussion. And plenty of blind testing (eg kit behind the testers, being operated by another) - even with TVs in a dark room, you're looking at the picture, not the badge....

Some products unite opinion, but more often than not there is debate, discussion - sometimes very heated - and lots of re-testing (bringing in extra kit; moving rooms; trying new cabling/stands etc) before the final star rating is decided. And then when it comes to Awards judging, it gets even MORE intense.
 

jaxwired

Well-known member
Feb 7, 2009
283
4
18,895
Clare,

Great response, thanks. I'm reminded of something I've seen in a different magazine that you might consider. This other mag will run a small sidebar box called "Second Opinion" or something like that when there is less than total agreement about a piece of kit, allowing the dissenter a tiny rebuttal within the review. I've always liked that.

-Jax
 

jaxwired

Well-known member
Feb 7, 2009
283
4
18,895
Regarding the policy of star ratings by price category, I've seen this criticised by some people who say "5 stars is 5 stars regardless of price". I personally think the star rating by price is the perfect approach. This approach serves the buying public the best. When people are making a purchase, they need to know what are the best choices for their budget. If you did not utilize ranking by price category, then most 5 stars would go to super hi-end products and star rankings would be much less useful for purchasing decisions.

However, with that said, I think what some people would like to know is the difference between "5 stars at this price" and "5 stars at any price", and the WHF star ratings do not convey that information. Although I think the yearly awards do help in that arena.
 
