I've just come across this thread, and it makes interesting reading. Strangely, I can just about see both sides (I'm sitting on the fence here, methinks).
The downgrading-of-stars argument makes a lot of sense in the context of the market at that moment in time, and is ideal for anybody looking for the latest product. This seemingly keeps the ratings relevant to all current products. It's perfect as a quick-glance way of seeing "what's hot and what's not", although of course, as with any adjustable rating system, it does fall foul of those who feel the need to have all their gear five star rated, otherwise the world will end. But then they have more to fear from their bank managers...
The part where it falls down is where products are replaced/upgraded - and therefore reviewed/retested - at differing intervals, if at all. As has been suggested up the thread, an older device could still show as a five star product when a lot of more recent products have been downgraded from their five stars, simply because this particular device hasn't been retested (I guess if a manufacturer chooses not to re-submit their device for testing, this could easily be the case). Now, to somebody with a few quid to spend on the latest top-of-the-range five star product, this doesn't matter; but to somebody hunting out a bargain, this older device appears to be a five star product when, compared with newer equivalents, it may not even manage a two star review.
I can see exactly what Andrew is getting at in that downgrading products in this context, and en masse, is going to be very difficult. However, I wonder if there may be a tiny get-out. In the occasional group test, where an older product is still available in the category and it achieved a suitably high rating first time out (five star/group test winner etc.), would it be worth including the older product in the test? In this way, as an ongoing course of action, the older products in the Buyers Guide would slowly be re-graded according to the current market, and readers would either see a bargain gem holding its own against newer opposition or, in the case of kit that shows its age, gain an insight into just how far the technology has moved on. Surely this would satisfy all the doubters, and for those of us mere mortals not in a position to have heard all these fantastic pieces of kit, it might just open up new doors (or close some, in the case of kit that has aged badly). Of course, if the manufacturer refuses to supply the kit for review then it makes things a little difficult, but that could be mentioned in the group test and the ratings in the Buyers Guide pulled. The threat of losing their place in the Buyers Guide would surely be enough for a manufacturer to play ball on this one.
The only other way I can see is to highlight group test winners in the Buyers Guide, maybe with different coloured stars, and maybe do the same for Award winners (didn't this happen in years gone by?). Then, for example, a potential buyer in three months' time could look at the Buyers Guide and, from the review dates, ascertain that the AV receivers above were all featured in the same issue, and that the Sony was clearly the winner.
The only way I can see to satisfy the people who must always have the top-rated products is to introduce a larger scale, say 1-10 stars, and hardly ever use 10. With a five star system, people tend to only want four or five star products; with ten stars, people will be happy with anything over six. I'm not a fan of this option, as going down this road can carry on to the nth degree, and you end up doing million-star ratings just to ensure the highest degree of accuracy - when actually reading the review is what the READER should be doing to find out what they need to know.
Whinge over.
EDIT: ******, another post that makes War & Peace look like a Mills & Boon.