1 reply [Last post]
Joined: Jan 8 2001

How do magazines come up with their ratings for software or hardware? There is a comment in another thread of this forum complaining about software being given a good review despite having a very basic failing; in the view of the writer, at least.
I have recently read a review of VideoWave 4 in another magazine which gives it 8 or 9 out of 10 in all areas. This for a piece of software that fails miserably to keep video and audio in sync in sections longer than about 6 minutes. Surely being easy to use and cheap should not lift the overall rating if the item reviewed can't do the basic job.
Can I also suggest another area to be included in all reviews: after-sales tech support? The time taken to respond to telephone and e-mail questions is surely also of great importance, particularly in 'specialist' areas such as video editing.
DV Doctor does a great job but shouldn't have to take over the job from the suppliers.

[This message has been edited by Keitht (edited 12 February 2001).]

Regards Keith

Joined: Mar 7 1999


I can only speak for one magazine, but this is how we do it.

First, we expect the reviewer to SUGGEST the individual point-by-point scores, and then to work out the average to give the overall CV rating as a percentage of the total score - thus 4 lots of 4 out of 5 would give an average of 80%.

Oddly, it is the case that some mags, including those within our company, don't do that - i.e., they could have 4 lots of 4 out of 5 and yet have an overall score that was higher or lower than 80%.

When we edit the review, we make our own judgements and see how they compare with the reviewer's. If there is any significant difference, we'll talk to the reviewer and find out how he or she came to a different score than we did.

*I* then make the final decision as to what the scores and overall rating will be.

We also expect the reviewers to suggest what awards, if any, to give, and the same separate analysis and checking are done - but with me having the final word.

It could very reasonably be argued that what we do is pretty haphazard, and that what is really needed is some form of weighting. But, in my view, how WE score products pretty accurately reflects the value and usefulness that our readers find when they buy, so I'm pretty content with it.
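For anyone curious about the arithmetic, here is a minimal sketch of the two schemes - the straight average we use, and the weighted alternative mentioned above. The category names, scores and weights are invented for illustration only; they are not CV's actual categories:

```python
# Per-category scores out of 5 (illustrative only - not real CV categories).
scores = {"features": 4, "ease of use": 5, "performance": 2, "value": 4}
max_score = 5

# Unweighted scheme: average the scores, express as a percentage.
unweighted = sum(scores.values()) / (len(scores) * max_score) * 100
print(f"Unweighted overall: {unweighted:.0f}%")  # -> 75%

# Weighted alternative: categories that matter more count for more.
# Here a heavy weight on "performance" drags the overall score down
# when a product fails at its basic job. Weights are purely illustrative.
weights = {"features": 1, "ease of use": 1, "performance": 3, "value": 1}

weighted = (
    sum(scores[c] * weights[c] for c in scores)
    / (sum(weights.values()) * max_score)
    * 100
)
print(f"Weighted overall: {weighted:.0f}%")  # -> 63%
```

Note how the same four scores give 75% unweighted but only 63% once performance is weighted triple - which is roughly the point Keith is making about VideoWave 4.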

As for the quality of support - it would take us HOURS (sometimes days!) to check this on a per-product basis, though if we do have cause to use a company's tech support, we report how we found it.

I'd hate to think how much this sort of thing would add to the cost of the review.

When reading a review, all you need to do is turn back to our help & letters and help-on-the-net pages to discover who the good guys and the bad guys are!

And, if you've any doubt whether or not, say, Pinnacle's support is still lousy, you know where to ask the question, surely?

Bob C