DPReview - Rating System

20 November 2017 by Marcell Szeles.

How we rate and score cameras

In 2010 we undertook a major overhaul of the way we rate and score cameras for the final conclusion of our in-depth reviews. This page explains how we produce those scores, how to interpret them and how they compare with the old system (which had run essentially unchanged since the late 1990s).

Camera ratings: at a glance

What the scores mean

The category scores (Rating bars) are designed to give you an idea of where a camera's strengths and weaknesses lie - few cameras are bad at everything (or excel at everything), so there's a tendency for the final scores to 'even out'. Although no replacement for actually reading the review, the results box is designed to give you an 'at a glance' view of the camera based on our findings, and how it compares to its competitors. Very short bars mean below the category average; very long bars mean above average. The scores behind these ratings are then combined to produce the single '%' score at the end.

Although, taken on its own, this overall rating figure has little real 'meaning', it does give you an idea of how the camera ranks - when taken as a whole - compared with the other cameras in the same category.

The overall scores sit in the following very rough bands:

  • 0-40% Totally unacceptable. Run away
  • 41-50% Poor to Below average, avoid
  • 51-60% At best average, treat with caution
  • 61-70% Average to Good
  • 71-80% Very Good to Excellent
  • Anything over 80%: Outstanding
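Expressed as a simple lookup, these bands might look something like the sketch below (purely illustrative; how the exact band boundaries are handled here is an assumption):

```python
def score_band(percent: float) -> str:
    """Map an overall '%' score to the rough bands listed above.

    Illustrative only - the labels follow the list above, and the handling
    of scores that land exactly on a boundary is an assumption.
    """
    if percent <= 40:
        return "Totally unacceptable"
    if percent <= 50:
        return "Poor to below average"
    if percent <= 60:
        return "At best average"
    if percent <= 70:
        return "Average to good"
    if percent <= 80:
        return "Very good to excellent"
    return "Outstanding"

print(score_band(73))  # Very good to excellent
```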

This is all you really need to know:

  • All scores are relative to the other cameras in the same category
  • For compacts the scores are also broadly comparable across categories
  • The lengths of the bars represent the weighted average of a range of measurements and scores
  • The final score is calculated using a weighted average of the main category scores
  • A camera's score represents 'a moment in time' - the date the review is published
  • Since the scores, weights and ratings contain elements of opinion (these are, after all, reviews) you should take the time to read the entire review if you feel our opinions and priorities may not match yours.
  • The ratings are produced after extensive consultation between all the editors and reviewers and after careful tabulation of all available information.

A word on weighting

You can read more about it further down the page if you really want, but if all you need is the basics, read on. The final score is, as mentioned above, produced by weighting the scores of the various categories and then taking an average. The weighting reflects the 'dpreview' opinion of what matters most (based, it should be said, on a lot of experience).

Broadly speaking our main priority is image quality, so the categories that contribute to it get the highest weight. In future we will allow you to apply your own weights if you have more specific needs.

Want to know roughly how the final score is calculated? The final '%' figure is a weighted average of the main category scores (you can see more detail on how these are produced later on this page).
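As a minimal sketch of that calculation - with made-up category names, scores and weights, since the actual breakdown isn't reproduced here - the final figure is simply a weighted mean of the category scores, scaled to a percentage:

```python
# Hypothetical category scores (out of 10) and weights.
# The names, values and weights below are illustrative assumptions,
# not DPReview's actual figures.
category_scores = {
    "image_quality": 8.0,
    "features": 7.0,
    "ergonomics_and_handling": 7.5,
    "performance_speed": 6.5,
}

weights = {
    "image_quality": 0.4,   # image quality gets the highest weight
    "features": 0.2,
    "ergonomics_and_handling": 0.2,
    "performance_speed": 0.2,
}

# Weighted average of the category scores, scaled to a percentage.
weighted_sum = sum(category_scores[c] * weights[c] for c in category_scores)
final_percent = 100 * weighted_sum / (10 * sum(weights.values()))

print(f"{final_percent:.0f}%")  # 74% with the numbers above
```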

Understanding the scores and ratings

Category

Every camera is placed into a broad category based on its market position, feature set and price point. The scores and ratings we assign are based on the camera's performance relative to the other cameras in that category (though there is a natural tendency for average scores to rise slightly as you move up the category range - see below). If a camera is very near the boundary of the next category (in either direction) we also consider the nearest competitors in the next category if it makes sense to do so.

SLR / interchangeable lens camera categories:

  • Entry-level
  • Mid Range
  • High End Enthusiast / Semi-Pro
  • Professional

Compact (fixed lens) cameras are also categorized, though one of the categories (the first) covers a far, far wider range than any other. Scores for compacts are essentially comparable across categories simply because that's the nature of the compact camera market (where features and styling are the main differentiators), but when we're scoring we're only taking into account sensible competitors (including those in other categories where applicable).

Fixed lens camera categories:

  • Compact camera - this covers most point-and-shoot models on the market today
  • Compact 'superzoom' - small cameras with big zooms (over 8x)
  • Superzoom / Bridge - SLR-styled cameras with large zooms and electronic viewfinders
  • Enthusiast compacts - High-end models with extensive photographic control and (usually) raw mode

How we score - in detail

The basic scoring process is as follows:

  • Review camera
  • Collate all measurements and sit down with other editors to score 'unmeasurable' aspects
  • Combine all these (with weighting where appropriate) into 27 basic rated attributes
  • Apply weightings and combine these into the 11 (or 12) rating categories published
  • Produce the final % score from the weighted average of the category scores

In future we intend to allow users to override the last two stages, applying their own weightings (to produce bespoke, personalized rankings/ratings).
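Stripped to its essentials, the aggregation in the last three steps might look like the sketch below. The attribute names, groupings and weights are invented for illustration (only a handful of the 27 attributes are shown), and the user override mentioned above would amount to swapping in a different set of weights:

```python
from typing import Dict

def weighted_mean(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted average of the given scores (all assumed to be out of 10)."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Stage 1: rated attributes (an invented subset of the 27), grouped into
# the published rating categories.
attributes = {
    "image_quality": {"resolution": 8.0, "dynamic_range": 7.5, "high_iso_noise": 7.0},
    "features": {"viewfinder": 6.5, "movie_mode": 6.0},
}
attribute_weights = {
    "image_quality": {"resolution": 1.0, "dynamic_range": 1.0, "high_iso_noise": 1.5},
    "features": {"viewfinder": 1.0, "movie_mode": 1.0},
}

# Stage 2: combine attribute scores into category scores.
category_scores = {
    cat: weighted_mean(attrs, attribute_weights[cat])
    for cat, attrs in attributes.items()
}

# Stage 3: a weighted average of the category scores gives the final %.
category_weights = {"image_quality": 0.6, "features": 0.4}  # illustrative only
final_percent = 10 * weighted_mean(category_scores, category_weights)
print(f"Final score: {final_percent:.0f}%")  # roughly 70% with the numbers above
```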

All our cameras are put through a series of standardized tests - most (but not all) of which are reported in the reviews themselves. Virtually all of these tests produce some kind of measured value, and these measured values are used to establish the average performance across an entire category. A very similar process is used to score attributes that are essentially part of the spec (viewfinder size, feature comparisons, etc.).

This allows us to establish a benchmark for producing our scores - cameras with measured results at, or near, the average get a certain score.

The exact score (out of 10) an 'average' result in a category for a particular generation of cameras gets depends on several factors, including the following:

  • The spread of results (how much better the best is than the worst in the entire category)
  • How 'good' the average is (based on our own long-developed criteria of acceptability)
  • How rapid the trend toward improvement is (based on our historical test results)
  • How accurately we consider our controlled tests to reflect real-world experience

This means that the 'average' score (that given to cameras near the middle of the pack) for SLR auto white balance, for example, will be slightly higher than the average score for SLR movie mode - we see little change or variance in AWB performance (and it's normally pretty good overall), but we do see a lot of difference - and rapid change - in movie mode functionality, so average performance is expected to rise and the differences between models to grow.

This system allows us to produce a series of scores that reflect not only how well a particular camera performs against its peers, but also how well the average camera in any particular category does. This aspect of the scoring system means that, although there's no absolute relationship between the scores in different categories, there is a general increase in the average score as you move up the range (presuming, of course, that the cameras do, on average, get better as you move up the categories).
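One loose way to picture this benchmark-relative scoring - an assumed model rather than the actual formula - is a score anchored to a per-category baseline, shifted by how far a measured result sits from the category average relative to the spread of results:

```python
def attribute_score(measured: float, category_mean: float, category_spread: float,
                    baseline: float = 6.0, sensitivity: float = 2.0) -> float:
    """Score a measured result relative to its category (assumed model).

    'baseline' is the score an exactly-average result would receive; per the
    text above it would sit higher for stable, generally good areas (e.g. SLR
    auto white balance) and lower for fast-moving ones (e.g. movie mode).
    'sensitivity' controls how strongly deviation from the category average,
    relative to the spread of results, moves the score.
    """
    if category_spread == 0:
        return baseline
    deviation = (measured - category_mean) / category_spread
    return max(0.0, min(10.0, baseline + sensitivity * deviation))

# An above-average result in a stable category vs. an average result in a
# fast-moving one (all numbers invented for illustration).
print(attribute_score(measured=8.4, category_mean=8.0, category_spread=1.0, baseline=7.0))
print(attribute_score(measured=5.0, category_mean=5.0, category_spread=3.0, baseline=5.5))
```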