Why image quality is not just about megapixels

When it comes to image quality, there is a big misconception that if you want better quality, you need more megapixels. Unfortunately (or fortunately, depending on how you look at it) it isn’t that straightforward.

Over the years, camera manufacturers have dedicated much of their effort to increasing the number of megapixels in their latest models. At the start of the century, both Nikon and Canon were producing DSLRs with around 3MP, a far cry from the 100MP cameras we see today.

This drive has led many to believe that megapixels are all that matter for image quality. But megapixel count alone isn't enough: you also need to look at the size and type of sensor, understand how images are formed and how light is focused onto the sensor, weigh up the impact of lens choice and quality, and consider pixel size to get the full picture.


Camera sensors and image quality

Sensor types: CCD and CMOS sensors

The type and size of the camera sensor has a large impact on image quality. There are two main types of camera sensors: CCD (Charge-Coupled Device) sensors and CMOS (Complementary Metal-Oxide Semiconductor) sensors.

Both of these essentially do the same job: they detect light and convert that information into an image.

While CCD sensors used to be the most commonly used type, technological advancements combined with the lower manufacturing costs of CMOS sensors have changed this. Traditionally, CCD sensors produced higher-quality, lower-noise images than CMOS sensors (when compared at the same low ISO), but used a lot more power. Over the years, however, these differences have become far less marked.

Sensor sizes: Crop-sensor, full-frame & medium-format 

Although the type of sensor has less of an impact on image quality than it used to, sensor size (or format) is something that does have a significant impact.

The three most common camera formats are APS-C (or crop-sensor), full-frame (or 35mm), and medium-format.

Camera sensor size comparison

Crop-sensor cameras are the smallest of the common sensor sizes, measuring approximately 23.5 x 15.6mm. Their smaller size, lighter weight and lower price tag make them most popular with those just starting out in photography, and they are commonly used in both entry-level and mid-level cameras.

These sensors are relatively new in photographic terms: in the days of film, crop-sensor cameras didn't really exist, and the most common film formats were 35mm, medium format and large format. Crop sensors came about largely out of the need for a cheaper alternative to 35mm sensors, and as technology progresses and sensor prices fall, we may even see these smaller sensors disappear.

Full-frame sensors are larger, at about 36mm x 24mm. Full-frame cameras are bigger and heavier than crop-sensor cameras, but they offer better image quality, perform better in low-light conditions and allow greater control over depth of field, at a higher price.

The introduction of mirrorless cameras has also changed the game when it comes to the size difference between crop-sensor and full-frame cameras. Because they have no mirror system, mirrorless 35mm cameras are now essentially the same physical body size as crop-sensor DSLRs. This means you can have higher image quality in a smaller, more lightweight camera body.

Finally, medium-format cameras, with sensors of about 53.4mm x 40mm, offer the greatest image quality of the three, as well as greater resolution. However, this quality does come at a much higher price, which means they are often only used by professional photographers or those looking for the highest quality.
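A common way to compare these formats numerically is the crop factor: the ratio of the full-frame diagonal to another sensor's diagonal. Here is a minimal Python sketch using the sensor dimensions quoted above:

```python
import math

# Diagonal of a sensor in mm, from its width and height.
def diagonal_mm(width_mm: float, height_mm: float) -> float:
    return math.hypot(width_mm, height_mm)

# Crop factor relative to full-frame (36 x 24mm, diagonal ~43.3mm).
FULL_FRAME_DIAGONAL = diagonal_mm(36.0, 24.0)

def crop_factor(width_mm: float, height_mm: float) -> float:
    return FULL_FRAME_DIAGONAL / diagonal_mm(width_mm, height_mm)

for name, (w, h) in {
    "APS-C (23.5 x 15.6mm)": (23.5, 15.6),
    "Full-frame (36 x 24mm)": (36.0, 24.0),
    "Medium format (53.4 x 40mm)": (53.4, 40.0),
}.items():
    print(f"{name}: crop factor {crop_factor(w, h):.2f}")
# APS-C ~1.53, full-frame 1.00, medium format ~0.65
```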

The differences in medium-format sensors

What's important to note about medium format is that not all medium-format sensors are created equal. A number of medium-format cameras on the market don't strictly fit the traditional medium-format dimensions, such as the Leica S-System (45mm x 30mm). Even medium-format cameras from the same manufacturer don't always share the same sensor size: the Hasselblad X1D II's sensor measures 43.8mm x 32.9mm, compared to the 53.4mm x 40mm sensor of the H6D-100c.

Medium format camera comparison

So why does the sensor size have an impact on image quality? To understand this, we need to understand how images are formed.

How digital images are formed

The process of how digital images are formed is covered in our ‘Introduction to Photography’ course, but put simply, an image is formed when light passes through a lens and is recorded by the sensor (which, previously, would have been film).


Sensors are what allow digital cameras to record images. Made up of millions of photosites (which record the information contained in individual pixels), a sensor records an image when the shutter button is pressed. This exposes the photosites to the incoming light, which is recorded as an electrical signal on the sensor. The strength of each of these signals is converted to digital values that produce the image once the exposure has ended.

Each photosite, due to a filter placed over the top, is only able to capture one of three primary colours (red, green or blue). The most common of these filter systems is called a Bayer array. This array, invented in 1974 by Bryce Bayer, consists of alternating rows of red-green and green-blue filters. Bayer's design was based on the science of human visual perception: human eyes are more sensitive to green light than red or blue, so the Bayer array comprises 50% green filters, 25% red filters and 25% blue filters.

Bayer Array Filter
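To make those proportions concrete, here is a minimal Python sketch (using NumPy) that tiles the repeating 2x2 RGGB pattern of a Bayer array across a sensor and confirms the 50/25/25 split:

```python
import numpy as np

# Build a Bayer colour filter array mask for a sensor of a given size.
# The repeating 2x2 tile here is RGGB (red-green / green-blue rows),
# one common arrangement of the Bayer pattern.
def bayer_mask(height: int, width: int) -> np.ndarray:
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    return np.tile(tile, (height // 2, width // 2))

mask = bayer_mask(4, 4)
print(mask)

# Verify Bayer's 50/25/25 split of green, red and blue filters.
values, counts = np.unique(mask, return_counts=True)
for v, c in zip(values, counts):
    print(v, c / mask.size)  # G -> 0.5, R -> 0.25, B -> 0.25
```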

As each photosite can only record one colour, certain colour data is lost when the image is initially captured, so a process called demosaicing is required to convert the array of primary colours into the final photo. This is done using mathematical algorithms, and it is these algorithms that produce the different colour renditions of the various camera brands; they are ultimately why a Hasselblad colour rendition may differ from a Phase One, for example. One of the reasons I shoot Hasselblad is that I've found their Natural Colour rendition processing to be the best I've seen from any digital camera.
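For illustration only, here is a toy demosaic in Python. It collapses each 2x2 RGGB block into one RGB pixel, a far cruder approach than any manufacturer's proprietary algorithm, but it shows the basic idea of reconstructing full colour from single-colour photosites:

```python
import numpy as np

# A deliberately simple demosaic: for each 2x2 RGGB block, take the red
# sample, average the two green samples, and take the blue sample,
# producing one RGB pixel per block (a half-resolution demosaic).
def naive_demosaic(raw: np.ndarray) -> np.ndarray:
    r = raw[0::2, 0::2]                          # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # two green sites per block
    b = raw[1::2, 1::2]                          # blue sites
    return np.stack([r, g, b], axis=-1)

raw = np.random.randint(0, 4096, size=(8, 8)).astype(float)  # 12-bit mosaic
rgb = naive_demosaic(raw)
print(rgb.shape)  # (4, 4, 3)
```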

In addition to Bayer array filter systems, Fuji created a different, more randomised array called the X-Trans sensor. Its pattern repeats over a 6x6 tile rather than 2x2 and is better at reducing the interference patterns associated with moiré, especially on smaller sensors, partly because it doesn't require the low-pass filter that the 2x2 Bayer array needs.

While, theoretically, an X-Trans sensor can record more resolution, it has drawbacks in other areas, such as software support and performance in situations with flare. Interestingly, Fuji seem to consider this sensor design advantageous only on smaller sensors, as they still choose to use conventional Bayer array filters on their medium-format sensors.

It is worth mentioning that Fuji, Nikon, Phase One and Hasselblad all use sensors designed and manufactured by Sony, while Canon still produce their own sensors. But as mentioned earlier, two camera brands can use the same sensor, yet their expertise, processing algorithms and lens design can lead to vastly different looks in the final output images.

Another development worth mentioning in sensor design is the back-illuminated sensor, in which the metal wiring is positioned below the photodiode substrate (essentially the part of the photosite that captures and records the light). This increases the sensor's light-capturing ability, although it required a few other technical problems to be overcome. Back-illuminated sensors are commonly seen in Sony cameras, where the first full-frame version was introduced with the 42MP A7R II. They are branded as Exmor R sensors, which Sony claims are twice as sensitive to light as conventional front-illuminated sensors.

Also common to virtually all sensors of the last ten years are micro-lens arrays: tiny lenses over each photosite that funnel and direct the incoming light onto the photosites more effectively.

Back-side illumination and front-side illumination sensor

This increased light-gathering ability meant that photosites could become smaller, but smaller photosites unfortunately also cause diffraction to appear earlier as the lens aperture is closed down. Images shot at small apertures such as f/16, f/22 and f/32 will show a reduction in sharpness and contrast due to diffraction, which can only be overcome by using larger photosites and better lens design.
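You can get a feel for this with a rough calculation. The diameter of the Airy disk (the diffraction blur spot, explained in more detail later) is approximately 2.44 x wavelength x f-number; the sketch below assumes green light at about 0.55 microns:

```python
# Airy disk diameter ~ 2.44 * wavelength * f-number,
# assuming green light at roughly 0.55 microns.
WAVELENGTH_UM = 0.55

def airy_disk_diameter_um(f_number: float) -> float:
    return 2.44 * WAVELENGTH_UM * f_number

for f_number in (8, 16, 22, 32):
    print(f"f/{f_number}: blur spot ~{airy_disk_diameter_um(f_number):.1f} microns")
# f/8 ~10.7, f/16 ~21.5, f/22 ~29.5, f/32 ~42.9:
# at f/22 the blur spot dwarfs a 4-6 micron photosite.
```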

In my opinion (and if Sony are listening), I'd prefer they ditched the 100MP medium-format sensor and came up with an 80MP 54 x 40mm sensor with a photosite size of around 6 microns. This would offer the best combination of resolution and photosite size for better low-light performance and reduced diffraction (and while they're at it, they could make it a back-illuminated design too).

Unfortunately, though, many manufacturers have gone off on a megapixel quest as a marketing opportunity, rather than the image quality quest it should be.

Image quality, megapixels & resolution

Many photographers fall into the trap of believing more megapixels equal better quality. While more megapixels do equate to higher resolution, they do not necessarily mean better image quality.

A megapixel is simply one million pixels, each of which contains the specific colour information that makes up the image.

Megapixels diagram
Megapixels and image quality

Often used interchangeably with megapixels, the term resolution does not simply refer to the number of megapixels in an image. More accurately, it refers to how clearly the medium can capture and record detail (this is particularly important when it comes to printing images). This can be influenced by factors such as lens and sensor quality, file type, and ISO.

Often as important as the number of pixels is the size of the pixels, measured in microns (µm), which is determined by the size of the sensor and the number of pixels packed into it (you can only fit so many pixels into a given area). Photosite sizes range from as small as 1.1µm in smaller smartphone sensors to 8.4µm in larger formats. Larger photosites can record much better dynamic range, giving smoother tonal transitions, greater tonal accuracy and better colour accuracy.

For example, the image quality of a 50-megapixel camera phone will be far lower than that of a 50-megapixel medium-format camera. This is because the photosites on the smartphone sensor are much smaller than those on the physically larger medium-format sensor, so their ability to capture and record light is reduced. Think of it this way: a larger bucket will catch more rain than a smaller one.
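To put rough numbers on the bucket analogy, here is a quick Python sketch. The sensor widths and pixel counts are illustrative assumptions for a ~50MP smartphone sensor and a 50MP medium-format sensor, not exact specifications for any particular model:

```python
# Pixel pitch (the width of one photosite) from sensor width and pixel count.
def pixel_pitch_um(sensor_width_mm: float, image_width_px: int) -> float:
    return sensor_width_mm / image_width_px * 1000

phone = pixel_pitch_um(9.8, 8160)   # small phone sensor, assumed figures
mf = pixel_pitch_um(43.8, 8272)     # X1D II-sized sensor, assumed pixel count
print(f"phone: {phone:.1f} um, medium format: {mf:.1f} um")  # ~1.2 vs ~5.3

# Light-gathering area scales with the square of the pitch: the bucket analogy.
print(f"area ratio: {(mf / phone) ** 2:.0f}x")  # roughly 19x more light per photosite
```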

Calculating megapixels and pixel size

Megapixels can be calculated by multiplying the width and height of the image in pixels and dividing that number by 1 million. For example, an image with a resolution of 5472 x 3648 would produce a 20MP image:

(5472 x 3648) / 1 000 000 = 20MP

Individual pixel size can be calculated by dividing the width of the sensor in millimetres by the image width in pixels, and multiplying by 1000. For example, a camera with a 5616 x 3744 resolution and a 36 x 24mm full-frame sensor would have 6.4µm pixels.

(36 / 5616) x 1000 = 6.4µm
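Both calculations are easy to wrap up as small helper functions. This Python sketch reproduces the two worked examples above:

```python
# Megapixels from image dimensions in pixels.
def megapixels(width_px: int, height_px: int) -> float:
    return width_px * height_px / 1_000_000

# Pixel size in microns from sensor width and image width.
def pixel_size_um(sensor_width_mm: float, image_width_px: int) -> float:
    return sensor_width_mm / image_width_px * 1000

print(f"{megapixels(5472, 3648):.0f}MP")         # 20MP
print(f"{pixel_size_um(36, 5616):.1f} microns")  # 6.4 microns
```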

Other factors that influence image quality

While sensor type and size, megapixel count and pixel size are all important when it comes to image quality, there are other, often overlooked, factors such as lens choice, file type, and the setting combinations used to capture the image that also have an impact.

Lens choice

Depending on which lens you're using, the quality of an image can vary greatly, even on the same camera. An image shot with an older lens will typically have lower resolution than the same shot taken with a newer lens with a better optical design. Even though the same number of megapixels is recorded (because it is the same camera), the newer lens will likely deliver better contrast, colour accuracy and sharpness, resulting in better resolution.

One of the most important things to consider when it comes to lens choice is the actual quality of the lens, and chromatic aberration is one factor you should pay particular attention to. A common optical problem in lower-quality lenses, chromatic aberration shows up as colour fringing along high-contrast edges. It is caused by dispersion: different wavelengths of light are focused at different positions on the focal plane (or, in simpler terms, the lens fails to focus all colours to the same point).

Chromatic aberration

Regardless of how many megapixels there are in an image, if a lens is of poor quality, a flaw like this will always result in a lower-quality image compared to a lens that does not suffer from chromatic aberration.

A second factor to consider is diffraction, which reduces sharpness in an image. Diffraction occurs when light rays pass through a small opening, such as the lens aperture: the rays spread out, overlap and interfere with each other, so light is added in some places and reduced in others. Its effects become visible sooner when the photosites on a sensor are very small.

When the rays hit the sensor, they create a pattern known as an Airy disk, which essentially describes the best-focused spot of light. If neighbouring spots of light are too close together, it becomes impossible to resolve them individually, causing a loss of sharpness and image quality.

No lens diffraction example
Lens diffraction example
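One common rule of thumb (there are several) is that detail starts to be lost once the Airy disk spans more than about two photosites, because neighbouring spots of light merge. The sketch below applies that rule to an assumed 6-micron photosite:

```python
# Rule-of-thumb check: the sensor is diffraction-limited once the Airy
# disk spans more than about two photosites.
def diffraction_limited(f_number: float, pixel_pitch_um: float,
                        wavelength_um: float = 0.55) -> bool:
    airy_diameter_um = 2.44 * wavelength_um * f_number
    return airy_diameter_um > 2 * pixel_pitch_um

for f_number in (5.6, 11, 22):
    print(f"f/{f_number}: limited at a 6 micron pitch? "
          f"{diffraction_limited(f_number, 6.0)}")
# f/5.6 -> False, f/11 -> True, f/22 -> True
```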

File type - JPEG vs RAW

The file format you choose to shoot in has a big impact on how much information can be stored in an image. Although both file types contain the same number of pixels, RAW files store far more information within those pixels than JPEGs, which compress the data and leave less of it available for processing. You could think of the extra information in a RAW file as hidden data that can be drawn out to produce better-quality images.
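The difference is easy to quantify in tonal levels per channel, assuming the 8-bit depth of JPEG against the 12-bit and 14-bit depths typical of RAW files:

```python
# Tonal levels per channel at typical bit depths.
for name, bits in [("JPEG", 8), ("12-bit RAW", 12), ("14-bit RAW", 14)]:
    print(f"{name}: {2 ** bits:,} levels per channel")
# JPEG: 256 levels; 12-bit RAW: 4,096; 14-bit RAW: 16,384 -
# the hidden data that gives RAW files their editing headroom.
```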

Photographic knowledge

Another factor that influences image quality is knowledge. Only if you understand how cameras work; how to correctly expose an image; where to focus; what lenses, equipment and setting combinations to use; and how to correctly light images to compensate for dynamic range will you be able to create the highest quality images.

Dynamic range refers to the number of stops between the blackest blacks and the whitest whites an image can contain. Each camera (and indeed each recording medium) has its own dynamic range, and the ultimate goal is to capture the maximum range of tones between those black and white values.

Dynamic range example
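One simplified way to model this in numbers: dynamic range in stops is the base-2 logarithm of the ratio between the largest signal a photosite can record (its full-well capacity) and the noise floor. The electron counts below are illustrative assumptions, not measurements from any particular camera:

```python
import math

# Simplified model: stops of dynamic range from the ratio of the
# full-well capacity to the noise floor (both in electrons).
def dynamic_range_stops(full_well_electrons: float,
                        read_noise_electrons: float) -> float:
    return math.log2(full_well_electrons / read_noise_electrons)

print(f"{dynamic_range_stops(50_000, 3):.1f} stops")  # ~14 stops
```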

Your camera settings also play a role in image quality, particularly the aperture. Using small apertures like f/22 will exacerbate the diffraction problem mentioned earlier, causing small details in the image to be lost and producing a softer-looking image.

Understanding the bigger picture

As you can see from the points discussed above, image quality does not just depend on the number of megapixels in an image. Even though this perception may be perpetuated by camera manufacturers, please don’t be fooled. Although megapixels play a role in overall image quality, they are not the be-all-and-end-all. Instead, try to think of image quality as circular, with each of the points discussed working together to give the final result.

My camera, sensor, and lens choice always take all of these things into consideration so that I can produce the highest image quality possible. So next time you’re trying to determine the expected image quality of a given camera, remember to keep in mind each of the points discussed here.


