RESOLUTION,
DETAIL,
+
SHARPNESS


The resolution race is real. Whether we’re talking TVs, phones, or cameras, we can’t escape it.

But what does all this talk about resolution even really mean?

Does it matter?

Are there benefits to this race of pixels?

The enthralling answers to the above questions are, respectively: It depends. Sometimes. And sure, kind of.

Let’s dig in.

WHAT IS RESOLUTION?

Most of the time when people refer to resolution, what they mean is the number of pixels across the horizontal plane of an image by the number of pixels across the vertical plane of the image. In short, an image’s width by its height, in pixels.

If each of these colorful squares were a pixel, this image would have a resolution of 3x2, with a total resolution of 6 pixels.

Sometimes these amounts are directly stated, such as 1920x1080 video, which has 1,920 pixels in each (horizontal) row and 1,080 pixels in each (vertical) column. Note that resolution should be written width x height.

MEGAPIXELS

Other times, people might refer to the total number of pixels, such as an 8 megapixel (MP) image. A megapixel is just one million pixels in total.

That seems like a lot - and it is - but it’s pretty common to come across images that are a few thousand pixels wide and a few thousand tall, totaling several million pixels, or megapixels.
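
The arithmetic is simple enough to sketch in a few lines of Python (the `megapixels` helper is my own, not a standard API):

```python
def megapixels(width: int, height: int) -> float:
    """Total pixel count of an image, expressed in megapixels (millions of pixels)."""
    return width * height / 1_000_000

# A 1920x1080 (FHD) frame is about 2.1 MP.
fhd = megapixels(1920, 1080)    # 2.0736

# A 6000x4000 photo -- typical output of a '24 MP' camera -- is exactly 24 MP.
photo = megapixels(6000, 4000)  # 24.0
```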

Speaking of cameras, they are often referred to and measured by the number of megapixels they can produce in an image. A common error is for people to say a camera has a ‘24 megapixel sensor’, implying that the sensor itself is covered in pixels. That’s not far from the truth, but isn’t exactly representative of what’s going on.

Basically, a camera sensor is covered in millions of photosites. The number of these photosites then directly affects the number of megapixels that the camera can produce in an image. There are usually more photosites than resulting pixels, however. The extra photosites might help increase the detail in the resulting image, correct lens distortion, etc.

The information collected by the photosites is extrapolated and processed by the camera’s processing engine, which uses the information to create the resulting image of however many megapixels.

PIXEL DENSITY

A less frequent use of the term resolution is in regards to pixel density. Programs that are used in the context of print media, such as Photoshop or InDesign, sometimes deal with canvases and art boards that aren’t based exclusively on pixel size, but might be set up with physical dimensions - inches, centimeters, millimeters, etc - instead. When basing your canvas on a physical size, you need to define how many pixels there are in each inch or whatever. This might be referred to as pixels-per-inch (ppi) or dots-per-inch (dpi), which are practically the same.

The most common pixel densities are 72ppi for media displayed on screens and 300dpi for print media (the latter, though, is a bit arbitrary).

Snobby photographers often act like photographic prints need to be 300ppi or even 600ppi, but that only results in relatively small prints that you need to be very close to appreciate. There’s nothing wrong with that, but practically speaking, you can make very nice wall-hanging prints with a much, much lower pixel density. Honestly, 72ppi is perfectly fine for a large print, considering you expect the viewer to stand farther back when viewing it.
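
The relationship is just division: physical size equals pixel dimensions divided by pixel density. A quick sketch (the helper name is my own):

```python
def print_size_inches(width_px: int, height_px: int, ppi: float) -> tuple:
    """Physical print size, in inches, for a given pixel density."""
    return width_px / ppi, height_px / ppi

# A 6000x4000 image printed at 300ppi: a modest 20 x 13.3 inch print.
print(print_size_inches(6000, 4000, 300))

# The same file at 72ppi: a wall-sized 83.3 x 55.6 inch print,
# meant to be viewed from farther back.
print(print_size_inches(6000, 4000, 72))
```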

WHAT ARE SOME STANDARDS OF RESOLUTION?

It’s hard to define too many standards for resolution in today’s ever-changing digital landscape, but there are some, especially as it relates to film and video.

Here are some video standards, current and upcoming:

  • 1280x720 - HD. The original ‘High-Definition’. You’ll still find this on most basic-cable networks, as well as on social media. Specifically, native Facebook and Twitter videos still use this format for various reasons.

  • 1920x1080 - FHD (Full HD). Most basic streaming services offer this format as their standard viewing resolution. This is also the resolution of the standard Blu-Ray.

  • 2048x1080 - DCI 2K. This is the resolution of a standard digital cinema screening.

  • 3840x2160 - UHD (Ultra High-Definition), or QFHD (Quad Full HD, but no one calls it that). If you pony up a little extra dough per month, you can watch some Netflix shows and movies in this format. They’re increasingly available, but not everything will be in UHD, of course. You’ll find a lot of UHD content on YouTube, as well as Vimeo. The best UHD experience, however, will be had with 4K UHD Blu-Rays. You’ll need a 4K TV, monitor, or projector to really benefit from this format; luckily, they’ve come way down in cost the last few years.

  • 4096x2160 - DCI 4K. This is the resolution of an IMAX digital movie screening, although the industry is likely on the cusp of upping the IMAX standard to 8K.

  • 7680x4320 - 8K UHD (or whatever they end up calling it). TV manufacturers decided to skip over 6K entirely, and just quadruple up the resolution again. You can only watch 8K content on YouTube, shot on the few cameras that can actually create 8K footage (even fewer when you only include the ‘real’ cameras that can capture in 8K; sorry smartphones). 8K TVs have been available for about a year-and-a-half now, but they’re still extremely expensive. They’re also almost exclusively very large TVs, because the manufacturers know that the TVs need to be bigger for the viewer to really benefit at all from the increased resolution over 4K.

MORE RESOLUTION MEANS MORE DETAIL, RIGHT?

No.

Resolution is just a measurement of the number of pixels in an image or video. It is in no way equivalent to the detail of an image, or to a camera’s ability to actually capture detail.

For clarification, detail is visual information in an image. Detail might include fibers, twigs, grains of sand, strands of hair, etc. Anything small and minute. To actually capture these details, you need a good lens, camera, and photographer. Or an artist - digital, painter, or otherwise - who is willing to put in the time to create those small details.

While Fujifilm’s sensors aren’t especially high resolution, their X-Trans color filter array allows them to forego anti-aliasing filters, resulting in notably more detail than similar-megapixel sensors that pair the more conventional Bayer color filter array with an anti-aliasing filter. I took this photo at Hurricane Ridge in Olympic National Park, Washington.

Phones have been driving the resolution race for years. Before 4K was widely available as a video format in real cameras, it was available in Samsung Galaxies and iPhones. But it looked terrible! Nothing but over-sharpened edges. No detail.

The same thing is happening all over again with 8K. Some phones are already marketing their abilities to capture video in the next step of the resolution race. The only difference is that phone sensors have improved quite a bit to actually be able to render a decent amount of detail (albeit with the help of AI; more on that below). It’s certainly nowhere near 8K worth of detail, but at least phone footage actually looks okay nowadays (as long as the sun’s out). So, when the next iPhone is announced, touting 8K video, just know it’s (mostly) bullshit.

But how can it be 8K if it’s not 8K?

Resolution formats are basically containers for information. You could transcode any image or video of any resolution, with any level of detail or quality, to 8K. All that means is that the file that’s put out is technically 7680x4320 in pixel count. That doesn’t mean that each of those pixels has unique, quality information.

In fact, most phones aren’t actually capable of capturing much detail, or information, at all. But the resulting images look good, right? Sure. That’s mostly because of what’s now known as computational photography.

Basically, computational photography is the incorporation of AI (artificial intelligence) engines to supplement what the camera actually captures. The sensor and lens capture as much detail as they’re capable of, then the AI engine looks at the information, and creates new information based on its best guess as to what it’s looking at. And it’s shockingly good at it!

While AI may still be quite stupid, evil, and/or socially oblivious in most cases, it’s quite good at recognizing objects in images, and supplementing those images with relevant detail. (If you’re reading this, HAL, I’m only trying to be funny; please let me live.)

ALRIGHT, BUT WHAT THE HELL IS SHARPNESS?

Sharpness is just contrast added to the edges of a subject in an attempt to make them ‘pop’ out of the image more, and create the illusion of detail.

Sharpness and detail are not the same thing. You can have either, both, or neither in an image.

Where it gets confusing is when people refer to the ‘sharpness’ of a photographic lens. What they really mean is a lens’s ability to capture detail. Sometimes, digital imaging semantics can be… frustrating.

CAN I HAVE TOO MUCH SHARPNESS?

YEP.

One of the worst aspects of (relatively) affordable, modern, high-resolution, large format cameras is the number of nauseatingly over-sharpened images on the internet. These images have insane contrast and haloing over the entire image.

I intentionally over-sharpened an image of mine. Gross. I took this photo in Olympic National Park, Washington. It normally looks better than this, I promise.

Try not to over-sharpen your images, please.

WHAT ARE THE BENEFITS OF HIGH RESOLUTION?

As it relates to cameras, there are several advantages of a high resolution sensor.

Oversampling is the biggest advantage of having a high resolution sensor. Oversampling, in a nutshell, is when the recording device - photographic, audio, or otherwise - takes more samples of a ‘source’ than the output format strictly requires, in order to reproduce that source more accurately at the target resolution (whatever our resolution goal is).

So, if our source is a magnificent and detailed landscape, and we’d like to have it as a 4K image for a desktop background or whatever, then we’re better off capturing that landscape with a camera that’s capable of a ‘sampling rate’ (photosites and megapixels) significantly higher than our desired finished product. The more you can oversample, the more accurate the product is in reproducing the ‘source’ (the actual landscape).
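
As a toy illustration of oversampling, here’s a sketch that averages 2x2 blocks of ‘captured’ samples down into single output pixels - four samples per finished pixel (pure Python, with made-up brightness values; real resizing algorithms are fancier):

```python
def downsample_2x(img):
    """Average non-overlapping 2x2 blocks of samples into single output pixels.
    A crude model of oversampling: four captured samples -> one finished pixel."""
    h, w = len(img), len(img[0])  # assumes even dimensions
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block_sum = img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]
            row.append(block_sum / 4)
        out.append(row)
    return out

# A 4x4 capture becomes a 2x2 result; each output pixel is backed by 4 samples.
capture = [[10, 20, 30, 40],
           [10, 20, 30, 40],
           [50, 60, 70, 80],
           [50, 60, 70, 80]]
result = downsample_2x(capture)  # [[15.0, 35.0], [55.0, 75.0]]
```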

Another advantage is that more resolution makes sensor noise smaller, and more likely to be dithered or compressed out in the final image.

While spatial representation (resolution) isn’t its best-known application, the Nyquist Rate applies to imaging as it relates to oversampling. The Nyquist Rate states that you need to sample a signal at at least twice (2x) its highest frequency to reproduce it accurately.

So, according to the Nyquist Rate, if we want a truly accurate 4K image, we need to capture the landscape in *at least* 8K. While technically true, this is arguably a bit excessive when it comes to photographic reproduction, and the Nyquist Rate is more relevant and applicable to audio sampling.
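
In code, the 2x rule is trivial arithmetic, but it’s worth noting that doubling each dimension quadruples the total pixel count (the helper below is my own, illustrative naming):

```python
def nyquist_capture_resolution(target_w: int, target_h: int) -> tuple:
    """Minimum capture resolution for a 2x (Nyquist-style) oversampling goal."""
    return target_w * 2, target_h * 2

# Delivering 'accurate' 4K UHD (3840x2160) would call for an 8K capture.
w, h = nyquist_capture_resolution(3840, 2160)
print(w, h)  # 7680 4320

# Doubling each dimension quadruples the total pixels: ~8.3 MP -> ~33.2 MP.
print((w * h) // (3840 * 2160))  # 4
```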

High resolution images also benefit from being more ‘future-proof’ than lower resolution images. Chances are pretty good that human civilization will carry on for at least another decade or two, in which screens will continue to become higher resolution. Meaning high resolution images of today will still look good on high resolution displays of tomorrow, whereas lower resolution digital images - whether created with ‘low’ megapixel cameras or embarrassingly archived on social media - will look (and feel) awful by comparison.

SO WHAT ARE THE DRAWBACKS OF HIGH RESOLUTION?

Glad you asked.

When it comes to photos, the only real drawbacks are in the amount of storage space it takes to archive hundreds and thousands of high resolution photos. Twice the total pixel count means roughly twice the file size, and so on.

A legacy issue with high resolution sensors was that it also meant smaller photosites, which meant more noise in the final photo. Advancements in sensor technology have all but nullified this rule-of-thumb, however. While a high resolution sensor will still have more noise at the individual pixel level than a lower resolution sensor, modern high resolution sensors typically have better signal-to-noise ratio than lower resolution sensors once you downsample the high resolution image to match the lower resolution one. That’s a mouthful.

Here’s an example:

It’s common for 35mm camera manufacturers to have a ‘standard’ resolution sensor (typically 24MP) and a high resolution sensor (in excess of 40MP). At the same ISO settings, the 24MP camera will create images with a little less noise at the pixel level - meaning when you view the image at 100% scale and are theoretically viewing the image as close as you’re meant to. However, if you take the image from the 40+MP camera and downsample it to 24MP - meaning you’re resizing the image to match the resolution of the 24MP image - the extra information of the higher resolution sensor will create a better 24MP result in all but the most extreme cases.
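
A quick simulation shows why downsampling helps: averaging n noisy samples cuts random noise by roughly the square root of n. This sketch models a pixel as a true value plus Gaussian noise (the numbers are illustrative, not real sensor data):

```python
import random

random.seed(0)

TRUE_VALUE = 100.0   # the 'real' brightness of the scene at this pixel
NOISE_SIGMA = 10.0   # per-photosite read noise (made-up magnitude)

def noisy_reading():
    """One noisy photosite reading: truth plus Gaussian noise."""
    return TRUE_VALUE + random.gauss(0, NOISE_SIGMA)

# A 'low resolution' pixel: one reading. A downsampled 'high resolution'
# pixel: the average of 4 readings (as if a ~4x-the-pixels sensor were
# resized down to match).
single_errors = [abs(noisy_reading() - TRUE_VALUE) for _ in range(10_000)]
averaged_errors = [
    abs(sum(noisy_reading() for _ in range(4)) / 4 - TRUE_VALUE)
    for _ in range(10_000)
]

mean_single = sum(single_errors) / len(single_errors)
mean_avg = sum(averaged_errors) / len(averaged_errors)

# Averaging 4 samples cuts the typical error roughly in half (1/sqrt(4)).
print(mean_single, mean_avg)
```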

I know, I know. I’m supposed to be talking about the drawbacks to high resolution.

The worst drawbacks of high resolution are actually related to the video capabilities of a high resolution camera. Generally speaking, video is quite demanding from the camera sensor and processor. It has to try to read as many photosites on the sensor as possible, and do that 24, 30, 60, or even 120 or more times per second.

The problem with high resolution sensors is that there are just too many photosites. The processor literally can’t read them fast enough, which results in a slower sensor readout speed than lower resolution sensors. Sensor readout speed is the time it takes the processor to read all of the necessary photosites on the sensor.

Slow sensor readouts create an effect known as rolling shutter. This is where different parts of the sensor are read by the processor at measurably different times. Enough time passes, in fact, that the subject may have moved through a significant portion of the frame in the time it takes the sensor to read from the top to the bottom. This results in a jello effect, where the subject becomes warped or distorted in the frame.

This used to be a problem with all CMOS sensors, due to the processor reading the sensor in a linear, row-by-row manner, but it’s now mostly a problem with larger, especially high resolution sensors. To be fair, though, sensor readout speed is affected more by physical sensor size than by pixel count and density.
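
The skew is easy to estimate: rows times per-row readout time gives the total readout duration, and anything moving during that window gets smeared. A sketch with made-up (but plausible-order-of-magnitude) numbers:

```python
def rolling_shutter_skew_px(rows: int, row_readout_us: float,
                            subject_speed_px_per_s: float) -> float:
    """How far (in pixels) a subject drifts between the first and last
    row being read -- the source of the 'jello' effect.
    All numbers here are illustrative; real readout times vary by camera."""
    readout_s = rows * row_readout_us / 1_000_000
    return subject_speed_px_per_s * readout_s

# A 4320-row (8K) sensor at 5 microseconds per row takes ~21.6 ms to read
# top to bottom. A subject crossing the frame at 2000 px/s drifts ~43 px
# during that single readout -- visible warping.
skew = rolling_shutter_skew_px(4320, 5, 2000)
print(round(skew, 1))  # 43.2
```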

One way a high resolution camera might try to reduce rolling shutter, and the overall processing load of reading all of the photosites, is through processes known as pixel binning and/or line skipping. Basically, the processor ignores a large chunk of the sensor’s photosites; maybe even entire rows of them! This also creates a number of issues.
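
Before getting to those issues, a minimal line-skipping pass can be sketched like this - unlike the averaging in oversampling, the discarded samples are simply thrown away (pure Python, toy values):

```python
def line_skip(img, step: int = 2):
    """Keep every `step`-th row and column; discard the rest.
    A crude model of line skipping, which invites aliasing because
    the skipped samples never contribute to the output."""
    return [row[::step] for row in img[::step]]

frame = [[ 1,  2,  3,  4],
         [ 5,  6,  7,  8],
         [ 9, 10, 11, 12],
         [13, 14, 15, 16]]

# Half the rows and half the columns are simply gone.
print(line_skip(frame))  # [[1, 3], [9, 11]]
```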

First, it resurrects the old issue of noise, since the sensor isn’t sampling all of the pixels to average out the information. The higher noise at the pixel-level becomes more evident in the larger, final image.

Even more troublesome, pixel binning causes aliasing and moire. Aliasing is when a recording device insufficiently samples a source, and the resulting reproduction is inaccurate or has defects.

In photo and video, this usually takes the form of lines that appear to jump back and forth in the image, especially when filming subjects with long, almost-straight lines, such as buildings and other architectural features. This phenomenon is commonly referred to as simply ‘aliasing’ as well. Confusing, I know.

A specific type of aliasing that’s a common problem is what’s known as moire. Moire occurs when a very fine, repeating pattern in the scene interferes with the sensor’s own sampling grid. This might be some sort of detail in the distance, but it’s commonly seen when shooting fine fabrics and textures, such as a subject’s shirt. Moire often appears as wavy or concentric patterns laid over the fine details.
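
Aliasing is easiest to see in one dimension. This sketch samples a sine wave below the Nyquist Rate: the fine pattern folds back into a coarse, false one, which is the same mechanism that produces moire in two dimensions:

```python
import math

def sample_sine(freq_hz: float, sample_rate_hz: float, n: int = 8):
    """Sample a sine wave n times. Frequencies above sample_rate/2
    (the Nyquist limit) fold back as lower, false frequencies."""
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
            for i in range(n)]

# A 9 Hz pattern sampled at only 8 Hz produces (to floating-point noise)
# the exact same samples as a 1 Hz pattern. The fine detail is gone,
# replaced by a coarse, false pattern that was never in the source.
aliased = sample_sine(9, 8)
looks_like = sample_sine(1, 8)
```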

An example of moire. This is actually the cover to an EP by the artist Moire.

ARE THERE BENEFITS TO LOW RESOLUTION?

Some, sure.

The benefits to low resolution sensors are pretty much the inverse of high resolution ones.

First and foremost, less resolution means smaller file sizes. You can store more images in the same amount of space.

Images will have less sensor noise at the pixel level in their native resolutions. However, as we’ve already discussed, this really has little practical benefit, if any, when it comes to still images once the higher resolution sensor is downsampled.

Most of the benefits of a low resolution sensor are related to video. The photosites are larger, so sensor noise is lower. Fewer photosites means less work for the processor. Sensor readout speeds are generally better (faster) than higher resolution sensors. There’s also little or no need for pixel binning and/or line skipping.

WHAT DRAWBACKS ARE THERE TO LOW RESOLUTION?

Aside from less detail and lower accuracy compared to the higher sampling rate of high resolution sensors, there are still some other drawbacks.

If the sensor resolution is too close to the final output resolution - for example, a 4K sensor creating a 4K image - you will also run into issues with aliasing. The Nyquist Rate returns, but this time we’re not sampling enough to meet it, so our image is an inaccurate reproduction of the source. So, once again, we get visual errors and artifacts like aliasing, moire, etc.

HOW MUCH RESOLUTION IS ENOUGH RESOLUTION?

Well, it depends.

If you’re only concerned about curating your Instagram - which still only supports a miserably low native width of 1080 pixels - then pretty much any modern camera will provide you with *enough* resolution to look good. Even phones. At that point, the question becomes which format of camera will deliver the look you desire as it relates to depth-of-field, detail, dynamic range, color tonality, etc.

Likewise, delivering images for social media, in general, is a pretty low bar when it comes to resolution requirements. Most other social sites like Twitter and Facebook don’t support high resolution images, but they’ll still look good on your little pocket screen.

For that matter, most phones still have relatively low resolution screens, and pretty much all modern cameras can create images that sufficiently out-resolve a phone screen. So, if you just want neat-looking wallpapers, you don’t need to spring for that high resolution camera, either.

Additionally, many sites have very specific dimension limits and requirements that limit how high of resolution images and video you can upload and use. These requirements are usually pretty reasonable for the contexts in which you’ll see the images, though.

Other than that, *enough* will be greatly determined by your use-case for photo and video. If you want to look at images on a large 8K monitor, closely, then you should probably get the highest resolution camera that fits your budget or workflow.

Relatedly, your photos and videos will always be limited, or bottlenecked, by the displays they’re viewed on. 4K video won’t look better than anything else on a 720p projector. Likewise, viewing 4K content on a 1080p display doesn’t really offer much benefit.

On the other hand, modern devices and displays are almost all outfitted with smart ‘upscaling’ algorithms that can rather convincingly make 1080p video look like 4K and so on. Different devices use different algorithms to go about this, but generally speaking they simply create new pixels in between the existing ones. It’s not quite the same as viewing a 4K image that was created via oversampling from a high resolution camera, but all things considered, it’s pretty good.
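
A minimal sketch of that ‘new pixels in between’ idea, using plain linear interpolation on one row of pixels (real devices use far fancier, often AI-driven algorithms; this is just the core concept):

```python
def upscale_2x_linear(row):
    """Roughly double a row of pixels by inserting a linearly interpolated
    value between each pair of neighbors -- invented pixels, not captured ones."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) / 2)  # the new pixel: midpoint of its neighbors
    out.append(row[-1])
    return out

# Four real pixels become seven; the three new ones are guesses,
# not detail the camera ever captured.
print(upscale_2x_linear([0, 10, 20, 30]))  # [0, 5.0, 10, 15.0, 20, 25.0, 30]
```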

Another factor that is often overlooked is how closely you intend to view your images. You don’t have to be very far from a 4K TV or monitor before it becomes indistinguishable from a 1080p one. This depends on the display’s size, of course, but it holds true for most normally sized displays at typical viewing distances.

Resolution-benefit chart by Carlton Bale, a home theater enthusiast. (https://carltonbale.com/home-theater-seatings-distances-field-of-view-vs-resolution/)

More accurately, the most important consideration with resolution is knowing how much of your field-of-view the display will take up in your vision. Display size and viewing distance really only matter so much as they translate to field-of-view. If you intend to view photos and videos in such a way that they take up a significant portion of your field-of-view, then you would probably benefit from high resolution images.
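
You can put rough numbers on this with a little trigonometry: what matters is pixels per degree of field-of-view, and roughly 60 px/degree (one pixel per arcminute) is a common estimate of normal visual acuity. A sketch (the helper and the specific figures are my own assumptions):

```python
import math

def pixels_per_degree(h_pixels: int, screen_width_in: float,
                      distance_in: float) -> float:
    """Horizontal pixels per degree of the viewer's field-of-view.
    Around 60 px/degree is a commonly cited limit of normal visual acuity."""
    fov_deg = 2 * math.degrees(math.atan(screen_width_in / (2 * distance_in)))
    return h_pixels / fov_deg

# A 65" 16:9 TV is about 56.7" wide. From 10 feet (120"):
ppd_1080 = pixels_per_degree(1920, 56.7, 120)
ppd_4k = pixels_per_degree(3840, 56.7, 120)

# At this distance, even 1080p already exceeds ~60 px/degree on this screen,
# so 4K's extra pixels are largely invisible unless you sit closer.
print(round(ppd_1080), round(ppd_4k))
```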

All that said, resolution is often overrated when it comes to the final delivery of content. Human eyes differ significantly in their ability to resolve detail, but many people are physically incapable of seeing the difference between 1080p and 4K except at unreasonably close distances. This inability to see resolution increases will only get worse as pixels become more numerous and smaller.

WHAT’S THE FUTURE FOR HIGH RESOLUTION, THEN?

As companies continue to pressure us to buy higher resolution displays and cameras with higher resolution sensors, I think (hope?) they’ll eventually run into the issue where people realize they can no longer see any difference whatsoever as the resolution increases.

Once this issue starts to affect sales, I think we’ll see a push for more interactive and/or engaging types of media, where the viewer is encouraged to zoom into images and videos. I think this could potentially open a lot of doors, creatively and narratively, but may or may not be limited by whether a viewer is willing to actively engage with their media, or whether they’d prefer to just sit in front of their giant 32K display, covered in pixels they simply can’t see. Only time will tell.