Welcome to Part 2 of our two-part series on ISO! In the first article, we debunked a common misconception about ISO and discussed how ISO actually works to create an image. In this article, we look more deeply into the details of digital light sensors in cameras. We describe how the sensors work and, in particular, where the noise in images comes from.
We explain the science behind the fact that increasing the ISO setting on a camera tends to increase the prominence of noise in the resulting images. As in the first article, the discussion will be mostly descriptive, but this time we will discuss more technical aspects of the subject.
The Basics of Digital Sensors
The way that digital sensors convert light into an image differs markedly from the way film does. Digital sensors are made of crystalline silicon, a semiconductor material. Like the photo-chemistry of film, the fine details of these crystals are not important for our discussion, but their general properties are.
To understand how solid-state detectors work, we first have to know the basics of how the atoms are arranged within them. For present purposes, we need only recall the elementary concepts from high school or college chemistry related to atomic structure.
If this is something you did not learn, or if your knowledge is rusty, you can read a little bit about it before diving into the material below. There are lots of online resources that should suffice. Specifically, the structure of the electron cloud in an atom and in crystals is relevant to this discussion.
Energy States, Electrons, and More
Because of the configuration of the electrons in the sensor crystals, the crystals have what are called allowed energy levels that are shared among many atoms. These electron energy levels are arranged in bands with different energies. The lowest energy band forms a “sea” called the ground state of the crystal. Left on their own, the electrons in the crystal congregate in these ground-state energies.
There is another band (or sometimes several bands) of energies, called the excited states, with higher energies than the ground state. Between these bands of energy there is a gap (in energy) in which no allowed states exist.
The excited states are generally empty because energy is needed to promote an electron from the lower energy ground state to the higher energy excited states. The energy configuration is shown schematically in the figure below.
It is possible for electrons to move from the ground state into the excited states, but to do that the electron requires energy, enough to jump the energy gap. One way to provide the necessary energy is to shine light onto the crystal. The crystal can absorb a photon (a particle of light) and use its energy to promote an electron up to the excited states. This assumes the photon has enough energy to push the electron over the energy gap separating the two allowed energy regions.
The Concept of Energy State Change Simplified
If these ideas seem strange, one way to understand the process is to think about a more familiar and completely analogous one. Imagine you want to put a ball into a basket, but the ball is on the ground, and the basket is ten feet above the ground. You might have even tried this yourself once or twice. Or if not, maybe you have seen some extraordinarily tall (and well-compensated) men and women play at doing this on TV from time to time. Anyway, even if you have never attempted this challenging task yourself, you probably realize that the ball will not spontaneously jump up into the basket.
You have to put the ball into the basket, and you have to expend effort (energy) to do it. If you don’t expend enough energy to get the ball to rise up through the ten feet from floor to basket, there is no way it will go in. Electrons in the crystal of a digital detector work the same way.
Of course, in basketball, if we get the ball into the basket it just falls right back out again. The electrons in a sensor don’t do that. It would be difficult to record the image if they did.
How Do Digital Sensors Record an Image?
So why don’t the excited electrons in a sensor fall back down to the ground state like a basketball leaving the net? Well, the material in the detector is prepared in such a way that to get into the excited states the electrons have to go over a little energy hump, and then they fall into the excited states and are trapped there. We then have to give them a little energy to get over that hump and back out of the excited states again.
It’s as if in basketball we used real baskets (which the original game did) but didn’t bother to cut the bottoms out of them. Or in the current incarnation of the game we could tie the net shut at the bottom. The effect would be the same.
We would have to go up and pull the ball out through the top of the hoop, giving it enough energy to clear the top of the rim. If we didn’t, the ball would sit up in the basket (or net) forever. The process of removing the electrons from the excited states, called reading out the sensor, works the same way. We’ll talk more about that process shortly.
We now have enough information to understand how the sensor in a camera works. To recap, light incident on the sensor is absorbed, and its energy causes electrons to move from the low-energy ground states into the higher-energy excited states. The electrons then stay there until we remove them when we read out the sensor, and the image is recorded as a result.
The more light that hits the sensor, the more electrons that will collect in the excited states, up to a point. Eventually, the excited states run out of space for additional electrons, and any subsequent light is not recorded. This is a bit like trying to fill the basket in basketball with more balls than can fit inside the basket. Any additional balls will simply bounce off the full basket and land back on the ground.
With full excited states, any additional promoted electrons just dissipate their energy and fall back into the sea of ground states, so we are not able to record them. This condition is called saturation. You might have seen it in your images. Blown highlights are an example of saturation.
Incidentally, there is another kind of saturation that occurs when the brightness in the sensor exceeds the range of numbers that can be represented by our digitization scheme. That is a purely numerical effect having to do with the way computers (like our cameras!) represent numbers. It is not related to what we have discussed above, though it can have the same effect on the appearance of an image.
If you are shooting RAW, you can often correct numerical saturation with software (by adjusting various exposure/highlight/shadow sliders, curves, etc), but you cannot do anything to correct the charge saturation in a sensor. That information is just lost. We won’t say anything more about numerical saturation.
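To make the two kinds of saturation concrete, here is a toy readout model in Python. The full-well capacity, bit depth, and conversion gain below are made-up values chosen purely for illustration, not the specifications of any real camera:

```python
# Toy readout model (not real camera firmware): the constants below are
# hypothetical values chosen only for demonstration.
FULL_WELL = 60_000     # max electrons a pixel's excited states can hold
BIT_DEPTH = 14         # ADC bit depth; largest digital value is 2**14 - 1 = 16383
GAIN_E_PER_DN = 4.0    # electrons represented by one digital number

def read_pixel(photoelectrons: float) -> int:
    """Simulate reading out one pixel, applying both kinds of saturation."""
    # Charge saturation: the excited states cannot hold more electrons
    # than the full-well capacity; the excess is simply lost.
    held = min(photoelectrons, FULL_WELL)
    # Digitization: convert electrons to a digital number and clip to the
    # range the ADC can represent (numerical saturation).
    dn = held / GAIN_E_PER_DN
    return int(min(dn, 2**BIT_DEPTH - 1))

print(read_pixel(10_000))   # 2500: well below both limits
print(read_pixel(80_000))   # 15000: charge-saturated at 60,000 electrons
```

Note that in this sketch the charge saturation happens first; no amount of post-processing can recover the electrons that bounced off the full well.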
The Sensitivity of Digital Sensors
So what does all this have to do with ISO and the sensitivity of digital sensors? Imagine a simplified example in which ten photons are incident on the sensor. We could say that if, on average, the detector promoted five electrons, it would have an efficiency of 50%. On the other hand, if it promoted 7 electrons we would say its efficiency was 70%. The higher the efficiency, the higher the sensitivity to photons.
Digital photodetectors do much better than these examples: their sensitivity, called the quantum efficiency, is usually well over 90%, and often quite close to 100%. Recent detectors are able to convert nearly all the photons they absorb into electrons held in the excited states, though not necessarily one electron per photon as in our simplified example.
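As a toy model of quantum efficiency, we can treat each incident photon as being independently converted to a stored electron with some fixed probability. The QE values below are illustrative, not those of any particular sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

def detected(photons: int, qe: float) -> int:
    """Each incident photon is independently converted into a stored
    photoelectron with probability qe (the quantum efficiency)."""
    return int(rng.binomial(photons, qe))

# The article's toy numbers: 10 photons at 50% and at 70% efficiency.
print(detected(10, 0.5), detected(10, 0.7))

# A modern, highly efficient sensor converts nearly every photon.
n = detected(100_000, 0.95)
print(n / 100_000)   # fraction detected hovers close to 0.95
```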
The figure above shows a representative quantum efficiency curve for incident photons (red curve) and for absorbed photons (black curve) for photons with wavelengths over the visible and near infrared (NIR) parts of the electromagnetic spectrum. The reflectivity of this detector material is shown by the blue curve.
Sources of Noise in Digital Sensors
To understand the origin of the noise in a digital sensor, we have to recall the nature of digital sensors: they excite electrons into high-energy atomic states of a crystal and hold them there until we read them out. Each sensor is composed of multiple identical (at least in principle) “mini-sensors” called pixels (for picture element) that are arranged in a grid, sort of like a chess board.
Camera sensors have tens of millions of these pixels arranged in an array that is, say, 8256 pixels wide by 5504 pixels high for the 45-megapixel sensor of a Nikon D850. By reading out each of these pixels and keeping track of their relative positions in the array, your camera, and subsequently your computer software, is able to display an image.
The sensor readout is done by electronic circuitry that measures the voltage corresponding to the charge held in the excited states of each individual pixel. For a perfect detector, the voltage would be a perfect representation of the number of electrons held in that pixel, and that number would in turn be a perfect representation of the number of photons incident on the detector at that spot.
This is indeed the case, except that in a real detector there are variations in the number of electrons even if the amount of incident light remains constant. Some of the variations are related to the number of incident photons, but others are completely unrelated to those photons. The variations are called noise, and the noise in the image basically has three sources, described below.
1. Statistical or Random Noise
Whenever we measure a signal by counting objects (like photons or electrons), there is random noise equal to the square root of the number of items counted. So if we measure N photons in our detector one time, we can expect a separate, independent, but identical measurement of the same light source to be close to N, with a typical variation (about 68% of the time) between N − √N and N + √N.
This statistical behavior is very general and will also be the result if, say, we do something as simple as flipping a coin. In that case we expect the number of heads and tails to each be about 50% of the total tosses. If you do this experiment you will find that they will be, typically within a variation equal to the square root of the number of heads and tails you count. (If they are not, there is something funny about the coin you are using.) As you try more and more tosses, this variation becomes proportionately smaller, but it never goes away entirely.
To put this mathematically, the ratio of the variation in counts to the total counts, √N/N = 1/√N, gets smaller as N gets larger. The reciprocal of this quantity, the ratio of signal (the number of counts, N) to the noise (the square root of N), gets larger for larger N: N/√N = √N.
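The √N behavior is easy to check with a short simulation, using NumPy's Poisson generator as a stand-in for photon counting:

```python
import numpy as np

rng = np.random.default_rng(42)

# Repeat the "same" light measurement many times; photon counting follows
# Poisson statistics, so the scatter between repeats is about sqrt(N).
for N in (100, 10_000, 1_000_000):
    counts = rng.poisson(N, size=100_000)
    print(f"N={N}: scatter={counts.std():.1f} (sqrt(N)={N**0.5:.1f}), "
          f"S/N={counts.mean() / counts.std():.1f}")
```

The scatter tracks √N in every case, so the signal-to-noise ratio grows as √N even though the absolute variation keeps growing.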
Thus the signal overcomes the noise as we take more measurements (count more objects). Or put another way, any variations become less pronounced. This important idea is encapsulated in the signal-to-noise ratio, or just signal-to-noise. It completely determines the objective quality of an image. A high-quality image has a lot of signal and not much noise. A poor quality image has a relatively low signal with relatively high noise. Quality here is not meant in a subjective or artistic sense, just in an objective sense as measured by the signal and the noise in an image.
This statistical noise is present in every measurement we make. Of anything. We cannot ever expect to do better than this, so we never get a perfect reading of the image brightness at a given point. In fact, we don’t even manage to do this well. Read on.
2. Thermal Noise
In addition to statistical noise, we also have to contend with sources of noise unrelated to the incoming light. These sources are integral to the camera itself. The first is called thermal noise. It is the result of motion within the crystal of the detector.
At all temperatures above zero kelvin (about minus 273 Celsius) there will be small vibrations of the atomic nuclei comprising the crystal. In fact, the temperature is a measurement of the energy held in these motions. The higher the temperature, the more energetic are the motions of the nuclei and electrons in the crystal.
Some of the vibrating nuclei will, from time to time, interact with an electron in the crystal. The interacting electron can gain energy in the process, perhaps gaining enough energy to be promoted to one of the excited states. The process can happen even in total darkness because it is only the result of the thermal motions within the crystal.
Thermally excited electrons are indistinguishable from electrons excited by photons. When we read out the voltages from the sensor, thermal electrons make it appear that light hit the detector, even if no light at all was present. For this reason, this kind of noise is called dark counts. Thermal motion is always present and is worse at higher temperatures than at lower temperatures.
Fortunately, for most photography the light sources are bright enough that dark counts are not a problem. The photons from the object we are photographing completely swamp the few counts with a thermal origin. It is only in low light conditions (like when we want to boost our camera up to a higher ISO setting) that the dark counts become noticeable. They are the source of some of the noise we see in these low-light, high ISO images.
The problem is especially acute for night-sky images. For this reason, astronomers usually cool their cameras to reduce the dark counts to the point of practical undetectability. Most of us don’t bother with that. The dark counts are always present in our images at some level, but unless we are at extremely low light levels we don’t have to worry about them.
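As an illustration, we can sketch the dark counts as a second source of Poisson counts that accumulates with exposure time regardless of the light. The dark rate below is invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

DARK_RATE = 5.0   # thermal electrons per pixel per second (invented value)

def exposure(photon_electrons: float, seconds: float) -> int:
    """One pixel's total count: photon signal plus thermal dark counts."""
    signal = rng.poisson(photon_electrons)
    dark = rng.poisson(DARK_RATE * seconds)  # accumulates even in darkness
    return int(signal + dark)                # the two are indistinguishable

print(exposure(50_000, 1 / 125))  # daylight: dark counts are negligible
print(exposure(40, 30))           # 30 s night exposure: dark counts swamp the faint signal
```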
3. Read Noise
The third and final source of noise we will consider is related to reading out the sensor. When measuring the voltages in each pixel of the sensor there is always a bit of variation (i.e., noise) that arises. This noise is the result of the electronics used to make the measurement and is intrinsic to the read process.
By using better circuit design and construction, and better physical tolerances for the circuits, the read noise can be reduced to quite low levels. But in real devices with real materials it can never be made to vanish completely. Like the dark counts, the read noise is generally so small that it doesn’t matter. It is only under conditions of low light, when the signal is particularly small, that read noise becomes a problem.
The read noise is a constant property of each camera and is generally available from the camera manufacturer. It can also be measured if need be, though special software is required to make the necessary measurements.
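One common way to estimate read noise, sketched below with simulated frames, is to difference two frames taken under identical (ideally zero-light) conditions: any fixed structure cancels, and the standard deviation of the difference is √2 times the per-frame read noise. The numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

READ_NOISE = 3.0   # electrons RMS, the value we will try to recover

def bias_frame(shape=(512, 512)):
    """Simulate a zero-light frame: a fixed offset plus read noise."""
    fixed_pattern = 100.0   # constant offset, cancels in the difference
    return fixed_pattern + rng.normal(0.0, READ_NOISE, size=shape)

# Difference two frames; the fixed pattern subtracts away, leaving only
# noise with standard deviation sqrt(2) * READ_NOISE.
diff = bias_frame() - bias_frame()
estimate = diff.std() / np.sqrt(2)
print(estimate)   # close to 3.0
```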
How Increasing the ISO Exacerbates Noise
So we have seen that sources of noise are always present in our images. When we increase the ISO setting we make any noise present much more noticeable. This happens because raising the ISO setting does not make the signal any stronger and it does not raise the sensitivity of the camera.
Raising the ISO merely boosts the output of the sensor readout, multiplying it by a constant factor called the gain. When we boost the output we boost both the signal and the noise by the same amount, and that does not change the signal-to-noise ratio. If before the boost we had a dim image with low signal-to-noise, afterward we have a bright image, still with low signal-to-noise. And, again, it is the signal-to-noise ratio that matters in the quality of the image.
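A quick simulation makes the point: multiplying a noisy measurement by a constant gain brightens it but leaves the signal-to-noise ratio unchanged. The photon count below is an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(3)

# A dim measurement: few photons, so low signal-to-noise.
counts = rng.poisson(25, size=100_000)      # mean 25, noise sqrt(25) = 5
snr_before = counts.mean() / counts.std()

gain = 8                                    # analogous to raising ISO
boosted = gain * counts                     # brighter numbers...
snr_after = boosted.mean() / boosted.std()  # ...same signal-to-noise

print(snr_before, snr_after)   # both close to sqrt(25) = 5
```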
The figure above illustrates the effect of increasing the gain. Synthetic data are shown that could represent, for example, the brightness of pixels (number of electrons held in each pixel) plotted on the vertical axis, and the pixel number along a row in a camera sensor running on the horizontal axis. The noise is visible as the random vertical spread in the base value of these data.
There are four “signals” injected into these data, each a multiple of the typical noise value: the first is equal to the noise value (called 1-sigma), the second is twice the noise value (2-sigma), the third is three times the noise value (3-sigma), and the fourth is four times the noise value (4-sigma).
The lower plot shows the data themselves. The upper plot is the data multiplied by a constant gain of 3, a multiplication that is analogous to increasing the ISO setting on a camera to make the image brighter. The 3-sigma and 4-sigma signals are both easily seen in either plot. The 1-sigma and 2-sigma signals are lost in the noise in both.
The plots demonstrate how increasing the brightness of an image by increasing the gain does not improve the quality of the image if the signal-to-noise is too low. If we have regions where the signal is high enough, increasing the gain can help. Otherwise, it just gives us a brighter field of noise, with little or no image improvement.
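The experiment in the figure is easy to reproduce in a few lines; the baseline level, noise value, and signal positions below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)

sigma = 10.0                                   # typical noise value
data = 100.0 + rng.normal(0.0, sigma, 400)     # noisy baseline of 400 pixels
for k, pos in zip((1, 2, 3, 4), (80, 160, 240, 320)):
    data[pos] += k * sigma                     # inject a k-sigma "signal"

boosted = 3 * data                             # gain of 3, like raising ISO

# The gain scales signal and noise together, so the same peaks stand out
# (or stay buried) in both versions of the data.
for label, d in (("raw", data), ("gain x3", boosted)):
    print(label, round((d - d.mean()).max() / d.std(), 2))
```

Both printed ratios are identical: brightening the data has not made any signal easier to separate from the noise.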
Since the ISO setting for a camera only increases the gain of the image, and therefore its brightness, it tends to make any noise more prominent, just as is the case in the last figure.
And since images always contain noise at some level, increasing the ISO will always make images look noisier. That is why it is generally best to use the lowest ISO setting possible when making an image, preferably the base ISO of the camera, and then adjust the exposure using shutter speed and aperture in order to obtain a bright enough image.
However, if the choice is between a noisy image and no image, which it can be in very low light conditions, then it might be better to boost the ISO and capture the image. Every photographer must decide on her or his own what is an acceptable noise level, as conditions dictate.
We hope these two articles on what ISO is and the science behind digital noise helped clear up any confusion about this important camera setting. If you have any questions about this discussion on ISO, please comment below.