which wavelengths of light are received at each pixel. Several methods for discerning this information have been explored over the years, but two distinct systems have evolved to become the most widely used in modern digital cameras.

Common in the broadcast industry is the three-sensor camera. In this design a prism splits the light entering the camera into red, green, and blue components, directing each component toward its own sensor. Each of the three sensors contains the same number of pixels, and the intensity of light at each pixel is recorded, yielding red, green, and blue information at every point. This information is combined to form a color image.

The other prevalent system, and the one we will focus on in this article, is the single-sensor camera. As the name describes, only one sensor is used, but with the addition of a segmented color filter on top. The filter covers each pixel cavity, and only light with wavelengths to which the filter is transparent can pass through. The most common filter pattern is the Bayer Array, in which red, green, and blue filters are arranged in a tile-like fashion (Figure 2).

Figure 2 – Bayer Array filter pattern. Notice that the Bayer system uses twice as many green tiles as red or blue tiles. Since our eyes are most sensitive to green light, doubling the number of green tiles lets camera manufacturers improve detail and reduce image noise.

An obvious limitation of this single-sensor approach is that it reduces the resolution of information that can be captured, since each pixel only receives light within a specific color region (Figure 3).

Figure 3 – The Bayer filter allows only specific color regions to pass into each pixel.

At least three pixels—one each of blue, green, and red—are required in order to fully resolve the color of light in a given area. To compensate for this decrease in resolution, an interpolation process takes place in the camera's software to estimate the color at each pixel by taking its neighbors into account. Pixels are grouped into sets of overlapping arrays, which are then compared and averaged to extract more information from the image. The ultimate goal is to have enough color information to correctly determine the real-world colors in the scene, as they would be perceived by the human eye. Although most feature films and television productions prefer to adjust color, often unnaturally, to suit the director's intent, it is useful to start with as accurate an image as possible.
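To make the interpolation step concrete, the sketch below implements bilinear demosaicing, one of the simplest neighbor-averaging schemes, in Python with NumPy. The RGGB tile layout, the function names, and the 3x3 averaging window are illustrative assumptions; production cameras use more sophisticated, often proprietary, interpolation.

    import numpy as np

    def bayer_color(row, col):
        # Filter color at a pixel for an assumed RGGB layout:
        # R at even row/even col, B at odd row/odd col, G elsewhere.
        # Actual layouts vary by manufacturer.
        if row % 2 == 0 and col % 2 == 0:
            return "R"
        if row % 2 == 1 and col % 2 == 1:
            return "B"
        return "G"

    def demosaic_bilinear(raw):
        # Estimate full RGB at every pixel by averaging the nearest
        # recorded samples of each missing color -- a toy version of
        # the in-camera interpolation described above.
        h, w = raw.shape
        planes = np.zeros((h, w, 3))
        known = np.zeros((h, w, 3), dtype=bool)
        channel = {"R": 0, "G": 1, "B": 2}
        # Scatter each raw sample into its own color plane.
        for row in range(h):
            for col in range(w):
                c = channel[bayer_color(row, col)]
                planes[row, col, c] = raw[row, col]
                known[row, col, c] = True
        # Fill each missing color with the mean of the recorded
        # samples of that color in the surrounding 3x3 neighborhood.
        out = planes.copy()
        for row in range(h):
            for col in range(w):
                for c in range(3):
                    if known[row, col, c]:
                        continue
                    r0, r1 = max(row - 1, 0), min(row + 2, h)
                    c0, c1 = max(col - 1, 0), min(col + 2, w)
                    patch = planes[r0:r1, c0:c1, c]
                    hits = known[r0:r1, c0:c1, c]
                    out[row, col, c] = patch[hits].mean()
        return out

    # Example: demosaic a tiny synthetic raw mosaic.
    raw = np.random.rand(4, 6)
    rgb = demosaic_bilinear(raw)  # shape (4, 6, 3)

Because half of all tiles are green, a missing green value is never more than one pixel away from a recorded green sample, which is one reason the doubled green tiles improve perceived detail.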
No matter how close we can come to the eye's response, there is one critical difference: adaptability. The human visual system is highly adaptive to our surrounding environment. Some of the adjustment occurs optically, as our pupils enlarge and shrink according to the intensity of the light that reaches the eye, and other adaptation occurs mentally, as our brain helps to color-correct the images we see, adjusting them to fit what we think “looks right.” Digital sensors are quite the opposite: they record exactly what they see, leaving manipulation of the image to the camera software and the postproduction process.

Focusing on the spectrum

The three types of cones in our eyes cover a range of wavelengths, from roughly 360 nm to 830 nm, but with greatly diminished sensitivity at either end. Although camera manufacturers would very much like to construct digital sensors with sensitivities matching those of the human visual system, they are restricted by the limits of Bayer Array filter materials and manufacturing methods. The sensitivity of a single-sensor digital camera varies widely from manufacturer to manufacturer and from model to model (sometimes even from camera to camera). Yet most follow a general pattern of three spectral peaks, corresponding to the blue, green, and red transparency of the filters in the Bayer Array (Figure 4).

Figure 4 – Example of spectral sensitivity in a single-sensor digital camera.
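As a rough illustration of what such a three-peaked response means for the recorded signal, the sketch below models each channel's sensitivity as a bell curve and weights an incoming spectrum by it. The peak wavelengths and widths are invented placeholders, not measurements of any real camera.

    import numpy as np

    # Wavelength axis spanning roughly the 360-830 nm range cited above.
    wavelengths = np.arange(360, 831, dtype=np.float64)

    def peak(center, width):
        # Illustrative bell-shaped sensitivity curve (not real filter data).
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Hypothetical blue, green, and red channel sensitivities; real curves
    # vary from manufacturer to manufacturer, as noted above.
    sensitivity = {"B": peak(460, 30), "G": peak(540, 35), "R": peak(610, 35)}

    def channel_response(spectrum, channel):
        # Recorded value for one channel: the incoming spectral power
        # weighted by the filter's transparency at each wavelength,
        # summed across the band.
        return float(np.sum(spectrum * sensitivity[channel]))

    # Example: a flat (equal-energy) spectrum excites all three channels.
    flat = np.ones_like(wavelengths)
    values = {c: channel_response(flat, c) for c in "RGB"}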