Seeing What's Not There
Text and photography copyright © Richard Bernabe. All rights reserved.

One of the first laments heard from a beginning photographer is, "My pictures don't look anything like what I saw." The course of action that follows is either purchasing a newer, more expensive camera or actually learning a few of the ways the digital camera or film "sees" and interprets the world differently from the human visual system.

A seasoned photographer would advise them to save their money and do a little research instead. It's no secret that the two systems perform and operate very differently, and equivalence should not be expected.

Listed below are just a few examples of how our eyes and brain see and perceive things differently from our camera. Some of these are fairly obvious and some are not. The conclusion, however, is that in photography, what you don't see is just as important as what you do.

Lost Dimension

This may seem obvious, but it may also be too obvious for you to have given it much thought. We have two eyes and binocular vision, so we can see in three dimensions. The camera can only capture and represent two. During the photographic process, then, we are creating a two-dimensional representation of a three-dimensional experience.

If an important element of the scene you are photographing includes depth, this handicap must be considered. Using leading lines, near/far compositions with a wide-angle lens, and side lighting are just a few of the ways to create the illusion of depth in a flat, two-dimensional image.

A wide-angle perspective creates the illusion of depth.

Color Constancy

Our human visual system subjectively perceives the color of objects as relatively constant, even under varying lighting conditions. This tendency is called color constancy. Anyone who learned outdoor photography with color film knows all about the color shifts that occur under different color temperatures of light. White water or snow will be rendered as blue in open shade. That is because the shade receives no direct illumination from the sun; the only light it receives radiates from the blue sky.

Tungsten lighting creates a yellowish-orange color cast when daylight-balanced film is used. That yellowish-orange color is really there, even if we don't perceive it!

We see the colors as the same, regardless of the light's color. With digital cameras, we control these color shifts with White Balance. Auto White Balance does a decent job of making corrections in-camera, but it's still important to be aware of the color of the light, even when we cannot see it. If you shoot raw, that awareness can influence how you represent the scene later during processing.
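For readers who like to see the mechanics, a white balance correction is, at its core, per-channel scaling: the red, green, and blue values are multiplied by gains chosen so that something neutral renders as neutral. The Python sketch below uses the simple "gray-world" assumption (that the scene averages out to gray) purely as an illustration – it is not how any particular camera's Auto White Balance is implemented.

    import numpy as np

    def gray_world_white_balance(image):
        """Scale R, G, B so their means match (gray-world assumption).

        image: float array of shape (H, W, 3) with values in [0, 1].
        An illustrative heuristic, not any camera's actual AWB algorithm.
        """
        means = image.reshape(-1, 3).mean(axis=0)  # average of each channel
        gains = means.mean() / means               # boost weak channels, tame strong ones
        return np.clip(image * gains, 0.0, 1.0)

    # A bluish "open shade" patch: the blue channel runs hot.
    shade = np.full((2, 2, 3), [0.55, 0.60, 0.80])
    print(gray_world_white_balance(shade)[0, 0])   # -> [0.65 0.65 0.65], pulled to neutral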

Dynamic Range

Mileage varies, but the image sensors in most digital cameras have a dynamic range of five to seven stops. Color transparency film has even less than that. Since our eyes are capable of seeing usable detail across upwards of thirteen stops, this represents a formidable visual hurdle for beginning photographers to overcome.
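To appreciate what those numbers mean, remember that each stop is a doubling of light, so a dynamic range of N stops translates to a contrast ratio of 2 to the power of N. A quick calculation, using rough figures consistent with the ones above, shows how lopsided the comparison really is:

    # Each stop doubles the light, so N stops of dynamic range equals a
    # contrast ratio of 2**N. Stop counts are rough figures from the text.
    for label, stops in [("color transparency film", 5),
                         ("typical digital sensor", 7),
                         ("human eye", 13)]:
        print(f"{label}: {stops} stops = {2 ** stops:,}:1")
    # color transparency film: 5 stops = 32:1
    # typical digital sensor: 7 stops = 128:1
    # human eye: 13 stops = 8,192:1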

Lots of experience and dozens of failures teach us to evaluate a potential scene quite differently from how we have been conditioned to see it throughout our lives. The enormous dynamic range that our eyes can accommodate must be consciously compressed in order to anticipate areas of probable highlight clipping and/or blocked-up shadows, even though the scene looks just fine to the naked eye. Learn to see the scene not as it is, but as it will be captured by the camera.

Digital blends, HDR, fill flash, reflectors, graduated neutral density filters, or visiting a location under different lighting conditions are a few techniques for overcoming this disparity.
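As a toy illustration of the digital blend idea, the sketch below merges bracketed exposures by favoring, at every pixel, whichever frame is closest to mid-gray there – a bare-bones version of the "well-exposedness" weighting used in exposure fusion. It is a conceptual sketch under that one assumption, not the actual algorithm in any editing package.

    import numpy as np

    def naive_exposure_blend(exposures):
        """Blend bracketed exposures, weighting pixels near mid-gray.

        exposures: list of float arrays in [0, 1], all the same shape.
        A toy sketch of exposure fusion's well-exposedness weight.
        """
        stack = np.stack(exposures)                     # (N, H, W, 3)
        weights = np.exp(-((stack - 0.5) ** 2) / 0.08)  # peak at mid-gray
        return (weights * stack).sum(axis=0) / weights.sum(axis=0)

A real blend would also align the frames and smooth the weights, but the principle – let each exposure contribute where it is well exposed – is the same.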

An HDR composite of three different exposures allowed me to preserve detail in both the brightest and darkest areas of the scene. I was careful, however, not to make the darker areas, such as the foreground, unnaturally bright. This closely represents the range of tones that I experienced in person.

Tonal Response

Your digital camera captures light in a linear fashion, while you do not. For digital photographers, this makes it difficult to capture intense light the way we see it.

I like to illustrate it this way. If you were to walk into a completely dark room and turn on a 100-watt light bulb, your eyes would register a given amount of light equivalent to the output of that bulb. Turn on a second 100-watt bulb and the output doubles – there is now twice as much light. But you don't perceive twice as much light. You may perceive marginally more, but not nearly twice as much.

Your digital camera, however, would capture twice as much light. There is a one-to-one, linear relationship between light output and what it records. This is the primary reason why capturing extremely bright areas – such as the sun – is much more difficult with digital cameras than with film. Film, by the way, has a tonal response very similar to how humans see and interpret light intensity: a non-linear curve described by gamma.
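The two-bulb example is easy to put into numbers. Below, the camera's response is taken as strictly linear, while human brightness perception is approximated with a gamma of 2.2 – a common rough stand-in, assumed here purely for illustration, since the true response varies with viewing conditions:

    # Linear capture vs. an approximate perceptual (gamma) response.
    # Gamma 2.2 is an illustrative stand-in for human brightness perception.
    GAMMA = 2.2

    def perceived(linear_light):
        """Map linear light to approximate perceived brightness."""
        return linear_light ** (1 / GAMMA)

    one_bulb, two_bulbs = 1.0, 2.0
    print(f"camera (linear): {two_bulbs / one_bulb:.2f}x the signal")                       # 2.00x
    print(f"eye (gamma {GAMMA}): {perceived(two_bulbs) / perceived(one_bulb):.2f}x as bright")  # 1.37x

Doubling the light doubles the camera's raw signal, but under this model it looks only about 1.4 times brighter to us.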

Depth of Field

Do we humans see the world with very little or a lot of depth of field? When I ask this question of my classes, they nearly always answer, "A lot." Then I ask everyone to raise a finger, hold it eight inches or so away from their face, and focus intensely on it. Go ahead, try it. Now with your peripheral vision, try to catch a glimpse of the other side of the room and ask yourself whether it's in focus or not. It shouldn't be.

Since we rarely, if ever, stare at a single focal point for very long, this is not how we actually interpret the world. Instead, our eyes are constantly darting around and scanning the scene – back and forth, up and down, near and far. The brain creates a “composite image” of different focal planes and we interpret the entire scene as being in focus.

This may be why landscape images are regularly captured and presented by photographers in sharp focus from near to far, even though our eyes cannot physically see the scene that way.

You are free to file this in the useless-information department, but perhaps it is interesting to you nonetheless: based on the diameter of the pupil and the distance measured between the iris and the fovea, the maximum aperture of the human eye is close to f/2.4 fully dilated, with the minimum around f/9.
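The arithmetic behind those figures is just the standard f-number formula: focal length divided by aperture (pupil) diameter. The values below – an effective focal length of about 17mm and pupil diameters of roughly 7mm dilated and 2mm constricted – are typical textbook approximations, assumed here for illustration:

    # f-number = focal length / aperture diameter.
    # Anatomical values are rough textbook figures, assumed for illustration.
    FOCAL_LENGTH_MM = 17.0

    for state, pupil_mm in [("fully dilated", 7.0), ("constricted", 2.0)]:
        print(f"{state}: f/{FOCAL_LENGTH_MM / pupil_mm:.1f}")
    # fully dilated: f/2.4
    # constricted: f/8.5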

Our eyes cannot possibly see this much depth of field, but this is how we perceive the scene anyway.

Conclusion

These are not the only ways our eyes and brain differ from the camera in how visual information is seen and interpreted. There are also human vision's angle of view, the role of context, and the clever editing job our brain performs on the scene to simplify it before our very eyes. There are many, many others, in fact, but those mentioned above are some obvious examples that we should all be reminded of from time to time.

Reconciling how you and your camera see and perceive the world can make a big improvement in your photography. Remember, however: it's not always seeing what's there before your eyes that's important, but seeing what's not there.


Richard Bernabe has been a full-time professional outdoor photographer since 2003. Information on the photography workshops he regularly leads can be found at Mountain Trail Photo and his website.
