The era of computational photography

Above: The Light L16 camera looks like the mutant love child of an iPhone and a common housefly. Image courtesy Light.

BitDepth#1015 for November 17, 2015

Photography was once straightforward. Unbelievably fussy and complex, perhaps, but the rules it followed were understandable to anyone who had made it to at least a third form class in chemistry and physics.

Even the structure of film itself made sense to anyone who’d ever baked a cake. One layer trapped green, the next blue and so on.

Black and white film, where you started if you got into the craft three or four decades ago, was even more direct.

Light-sensitive crystals suspended in a gel, coated onto plastic and dried. You soaked the film in one solution to develop the exposed bits and in another to fix the image so it could be handled in the light.

The dirty secret of today’s vastly simplified digital era of photography is that it is underpinned by exponentially more complex technologies that are understood by very few of the people using all those image capture devices.

Twenty years ago, the automatic mode on a camera made exposure calculations. Modern cameras consult a vast database of image models programmed into them to deliver results based on extensive forecasting of what the light focused on the sensor is likely to mean.

It’s no insult to note that many digital cameras are actually better informed about common photographic scenarios than the people pressing their shutters are.

The next revolution in photography is just beginning, and it isn’t going to be about mirrorless cameras, the current plat du jour, at all.

Photography became inextricably linked with computers with the coming of the Barneyscan photo scanner, which shipped with an image editor that would soon eclipse the device it supported.

Adobe took one look at the software, licensed it from the brothers who had developed it, and shipped it as Photoshop. Other image editors have come and gone, but the seminal work of the Knoll brothers remains the market leader for anyone who wants to work with pixel-based images.

The link between digital photography and its traditional roots has become ever more tenuous in recent years. In my own workflow, I create in two diametrically opposed ways. For documentary and editorial work I treat the files as I would a Tri-X negative, moving them from capture to final display with only the lightest of bit-level touches.

For my commercial work, pixels are putty, and my initial photography is often no more than raw material harvesting. For architectural and industrial work it’s not uncommon for the final image to be made up of between five and fifteen different captures, merged using high dynamic range software and stitched together.
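The merge step can be sketched in a few lines. This is my own toy illustration of exposure merging, assuming a linear sensor response and a simple hat-shaped weighting; it stands in for, and is far cruder than, the commercial HDR software actually used for this kind of work.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Naively merge bracketed exposures into one high-dynamic-range frame.

    frames: list of float arrays with values in [0, 1], one per exposure.
    exposure_times: relative shutter time for each frame.
    Pixels near pure black or pure white get little weight, so each part
    of the scene is reconstructed from the exposures that captured it best.
    """
    numerator = np.zeros_like(frames[0], dtype=np.float64)
    denominator = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        # Hat-shaped weight: trust mid-tones, distrust clipped pixels.
        weight = 1.0 - np.abs(2.0 * frame - 1.0)
        numerator += weight * (frame / t)  # per-frame radiance estimate
        denominator += weight
    return numerator / np.maximum(denominator, 1e-8)

# Two simulated exposures of the same three-pixel scene.
scene = np.array([0.05, 0.5, 5.0])     # true relative radiance
short = np.clip(scene * 0.1, 0, 1)     # underexposed frame
long_ = np.clip(scene * 1.0, 0, 1)     # frame with blown highlights
hdr = merge_exposures([short, long_], [0.1, 1.0])
# hdr recovers the full radiance range neither frame held alone.
```

The shadow detail comes from the long exposure and the blown highlight from the short one, which is the whole point of merging brackets.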

Is the result true to life? Most are far truer to what I witnessed than a single photo could ever be, though the images range far beyond what’s possible with traditional techniques.

You might be surprised to discover that most digital camera sensors are colour blind. Images of the Magellanic Clouds, for instance, are black and white photos enhanced using the best scientific understanding of how light is likely to behave over massive distances.

Once you’re doing that kind of math, enabling your grayscale camera sensor to figure out colours through a Bayer filter is almost an afterthought.
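The idea behind that colour reconstruction can be shown in a toy sketch; this is my own illustration of an RGGB Bayer pattern and the crudest possible demosaic, not any real camera’s pipeline.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-colour image through an RGGB Bayer pattern.

    Each photosite records only one channel, just as a real colour-blind
    sensor behind a Bayer filter does; colour is rebuilt afterwards in software.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return mosaic

def demosaic_2x2(mosaic):
    """Crudest demosaic: treat each 2x2 RGGB cell as one colour pixel."""
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0
    b = mosaic[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

# A flat grey test image survives the round trip exactly.
grey = np.full((4, 4, 3), 0.5)
recovered = demosaic_2x2(bayer_mosaic(grey))
```

Real cameras interpolate far more cleverly than this block-averaging does, but the principle is the same: the sensor sees luminance, and the colours are computed.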

A scanning electron microscope doesn’t even use light to capture its photos of the extremely tiny.

The big brains that drove the development of your Canon 5D haven’t stopped thinking about how photography might work, and there will soon come a day when the word “photoshopped” sounds as quaint and obscurantist as the idea of a darkroom seems today.

Look to the way the Lytro and the Light L16 cameras record images for an indication of how photography is going to be done in the future.

I played around with a second generation Lytro camera, the Illum, at last year’s PhotoPlus Expo and while I’d understood what it was supposed to do, actually seeing it happen was an overwhelming experience.

I’d spent the last thirty-nine years of my life working at nailing focus, one of the basics of photography, and this was a camera that made that effort optional.

The Light L16 is an even wilder proposition. Once you get past the rather disturbing front face of the camera, which points 16 lenses of different focal lengths at a potential subject and looks like a camera imagined by Poe, the capabilities of the camera are positively mind-blowing.

The camera offers the equivalent of a 35-150mm zoom lens in a package the size of an unusually tall, moderately thick paperback book.

The user zooms on the smartphone-style display screen to frame a shot, and the camera calculates which ten of its lenses it should use to actually take the photo.

As with the Lytro, depth of field is an after-the-fact user decision.

These new devices are, like Apple’s early QuickTake, being positioned in the market for curious early adopters, but light-field capture and computational photography are only in their earliest incarnations.

More will follow, and fast.