Digital photography has changed a lot over the past two decades, with clunky DSLRs giving way to sleek smartphones. Over the next 10 years, expect a similar evolution as the science behind the art changes.
Much of the technology in use today represents the breakthroughs of the first generation of digital cameras. Film was stripped away and digital image sensors took its place, but much of the rest of the camera — things like lenses, shutters, autofocus systems — often stayed largely the same. Manufacturers centered camera designs on the single, fleeting snap of the shutter.
Now two big trends are reshaping our expectations of digital photography. Computational photography, which uses computing technology to improve photos, vaults over the limits of smartphone camera hardware to produce impressive shots. And mirrorless camera design, which drops hardware once necessary for film and elevates the image sensor’s importance, overhauls the mechanics of traditional cameras. Old assumptions about optics are being reconsidered — or discarded — as computer processing takes over.
“Cameras will change more in the next 10 years than in the past 10,” said Lau Nørgaard, vice president of research and development at Phase One, a Danish company that makes ultra-premium 151-megapixel medium-format cameras costing $52,000 apiece.
The changes will matter to all of us, not simply professional photographers on fashion shoots. New technology will mean better everyday snapshots and new creative possibilities for enthusiasts. Everything — selfies, landscapes and family portraits — will simply look better.
For much of camera history, bigger meant better. A larger frame of film could capture more image detail, but that meant a bigger camera body. Bigger lenses offered more detail, but that meant more weight.
Computational photography, which runs on powerful processors, will change that paradigm. And that’s good news because most of us rely on our phones for taking pictures.
Perhaps some of the most advanced computational photography available now is in Google’s Pixel 3 phone, which arrived in October. Here’s some of what it can do:
- Combine up to nine frames into a single shot with a technology called HDR+ that captures details in both dark shadows and bright highlights.
- Monitor how much your hands shake so it can snap shots during fleeting moments of stillness.
- Compare multiple shots to find the ones where people aren’t blinking or making awkward facial expressions.
- Brighten the parts of the image where it detects humans and slightly smooths skin to make subjects look better.
- Zoom in better by capturing more data about the scene from multiple shots and using artificial intelligence technology that predicts how best to expand an image.
- Photograph in dim conditions by merging multiple shots through a technology called Night Sight.
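Techniques like HDR+ and Night Sight also align frames and handle motion, but the core idea behind merging a burst is simple: averaging several noisy captures of the same scene suppresses sensor noise, roughly by the square root of the number of frames. Here is a toy sketch of just that merging step, using made-up image data rather than anything from Google’s pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# A stand-in "true scene": a smooth gradient instead of a real photo.
true_image = np.tile(np.linspace(50, 200, 64), (64, 1))

# Simulate a burst of 9 noisy frames, as a small phone sensor might capture.
frames = [true_image + rng.normal(0, 20, true_image.shape) for _ in range(9)]

# Merge the burst by averaging; noise drops by roughly sqrt(9) = 3x.
merged = np.mean(frames, axis=0)

single_noise = np.std(frames[0] - true_image)
merged_noise = np.std(merged - true_image)
print(f"single-frame noise: {single_noise:.1f}")
print(f"merged-burst noise: {merged_noise:.1f}")
```

Real multi-frame pipelines must first register the frames and reject pixels where the subject moved, which is where most of the engineering effort goes; the averaging itself is the easy part.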
Isaac Reynolds, Google’s Pixel camera product manager, says his company’s product underscores a fundamental change in what we think cameras are. Much of the Pixel 3’s performance and features come not from the lens and sensor but from software running on the phone’s chip that processes and combines multiple frames into one photo.
“You’re seeing a redefinition of what a camera is,” Reynolds said. “The Pixel 3 is one of the most software-based cameras in the world.”
Seeing in 3D
It’s all pretty radical compared with a shutter flipping open for a moment so photons can change the chemistry of film. And it’s only the beginning.
Two years ago, the iPhone 7 started using two cameras side by side, which lets the phone judge just how far away each element of the scene is. The phone’s computing hardware then constructs a 3D-infused layer of information called a “depth map” in which each pixel of a photo holds both color and spatial information.
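The geometry behind a two-camera depth map is classic stereo vision: a nearby object appears shifted between the two lenses more than a distant one, and that shift (the “disparity”) converts directly to distance. A minimal sketch of the relationship, with illustrative numbers that are not actual iPhone specifications:

```python
# Classic stereo geometry: depth = focal_length * baseline / disparity.
# These values are hypothetical, chosen only to make the math readable.
FOCAL_LENGTH_PX = 1500.0   # lens focal length, expressed in pixels
BASELINE_M = 0.01          # 1 cm gap between the two side-by-side lenses

def depth_from_disparity(disparity_px: float) -> float:
    """Distance in meters to a point seen with the given pixel shift."""
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

near = depth_from_disparity(30.0)  # large shift between lenses -> close
far = depth_from_disparity(3.0)    # small shift -> far away
```

Running this per pixel across two matched images is what yields the depth map: an array where every pixel carries a distance alongside its color.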
Initially, Apple used the technology to re-create a style used in portrait photography that requires expensive camera lenses. Those lenses could shoot a shallow depth of field that focused on the subject but left the background an undistracting blur. Apple used software to do the blurring.
The depth map has more to offer. With Lightroom, Adobe’s widely used photo editing and cataloging software, you now can adjust an iPhone photo based on that 3D information. For example, you can selectively brighten shadowed subjects in the foreground while leaving a bright background unchanged.
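A depth-aware edit of this kind boils down to masking pixels by their distance and adjusting only those. This sketch, with fabricated data in place of a real photo and depth map, shows the idea:

```python
import numpy as np

# Hypothetical data: a 4x4 grayscale image and its per-pixel depth map
# (0.0 = nearest, 1.0 = farthest), standing in for a real photo.
image = np.full((4, 4), 100.0)
depth = np.linspace(0.0, 1.0, 16).reshape(4, 4)

# Brighten only the pixels the depth map marks as close (the foreground
# subject), leaving the distant background untouched.
foreground = depth < 0.5
edited = image.copy()
edited[foreground] *= 1.3
```

Tools like Lightroom wrap the same per-pixel masking in a slider, but the depth map is what makes the selection possible without hand-drawn masks.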
That’s a manual process photo enthusiasts will appreciate, but the same depth information should help smartphones take better photos automatically, said Google distinguished engineer Marc Levoy, who coined the term “computational photography” in 2004 when he was at Stanford University. A camera that generates reliable depth maps would let its app fix problems with brightness and color balance on its own so photos look more natural.
“We have just begun to scratch the surface with what computational photography and AI have done to improve the basic single-press picture taking,” Levoy said.
Generations of photographers grew up using SLRs — short for single-lens reflex. The design is named for the reflex mirror that bounces light from the lens into a viewfinder so you can compose a shot. When you take the photo, the mirror flips out of the way and the shutter opens to let light reach the film.
The first serious digital cameras — DSLRs — replaced the film with an image sensor and memory card. But they left almost everything else the same — the mirror and viewfinder, the autofocus system, the mount for interchangeable lenses.
Now mirrorless cameras are changing that setup, dumping the mirror and optical viewfinder. You compose your shots using a screen. It might be the screen on the back of the camera or a smaller electronic viewfinder you use like a film-era photographer.
With mirrorless cameras, the sensor is recording nonstop. It’s essentially taking a video but throwing away most of the data, except when you push a button and pluck out a single frame. Indeed, this video-centric design makes mirrorless cameras adept at video.
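That continuous-readout design can be sketched as a rolling buffer: the sensor streams frames constantly, the camera keeps only a short window of recent ones, and the shutter button simply plucks a frame out. The buffer size and frame numbers below are illustrative, not taken from any real camera:

```python
from collections import deque

# A mirrorless sensor reads out nonstop; conceptually it is a video feed
# from which the camera retains only a short rolling window of frames.
BUFFER_FRAMES = 5
buffer = deque(maxlen=BUFFER_FRAMES)

for frame_number in range(100):   # stand-in for the live sensor stream
    buffer.append(frame_number)   # older frames fall off automatically

# Pressing the shutter "plucks out" the newest frame as the photo.
photo = buffer[-1]
```

The same structure explains why mirrorless bodies are so capable at video and fast bursts: capturing more frames just means keeping more of a stream that already exists.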
What’s so great about mirrorless designs? They offer smaller, lighter camera bodies that can shoot photos silently; use autofocus across the frame, not just in the central portion; make it easier to compose shots at night; shoot fast bursts of photos; and preview shots more accurately through an electronic viewfinder so you can better dial in exposure, focus and aperture.
“There’s none of this dropping the camera down, looking at the image and seeing if it’s too bright or dark,” said wildlife photographer David Shaw, who sold his Canon gear to move to Panasonic’s Lumix G9 camera, which is smaller and a quarter the weight. “I can see it all as I’m shooting.”
Canon and Nikon embrace mirrorless
Mirrorless cameras have been gaining traction for years, but here’s what changed in 2018: Canon and Nikon.
The two DSLR heavy hitters, still the top dogs of the traditional photography market, started selling high-end mirrorless models: Nikon’s Z7 and Z6, and Canon’s EOS R. Following Sony’s lead, they come with large “full-frame” sensors that are best at capturing color and light data. Nikon and Canon aren’t phasing out their traditional SLRs, but their mirrorless models will be peers. Meanwhile, mirrorless pioneer Panasonic joined in with plans for two full-frame models debuting in 2019.
Mirrorless is the future, says Stuart Palley, a Newport Beach, California, professional photographer whose wildfire photography appears in his new book, Terra Flamma.
“DSLRs are going the way of medium formats and Speed Graphics,” Palley said, referring to film-era camera designs that now are mostly footnotes in history. He’s begun shooting with a Nikon Z7 and likes how it’s lighter than his Nikon D850 DSLR.
“It’s so liberating carrying around less,” Palley said.
The Z7, like the Sony and Panasonic full-frame mirrorless models, also can move its image sensor to compensate for shaky hands — something utterly impossible with film. “I can shoot a handheld image of the Milky Way now. It’s crazy,” Palley said.
Outpaced by phone innovation?
The traditional camera makers are adapting. But will they adapt fast enough? There’s nothing in principle that stops them from using the same computational photography techniques that smartphone makers do, but so far that’s been a secondary priority.
“The camera guys have to look at what’s going on with handsets and computational photography and see what’s adaptable to traditional cameras,” said Ed Lee, a Keypoint Intelligence analyst. He expects the pace of change in photography technology to increase.
The phone makers are moving fast, but Phase One’s Nørgaard doesn’t see any problem embracing their technology. Indeed, the company has begun embedding its Capture One editing software directly into the camera body.
“The cellphones make really good images from a small camera,” Nørgaard said. “We can do the same on the other end, where we start with an absolutely fantastic image. The software approach will push that even further.”
But smartphones have gobbled up the point-and-shoot camera market and each year pick up more high-end camera abilities. Phones that sell by the tens of millions offer a huge incentive for chipmakers like Qualcomm to push photography technology. The company’s next-gen mobile chip, the Snapdragon 855, adds all kinds of photo smarts, like the ability to detect, identify and track objects in a scene, to create depth maps and to counteract shaky hands.
And that’s just next year’s chip, said P.J. Jacobowitz, Qualcomm’s senior marketing manager for camera and computer vision.
“In this book, there are about 50 chapters,” he said of digital photography tech. “We are in chapter two.”