With the launch of the Google Pixel 3, smartphone cameras have taken yet another leap in capability. I had the opportunity to sit down with Isaac Reynolds, Product Manager for Camera on Pixel, and Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, to learn more about the technology behind the new camera in the Pixel 3.
One of the first things you might notice about the Pixel 3 is the single rear camera. At a time when we're seeing companies add dual, triple, even quad-camera setups, one main camera seems at first an odd choice.
But after speaking to Marc and Isaac I think that the Pixel camera team is taking the correct approach - at least for now. Any technology that makes a single camera better will make multiple cameras in future models that much better, and we've seen in the past that a single camera approach can outperform a dual camera approach in Portrait Mode, particularly when the telephoto camera module has a smaller sensor and slower lens, or lacks reliable autofocus.
Let's take a closer look at some of the Pixel 3's core technologies.
1. Super Res Zoom
Last year the Pixel 2 showed us what was possible with burst photography. HDR+ was its secret sauce, and it worked by constantly buffering nine frames in memory. When you press the shutter, the camera essentially goes back in time to those last nine frames¹, breaks each of them up into thousands of 'tiles', aligns them all, and then averages them.
Breaking each image into small tiles allows for advanced alignment even when the photographer or subject introduces movement. Blurred elements in some shots can be discarded, and subjects that have moved from frame to frame can be realigned. Averaging simulates the effect of shooting with a larger sensor by 'evening out' noise. And going back in time to the last nine frames captured right before you hit the shutter button means there's zero shutter lag.
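The tile-based align-and-merge idea described above can be sketched in a few lines of code. This is a simplified illustration, not Google's actual HDR+ pipeline: the `align_tile` search range, the fixed tile size, and the simple sum-of-absolute-differences cost are all assumptions made for clarity (the real pipeline uses multi-scale alignment and robust merging).

```python
import numpy as np

def align_tile(ref_tile, frame, y, x, search=2):
    """Find the integer (dy, dx) shift that best matches the reference tile
    within `frame`, by minimizing the sum of absolute differences."""
    h, w = ref_tile.shape
    best_cost, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            # Skip candidate windows that fall outside the frame.
            if yy < 0 or xx < 0 or yy + h > frame.shape[0] or xx + w > frame.shape[1]:
                continue
            cand = frame[yy:yy + h, xx:xx + w].astype(np.float32)
            cost = np.abs(cand - ref_tile).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_shift = cost, (dy, dx)
    return best_shift

def merge_burst(frames, tile=16, search=2):
    """Tile-based align-and-merge: align each tile of every burst frame to
    the reference (first) frame, then average to reduce noise."""
    ref = frames[0].astype(np.float32)
    acc = np.zeros_like(ref)
    count = np.zeros_like(ref)
    for frame in frames:
        f = frame.astype(np.float32)
        for y in range(0, ref.shape[0] - tile + 1, tile):
            for x in range(0, ref.shape[1] - tile + 1, tile):
                dy, dx = align_tile(ref[y:y + tile, x:x + tile], f, y, x, search)
                acc[y:y + tile, x:x + tile] += f[y + dy:y + dy + tile,
                                                 x + dx:x + dx + tile]
                count[y:y + tile, x:x + tile] += 1
    return acc / np.maximum(count, 1)
```

Averaging nine aligned frames reduces noise roughly threefold (noise falls with the square root of the frame count), which is the 'larger sensor' effect the article describes.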
Like the Pixel 2, HDR+ allows the Pixel 3 to re ...