As part of our regular appearances on "The New Screen Savers," a show on the TWiT Network (named after its flagship show, This Week in Tech), our Science Editor Rishi Sanyal joined host Leo Laporte and co-host Megan Morrone to talk about how smartphone cameras are revolutionizing photography. Watch the segment above, then catch the full episode here.
Rishi has also expanded below on some of the topics covered in the segment, with detailed examples that clarify several of the points discussed. Have a read after the fold once you've watched the segment.
You can watch The New Screen Savers live every Saturday at 3pm Pacific Time (23:00 UTC), on demand through our articles, the TWiT website, or YouTube, as well as through most podcasting apps.
So who wins? iPhone X or Pixel 2?
Not so fast. Neither.
Each has its strengths, which we hope to tell you about in our video segment above and in our examples below. Google and Apple take different approaches, and each has its pros and cons, but there are common overlapping practices and themes as well. And that's before we begin discussing video, where the iPhone's 4K/60p HEVC video borders on professional quality while Google's stabilization may make you want to chuck your gimbal.
Smartphones have to deal with the fact that their cameras, and therefore sensors, are tiny. And since we all (now) know that, generally speaking, it's the amount of light you capture that determines image quality, smartphones have a serious disadvantage to deal with: they don't capture enough light. But that's where computational photography comes in. By combining machine learning, computer vision, and computer graphics with traditional optical processes, computational photography aims to enhance what is achievable with traditional methods.
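One of the simplest computational tricks for making up that light deficit is burst stacking: averaging several short exposures of the same scene suppresses random sensor noise by roughly the square root of the number of frames. Neither Apple's nor Google's actual pipeline is public in full (real implementations also align frames and reject motion), but the core idea can be sketched in a few lines of Python using a simulated flat gray scene in place of real sensor data:

```python
import random

def average_frames(frames):
    """Average pixel values across aligned frames (burst stacking)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Simulate a tiny "sensor": a flat mid-gray scene plus per-frame Gaussian noise.
random.seed(0)
true_scene = [128.0] * 1000

def noisy_frame():
    return [p + random.gauss(0, 10) for p in true_scene]

def rms_error(img):
    """Root-mean-square deviation from the noise-free scene."""
    return (sum((a - b) ** 2 for a, b in zip(img, true_scene)) / len(img)) ** 0.5

single = noisy_frame()                               # one exposure
stacked = average_frames([noisy_frame() for _ in range(8)])  # 8-frame burst

print(rms_error(single))   # noise of a single frame
print(rms_error(stacked))  # noticeably lower: ~1/sqrt(8) of the above
```

With eight frames the residual noise drops to roughly a third of the single-shot level, which is why a burst from a tiny sensor can rival a single exposure from a much larger one, at least for static scenes.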
Intelligent exposure and processing? Press. Here.
One of the defining characteristics of smartphone photography is the idea that you can get a great image with one button press, and nothing more. No exposure ...