When the Google Pixel 3 was released, it made waves for both good and bad reasons. The bad things that plagued the Pixel's third generation included photos not being saved, fatal camera errors, battery issues, and the dreaded notch. And as if a single hideous notch were not enough, some Pixels started showing two notches: the physical one at the top and a software bug that drew a second notch on the side. But even with these problems, we can almost forgive the Pixel lineup for two simple reasons: its bloatware-free, simple, clean, pure Android UI, and one of the best (if not the best) smartphone cameras ever made.
A look at the Pixel 3's camera spec sheet reveals few differences from its predecessor. Both the second and third generations of Pixel carry the same single 12.2-megapixel rear camera and an 8-megapixel front camera, although the third generation adds a second, wide-angle selfie shooter, meaning the latest Pixel phones have two front cameras.
But the major standout feature of these cameras isn't the hardware; it's what's on the inside, meaning the software. And boy, has Google nailed it. The awesome pictures the Pixel produces come from Google's own image-processing software, which makes other phones with dual, triple, and even quadruple cameras bite the dust.
Among the multitude of camera features Google introduced in its third generation of Pixel phones, Night Sight has drawn applause and praise from photo and tech enthusiasts alike. And why wouldn't it? It produces low-light pictures far beyond what the rest of the modern smartphone industry can provide. But another standout feature of the new Pixel camera is Top Shot. This mode lets users capture perfectly timed photos, the kind that are notoriously difficult to snap at the right moment.
Top Shot is an upgraded version of Motion Photos from previous Pixel phones. It captures frames both before and after you press the shutter button, letting you scroll through them and pick the perfect, Instagram-ready shot.
Google explained that the camera takes up to 90 images, starting before the shutter button is even pressed, and then analyzes all of them. Based on that analysis, it recommends the best snap, judging factors such as smiles, emotional and facial expressions, and open eyes. Frames are also ranked on technical signals such as exposure, lighting, and optical flow. I know what you're thinking: isn't capturing 90 images in 1.5 seconds (yes, 1.5) going to strain the battery and the phone as a whole? Well, that's where Google's machine learning comes into play. It has made Top Shot power-efficient and adaptive, which means it puts very minimal strain on your phone.
Another big question arises then: what if there aren't any faces, and thus no human expressions to gauge the best snap? Fortunately, Google's machine learning has contingency plans for such shots too. Alongside facial and emotional expressions, it also takes into account motion blur, lighting, exposure, object motion, focus, and white balance. Combined, these signals let the camera produce excellent photos worthy of every bit of the praise we have been hearing about it.
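To make the idea concrete, here's a minimal sketch of how a Top Shot-style selector could work: every frame in the pre/post-shutter buffer gets a weighted quality score, and the highest-scoring frame is recommended. The `Frame` fields, the weights, and the scoring function are all hypothetical illustrations of the signals described above, not Google's actual (unpublished) model.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """Hypothetical per-frame quality signals, each normalized to 0..1."""
    index: int
    smile_score: float   # higher = clearer smile
    eyes_open: float     # higher = eyes more open
    sharpness: float     # higher = less motion blur
    exposure: float      # higher = better exposed

# Illustrative weights -- the real model's features and weights are not public.
WEIGHTS = {"smile_score": 0.35, "eyes_open": 0.25,
           "sharpness": 0.25, "exposure": 0.15}

def score(frame: Frame) -> float:
    """Weighted sum of quality signals for one buffered frame."""
    return sum(getattr(frame, name) * w for name, w in WEIGHTS.items())

def best_shot(buffer: list[Frame]) -> Frame:
    """Pick the highest-scoring frame from the shutter buffer."""
    return max(buffer, key=score)

if __name__ == "__main__":
    buffer = [
        Frame(0, smile_score=0.2, eyes_open=0.4, sharpness=0.9, exposure=0.8),
        Frame(1, smile_score=0.9, eyes_open=0.9, sharpness=0.8, exposure=0.7),
        Frame(2, smile_score=0.8, eyes_open=0.1, sharpness=0.6, exposure=0.9),
    ]
    print(best_shot(buffer).index)  # prints 1 -- the open-eyed smile wins
```

In the no-faces case Google describes, a selector like this would simply drop the expression terms and lean on the purely technical signals (sharpness, exposure, and so on).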
Here's a video posted by Google showing Top Shot in action: