Original article was published on Artificial Intelligence on Medium
Computational Photography at its Finest
The usual conception of mobile photography is that the higher a camera's megapixel count, the better it is. Because of the sheer number of pixels it captures, the photo is supposed to remain clear and detailed even when you zoom in. That's why, over the years, smartphone makers have tended to increase the megapixels of their flagship phones with every iteration. For marketing purposes? Yes. Whether or not the claim holds up, we can't ignore the fact that the higher the number, the more appealing it is to consumers.
The problem with higher-megapixel cameras, however, is that each of their pixels captures less light than the pixels of a lower-megapixel camera on the same sensor size. We all know that light is the greatest factor when you want to capture the best-quality photo. Low-light photography on mobile has struggled for years, and flash photography doesn't fully resolve the issue. So the question is: how do you solve a problem where two important aspects of photography are inversely proportional to each other? The higher the pixel count, the less light each pixel gathers, resulting in less-detailed low-light photos; the more light each pixel gathers, the lower the pixel count, which results in a blurry image when you zoom in. Let's discuss the former.
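The trade-off is easy to see with a little arithmetic. Below is a back-of-the-envelope sketch in Python that divides a fixed sensor area among different pixel counts; the sensor dimensions are assumed, typical-phone-sized figures for illustration, not the specs of any particular device.

```python
# Assumed sensor size: roughly 5.6 mm x 4.2 mm, in the ballpark of a
# common phone sensor of this era (illustrative numbers only).
sensor_w_mm, sensor_h_mm = 5.6, 4.2
sensor_area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)

# Same sensor area split among more pixels -> less light-gathering
# area (and so less light) per pixel.
for megapixels in (12, 48, 108):
    pixels = megapixels * 1_000_000
    area_per_pixel = sensor_area_um2 / pixels  # square microns per pixel
    print(f"{megapixels} MP -> {area_per_pixel:.2f} um^2 per pixel")
```

Quadrupling the pixel count on the same sensor quarters the area each pixel has to collect light, which is exactly the inverse relationship described above.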
A lot of you may have wondered why your phone lags a little after capturing a photo from the camera app. A decade ago the reason was obvious, since smartphones were still slow, but why does it still happen today, when smartphones are hundreds of times faster? The answer is post-processing. As the word implies, post-processing is the enhancement your smartphone applies to the photo after you take the shot. Since not all of us are photo editors, our smartphones save us the time of enhancing the captured photo ourselves. I will show you an example: Night Mode on the iPhone 11 series.
Below is a photo taken from a normal shot.
You can see that the iPhone 11's normal low-light photography is poor, resulting in a grainy shot that loses even more detail when zoomed in.
Now let's take a look at what happens when we use the flash.
Now the photo is bright and we can see clearer details, but there is still a problem: color accuracy. As you can see, the image is washed out, which is very common with flash photography. The next image shows colors much closer to the original.
Now this is the iPhone 11's Night Mode. The result is night and day compared with the normal shot taken on the same phone. Take note: this was taken without using the phone's flash. You might be wondering how. Computational photography. By combining the advances in the phone's camera with the processing power and AI capabilities of its chipset, the result is this very detailed image. It's hard to believe how the Lego toy's face and clothing details are recovered when, in the normal shot, the image is just a blur. That's just one demonstration of what artificial intelligence can do nowadays. Building on the hardware's advancements, the software side is now gaining traction of its own.
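Under the hood, night modes like this typically merge a burst of short exposures so that random sensor noise averages out. Here is a minimal, synthetic sketch of that stacking idea in Python; the scene, noise level, and frame count are made-up numbers, and real pipelines also align the frames against hand shake and tone-map the merged result.

```python
import numpy as np

rng = np.random.default_rng(0)

scene = np.full((64, 64), 50.0)  # dim, uniform "true" scene (synthetic)
noise_sigma = 20.0               # per-frame sensor noise (assumed)
n_frames = 16

# Capture n_frames noisy exposures of the same scene. This assumes
# perfect alignment; real night modes must register frames first.
frames = scene + rng.normal(0.0, noise_sigma, (n_frames, 64, 64))

single = frames[0]
stacked = frames.mean(axis=0)

# Averaging N frames cuts random noise by roughly sqrt(N),
# so the stacked frame is far less grainy than any single one.
print(f"single-frame noise:  {single.std():.1f}")
print(f"stacked-frame noise: {stacked.std():.1f}")
```

With 16 frames, the residual noise drops to about a quarter of the single-shot noise, which is why the stacked result looks so much cleaner without needing a flash.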