Smartphone Camera Technology from Google and Nokia

A few days ago Nokia unveiled its new smartphone: the Nokia 9 PureView. It looks kind of weird (or maybe funky?) with its five cameras on its rear (see image above). But what’s interesting is how Nokia uses these five cameras to give you better-quality photos with a technique called High Dynamic Range (HDR) imaging.

HDR has been around in smartphones for a while, though. In fact, Google has had this imaging technique available in some of its phones since at least 2014. And in my opinion Google does a much better job with it than Nokia.

In this post I would like to discuss what HDR is and then present what Nokia and Google are doing with it to provide some truly amazing results. I will break the post up into the following sections:

  • High Dynamic Range Imaging (what it is)
  • The Nokia 9 PureView
  • Google’s HDR+ (some amazing results here)

High Dynamic Range Imaging

I’m sure you’ve attempted to take photos of scenes with a high luminosity range, such as dimly lit rooms or scenes with a brightly radiant backdrop. Frequently such photos come out overexposed, underexposed and/or blurred. The foreground, for example, might be completely in shadow, or details might be blurred because it’s hard to keep the camera still when the shutter speed is set low to let in extra light.

HDR attempts to alleviate these problems by capturing additional shots of the same scene (at different exposure levels, for instance) and then merging the best parts of each photo into one picture.
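
The merging step can be sketched in a few lines. Below is a minimal, illustrative version of exposure fusion in Python (a simplified take on Mertens-style “well-exposedness” weighting, not any vendor’s actual pipeline): each pixel is weighted by how close it is to mid-grey, so blown-out and crushed pixels contribute little.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Merge differently exposed frames (pixel values in [0, 1]) into one image.

    Each pixel is weighted by its 'well-exposedness': values near mid-grey
    (0.5) get a high weight; blown-out or crushed values get a low one.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])   # (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)              # normalise per pixel
    return (weights * stack).sum(axis=0)

# Toy example: one underexposed and one overexposed "frame" of the same scene.
dark = np.array([[0.05, 0.40]])    # first pixel crushed, second well exposed
bright = np.array([[0.50, 0.98]])  # first pixel well exposed, second blown out
fused = fuse_exposures([dark, bright])
```

For each pixel, the fused result leans towards whichever frame exposed that pixel best.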

Photo by Gustave Le Gray (image taken from Wikipedia)

Interestingly, the idea of taking multiple shots of a scene to provide a better single photo goes back to the 1850s. Gustave Le Gray, a highly noted French photographer, rendered seascapes showing both the sky and the sea by using one negative for the sky, and another one with a longer exposure for the sea. He then combined the two into one picture in the positive. Quite innovative for the period. The picture on the right was captured by him using the HDR technique.


The Nokia 9 PureView

As you’ve probably already guessed, Nokia uses the five cameras on the Nokia 9 PureView to take photos of the same scene. However, the cameras are not all the same. Two are standard RGB sensors that capture colour. The remaining three are monochrome sensors that capture nearly three times as much light as the RGB cameras. All five cameras have a resolution of 12 megapixels. There is also an infrared sensor for depth readings.

Depending on the scene and lighting conditions each camera can be triggered up to four times in quick succession (commonly referred to as burst photography).

One colour photo is then selected to act as the primary shot and the other photos are used to improve it with details.
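
Nokia hasn’t published how the primary shot is chosen, but a plausible criterion is sharpness. Purely as an illustration, here is a common sharpness heuristic (variance of the Laplacian) used to pick a primary frame from a burst:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian: a rough sharpness score."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def pick_primary(frames):
    """Index of the sharpest frame, to act as the primary shot."""
    return int(np.argmax([laplacian_variance(f) for f in frames]))

# Toy frames: a featureless one and one containing a sharp edge.
flat = np.ones((8, 8))
edgy = np.ones((8, 8))
edgy[:, 4:] = 0.0
primary = pick_primary([flat, edgy])  # the edge-bearing frame wins
```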

The final result is a photo built from up to 240 megapixels of data (five cameras × 12 megapixels × up to four shots each). Interestingly, you also have control over how much photo merging takes place and where this merging occurs. For example, you can choose to add additional detail to the foreground and ignore the background. The depth map from the depth sensor undoubtedly assists in this. And yes, you have access to all the RAW files taken by the cameras.

Not bad, but in my opinion Google does a much better job… and with only one camera. Read on!

Google’s HDR+

Google’s HDR technology is dubbed HDR+. It has been around for a while, first appearing in the Nexus 5 and 6 phones. It is now standard on the Pixel range of phones, and it can be standard because HDR+ needs only the regular single camera on Google’s phones.

It gets away with using just one camera by taking up to 10 photos in quick succession, more frames per camera than Nokia’s four. Although the megapixel count of the resulting photos may not match Nokia’s, the results are nonetheless impressive. Just take a look at this:

(image taken from here)

That is a dimly lit indoor scene. The final result is truly astonishing, isn’t it?
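
A key reason merging a burst works so well in dim scenes is noise reduction: averaging N aligned frames cuts random sensor noise by roughly √N. (HDR+ actually does careful tile-based alignment and robust merging; the plain averaging below is only a toy illustration of the principle, with simulated noise.)

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 0.2)  # a dim, uniform scene

# Simulate a burst of 10 equally exposed, noisy frames of that scene.
burst = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(10)]

single_noise = float(np.std(burst[0] - scene))
merged = np.mean(burst, axis=0)               # naive merge: plain averaging
merged_noise = float(np.std(merged - scene))  # roughly single_noise / sqrt(10)
```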

Here’s another example:

Both pictures were taken with the same camera. The picture on the left was captured with HDR+ turned off while the picture on the right had it turned on. (image taken from here)

What makes HDR+ stand out from the crowd is its academic background. This isn’t some black-box technology that we know nothing about; it has been peer-reviewed by world-class academics and published at a world-class conference (SIGGRAPH Asia 2016).

Moreover, only last month Google released a public dataset of image bursts to help improve this technology.

When Google does something, it (usually) does it with a bang. You have to love this. This is HDR imaging done right.



Samsung’s vs Apple’s face recognition technologies – and how they have been fooled

In September 2017 Apple announced the iPhone X with a very neat feature called Face ID. This feature recognises your face to allow you to unlock your phone. Samsung, however, has had facial recognition since the release of Android Ice Cream Sandwich way back in 2011. What is the difference between the two technologies? And how can either of them be fooled? Read on to find out.

Samsung’s Face Recognition

Samsung’s Face Unlock feature works by using the regular front camera of your phone to take a picture of your face. It analyses this picture for facial features such as the distance between the eyes, facial contours, iris colour, iris size, etc. This information is stored on your phone so that the next time you try to unlock it, the phone takes a picture of you, processes it for the aforementioned features and then compares the result to the information it has stored. If everything matches, your phone is unlocked.
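
Samsung hasn’t published the details of this matching step, but conceptually it boils down to comparing feature vectors. A hypothetical sketch (the feature values, threshold and distance metric are all made up for illustration):

```python
import numpy as np

def matches(stored, candidate, threshold=0.1):
    """Unlock only if the fresh capture's features are close to the template."""
    return bool(np.linalg.norm(stored - candidate) < threshold)

# Made-up feature vectors (e.g. normalised distances between facial landmarks).
enrolled = np.array([0.42, 0.31, 0.77])    # template stored at enrolment
same_face = np.array([0.43, 0.30, 0.76])   # slightly different capture, same person
other_face = np.array([0.60, 0.20, 0.50])  # a different person
```

Note that a 2D photo of the owner would produce nearly the same feature vector as the owner’s face, which is exactly the weakness described below.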

The only problem is that all processing is done on 2D images. So, as you may have guessed, a simple printed photo of your face, or even one displayed on another phone, will fool the system. Need proof? Here’s a video of someone unlocking a Galaxy Note 8 with a photo shown on another phone. It’s quite amusing.

A “liveness check” was added to Face Unlock with the release of Android Jelly Bean in 2012. It works by attempting to detect blinking. I haven’t tried this feature, but from what I’ve read on forums it isn’t very accurate and takes longer to process your face, which is probably why it isn’t turned on by default. And yes, it can also be fooled by a close-up video of you, though that would be much harder to acquire.
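
Samsung hasn’t documented how its blink detection works, but a common way to detect blinks from facial landmarks is the eye aspect ratio (EAR) of Soukupová and Čech: the ratio of the eye’s vertical opening to its horizontal width, which drops sharply when the eye closes. A sketch with invented landmark coordinates:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks, ordered around the eye.

    (sum of vertical openings) / (2 * horizontal width); drops sharply on a blink.
    """
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return float((a + b) / (2.0 * c))

# Invented landmark coordinates for an open and a nearly closed eye.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], float)
```

A liveness check would watch this ratio over a few frames and require it to dip below a threshold and recover, i.e. an actual blink.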

Note: Samsung is aware of the security flaws of Face Unlock, which is why it does not allow identity verification for Samsung Pay to be made using it. Instead it advocates for the use of its iris recognition technology. But is that technology free from flaws? No chance, as a security researcher from Berlin has shown. He took a photo of his friend’s eye from a few metres away (!) in infrared mode (i.e. night mode), printed it out on paper, and then stuck a contact lens on the printed eye. Clever.

Apple’s Face ID

This is where the fun begins. Apple really took this feature seriously. In a nutshell, Face ID works by first illuminating your face with infrared (IR) light, which is invisible to the naked eye, and then projecting a further 30,000 (!) IR dots onto your face to build a super-detailed 3D map of your facial features. Quite impressive.

This technology, however, has been in use for a very long time. If you’re familiar with the Kinect camera/sensor (initially released in 2010), it uses the same concept of infrared point projection to capture and analyse 3D motion.
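
The depth recovery behind this kind of structured-light system comes down to triangulation: the dot projector and the IR camera form a stereo pair, so the sideways shift (disparity) of each observed dot encodes its depth. A toy calculation (the focal length and baseline below are invented for illustration, not Apple’s actual values):

```python
def depth_from_disparity(disparity_px, focal_px=1400.0, baseline_m=0.04):
    """Depth of a projected dot from its observed shift.

    focal_px (camera focal length in pixels) and baseline_m (projector-to-camera
    distance in metres) are made-up values for illustration only.
    """
    return focal_px * baseline_m / disparity_px

# A dot that shifts by 112 px sits at 1400 * 0.04 / 112 = 0.5 m from the camera.
```

Repeating this for all 30,000 dots yields the 3D map of the face; a flat photo would give every dot the same depth, which is why 2D spoofs fail here.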

So, how do you fool the ‘TrueDepth camera system’, as Apple calls it? It’s not easy because this technology is quite sophisticated. But successful attempts have already been documented in 2017.

To start off with, here’s a video showing identical twins unlocking each other’s phones. Also quite amusing. How about relatives that look similar? It’s been done! Here’s a video showing a 10-year-old boy unlocking his mother’s phone. Now that’s a little more worrisome. However, it shows that the iPhone X can be an alternative to DNA paternity/maternity tests 🙂 Finally, in November 2017, Vietnamese hackers posted a video documenting how their 3D-printed face mask fooled Apple’s technology. Some elements of the mask, like the eyes, were printed on a standard colour printer. The model of the face was acquired in 5 minutes using a hand-held scanner.


In September 2017 Apple released a new facial recognition feature with its iPhone X called Face ID. It works by projecting IR light onto your face to build a detailed 3D map of it. It is hard to fool, but successful attempts were documented in 2017. Samsung’s facial recognition system, called Face Unlock, has been around since 2011. It, however, only analyses 2D images and hence can easily be duped with a printed photo or another phone showing the owner’s face.
