Reflection-in-the-eye

Ever thought about whether you could zoom in on someone’s eye in a photo and analyse what is reflected in it? Read on to find out what researchers have done in this area!

A previous post of mine discussed the idea of enhancing regions of an image for better clarity, much like we often see in Hollywood films. While researching for that post, I stumbled upon an absolutely amazing academic publication from 2013.

The publication in question is entitled “Identifiable Images of Bystanders Extracted from Corneal Reflections” (R. Jenkins & C. Kerr, PLoS ONE 8, no. 12, 2013). In the experiments that Jenkins and Kerr performed, passport-style photographs were taken of volunteers while a group of bystanders stood behind the camera watching. The volunteers’ eyes were then zoomed in on, and the faces of the onlookers reflected in them were extracted, as shown in the figure below:

[Figure: progressively zooming in on a volunteer’s eye until the reflected faces of the bystanders become visible]
(Image adapted from the original publication)

Freaky stuff, right!? Despite the fact that these reflections comprised only 0.5% of the initial image size, you can quite clearly make out what is reflected in the eye. The experiments also showed that the bystanders were not only visible but identifiable. Unfortunately, the experiments used only a small pool of participants, which limits the statistical weight of the results (and the journal’s 2016 impact factor of 2.8 speaks for itself) – but who cares?! The coolness factor of what they did is through the roof! Just take a look at the row of faces below that they managed to extract from a few reflections captured by the cameras. Remember, these are reflections located on eyeballs:

[Figure: a row of bystander faces extracted from corneal reflections]
(Image taken from the original publication)
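As an aside, the first step of this kind of analysis is just the classic “zoom and enhance”: crop the eye region and upsample it. Here’s a minimal Python sketch using Pillow; the file name and crop coordinates are placeholders, and in a real pipeline you would locate the eye with a face/eye detector:

```python
from PIL import Image

# Load a high-resolution photograph (placeholder file name).
img = Image.open("portrait.jpg")

# Hypothetical bounding box (left, upper, right, lower) around one eye.
eye = img.crop((2450, 1800, 2650, 1950))

# Upsample the tiny crop 8x with bicubic interpolation. Interpolation only
# smooths pixels that are already there; it cannot invent detail the sensor
# never captured, which is why the source resolution matters so much.
w, h = eye.size
zoomed = eye.resize((w * 8, h * 8), resample=Image.BICUBIC)
zoomed.save("eye_zoomed.png")
```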

With respect to interesting uses for this research, the authors state the following:

our findings suggest a novel application of high-resolution photography: for crimes in which victims are photographed, corneal image analysis could be useful for identifying perpetrators.

Imagine a hostage-taker photographing their victim and then being identified from the reflection in the victim’s eye!

But it gets better. When discussing future work, they mention that 3D reconstruction of the reflected scene could be possible if stereo images from the reflections in both eyes are combined. This is technically possible (we’re venturing into work I did for my PhD), but you would need much higher resolution and detailed data on the outer shape of the person’s eye because, believe it or not, we each have a differently shaped eyeball.
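For the curious, the underlying idea is standard two-view triangulation. Below is a minimal Python sketch using OpenCV’s cv2.triangulatePoints with entirely made-up projection matrices and point matches; in reality the cornea acts as a curved mirror, so the projection matrices would have to come from a per-person model of eyeball shape rather than the simple pinhole setup assumed here:

```python
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices for the two "virtual cameras" formed
# by the left and right corneal reflections (identity rotation and a rough
# ~60 mm interocular baseline, purely for illustration).
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])

# Matched 2D locations of the same scene features in each reflection
# (made-up values). Shape is 2xN: row 0 = x coordinates, row 1 = y.
pts_left = np.array([[102.0, 98.0],
                     [55.0, 60.0]])
pts_right = np.array([[95.0, 91.0],
                      [54.0, 59.0]])

# Classic two-view triangulation: returns 4xN homogeneous 3D coordinates.
pts_4d = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
pts_3d = (pts_4d[:3] / pts_4d[3]).T  # divide out the homogeneous coordinate
print(pts_3d)
```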

Is there a catch? Yes, unfortunately so. I’ve purposely left this part to the very end because most people don’t read this far down a page and I didn’t want to spoil the fun for anyone 🙂 But the catch is this: the Hasselblad H2D camera used in this research produces images at super-high resolution: 5,412 × 7,216 pixels. That’s a whopping 39 megapixels! In comparison, the iPhone X camera takes pictures at 12 megapixels. And the Hasselblad camera is ridiculously expensive at US$25,000 for a single unit. However, as the authors state, the “pixel count per dollar for digital cameras has been doubling approximately every twelve months”, which means that, if this trend continues, sooner or later we will be sporting such 39-megapixel cameras on our standard phones. Nice!
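If you want to sanity-check those numbers, here’s the back-of-the-envelope arithmetic in Python (the doubling-time estimate simply extrapolates the per-dollar trend the authors quote, so take it with a grain of salt):

```python
import math

# Hasselblad H2D sensor resolution quoted above.
width, height = 5412, 7216
megapixels = width * height / 1e6
print(f"Hasselblad H2D: {megapixels:.1f} MP")  # -> 39.1 MP

# If pixel counts keep doubling roughly every twelve months, how many
# doublings (i.e. years) until a 12 MP phone camera reaches 39 MP?
doublings = math.log2(megapixels / 12)
print(f"About {doublings:.1f} years at one doubling per year")  # -> ~1.7
```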

Summary

Jenkins and Kerr showed in 2013 that extracting reflections on eyeballs from photographs is not only possible, but that the faces in these reflections can be identifiable. This could prove useful in the future for police trying to catch kidnappers or child sex abusers, who frequently take photos of their victims. The only caveat is that for this to work, images need to be of super-high resolution. But considering how our phone cameras are improving at a regular rate, we may not be too far away from the ubiquity of such technology. To conclude, Jenkins and Kerr get the Nobel Peace Prize for Awesomeness from me for 2013 – hands down winners.

 
