Recent Controversies in Computer Vision

Computer vision is a fascinating area in which to work and do research. And, as I’ve mentioned a few times already, it’s been a pleasure to witness its phenomenal growth, especially over the last few years. However, as with pretty much anything in this world, controversy also plays a part in its existence.

In this post I would like to present 2 very recent events from the world of computer vision that have caused controversy:

  1. A judge’s ruling that Facebook must stand trial for its facial recognition software
  2. Uber’s autonomous car death of a pedestrian

Facebook and Facial Recognition

This is an event that has seemingly passed under the radar – at least for me it did. Probably because of the Facebook-Cambridge Analytica scandal that has been recently flooding the news and social discussions. But I think this is also an important event to mull over because it touches upon underlying issues associated with an important topic: facial recognition and privacy.

So, what has happened?

In 2015, Facebook was hit with a class action lawsuit (the original can be found here) by three residents of Chicago, Illinois. They accuse Facebook of violating the state’s biometric privacy laws by collecting and storing biometric data derived from each user’s face. This data is stored without written notification. Moreover, it is not clear exactly what the data is to be used for or how long it will reside in storage, and no opt-out option was ever provided.

Facebook began to collect this data, as the lawsuit states, in a “purported attempt to make the process of tagging friends easier”.


In other words, what Facebook is doing (yes, even now) is summarising the geometry of your face with certain parameters (e.g. the distance between your eyes, the shape of your chin, etc.). This data is then used to try to locate your face elsewhere to provide tag suggestions. But for this to be possible, the biometric data needs to be stored somewhere so that it can be recalled when needed.
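As a toy illustration of the idea (the landmark names and pairs below are my own assumptions, not Facebook’s actual template), summarising a face’s geometry might look something like this:

```python
import numpy as np

def face_signature(landmarks):
    """Summarise face geometry as distances between landmark pairs.

    `landmarks` maps feature names to (x, y) pixel coordinates.
    The pairs below (eyes, nose, chin) are purely illustrative.
    """
    pairs = [("left_eye", "right_eye"),
             ("left_eye", "nose_tip"),
             ("right_eye", "nose_tip"),
             ("nose_tip", "chin")]
    pts = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    return np.array([np.linalg.norm(pts[a] - pts[b]) for a, b in pairs])

# A signature computed from one (made-up) set of landmark positions.
sig = face_signature({"left_eye": (100, 120), "right_eye": (160, 118),
                      "nose_tip": (130, 160), "chin": (132, 220)})
```

Two photos of the same face should produce similar signatures – which is what makes tag suggestions possible, and also why storing this data raises privacy concerns.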

The Illinois residents are not happy that a firm is doing this without their knowledge or consent. Considering the Cambridge Analytica scandal, you would think they have a point. Who knows where this data could end up? They are suing for $75,000 and have requested a jury trial.

Anyway, Facebook protested this lawsuit and asked that it be thrown out of court, stating that the law in question does not cover its tag suggestion feature. A year ago, a District Judge rejected Facebook’s motion.

Facebook appealed again, stating that proof of actual injury needs to be shown. Wow! As if violating privacy isn’t injurious enough!?

But on the 14th of May, the same judge dismissed (official ruling here) Facebook’s appeal:

[It’s up to a jury] to resolve the genuine factual disputes surrounding facial scanning and the recognition technology.

So, it looks like Facebook will be facing the jury on July 9th this year! Huge news, in my opinion. Even if any verdict will only pertain to the United States. There is still so much that needs to be done to protect our data but at least things seem to be finally moving in the right direction.

Uber’s Autonomous Car Death of a Pedestrian

You probably heard on the news that on March 18th this year a woman was hit by an autonomous car owned by Uber in Arizona as she was crossing the road. She died in hospital shortly after the collision. This is believed to be the first ever pedestrian fatality involving an autonomous car. There have been other deaths in the past (3 in total) but all of them were of the driver.

(image taken from the US NTSB report)

3 weeks ago the US National Transportation Safety Board (NTSB) released its first report (a short read) into this crash. It is only a preliminary report but it provides enough information to state that the self-driving system was at least partially at fault.

(image taken from the US NTSB report)

The report gives the timeline of events: the pedestrian was detected about 6 seconds before impact but the system had trouble identifying her. She was first classified as an unknown object, then as a vehicle, then as a bicycle – and even then the system couldn’t work out the object’s direction of travel. At 1.3 seconds before impact, the system determined that an emergency braking maneuver was needed, but this maneuver had been disabled earlier to prevent erratic vehicle behaviour on the roads. Moreover, the system was not designed to alert the driver in such situations. The driver began braking less than 1 second before impact but it was tragically too late.

Bottom line is, if the self-driving system had immediately recognised the object as a pedestrian walking directly into its path, it would have known that avoidance measures needed to be taken – well before the emergency braking maneuver was called upon. This is a deficiency of the artificial intelligence implemented in the car’s system.
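To make the failure mode concrete, here is a toy time-to-collision check – purely illustrative and in no way Uber’s actual control logic; only the 1.3-second figure is taken from the report’s timeline:

```python
def braking_decision(distance_m, speed_mps, braking_enabled=True,
                     ttc_threshold_s=1.3):
    """Toy decision rule loosely modelled on the NTSB timeline.
    All thresholds and behaviour here are illustrative assumptions."""
    if speed_mps <= 0:
        return "no_action"
    ttc = distance_m / speed_mps  # seconds until impact at current speed
    if ttc <= ttc_threshold_s:
        # The report notes the real system had this maneuver disabled
        # and did not alert the driver in such situations.
        return "emergency_brake" if braking_enabled else "no_action"
    return "monitor"
```

With the maneuver disabled, the system falls through to doing nothing at exactly the moment braking is needed – which is the gap the report describes.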

No statement has been made with respect to who is legally at fault. I’m no expert but it seems like Uber will be given the all-clear: the pedestrian had hard drugs in her blood and was crossing in an area of the road not designated for crossing.

Nonetheless, this is a significant event for AI and computer vision (which plays a pivotal role in self-driving cars) because if these technologies had performed better, the crash would have been avoided (as researchers have shown).

Big ethical questions are being taken seriously. For example, who will be held accountable if a fatal crash is deemed to be the fault of the autonomous car? The car manufacturer? The people behind the algorithms? One sole programmer who messed up a for-loop? Stanford scholars have been openly discussing the ethics behind autonomous cars for a long time (it’s an interesting read, if you have the time).

And what will be the future for autonomous cars in the aftermath of this event? Will their inevitable delivery into everyday use be pushed back?

Testing of autonomous cars has been halted by Uber in North America, and Toyota has followed suit. And Chris Jones, who leads the Autonomous Vehicle Analysis service at the technology analyst company Canalys, says that these events will set the industry back considerably:

It has put the industry back. It’s one step forward, two steps back when something like this happens… and it seriously undermines trust in the technology.

Furthermore, a former US Secretary of Transportation has deemed the crash a “wake up call to the entire [autonomous vehicle] industry and government to put a high priority on safety.”

But other news reports seem to indicate a different story.

Volvo, the make of the car involved in Uber’s fatal crash, stated only last week that it expects a third of its cars sold to be autonomous by 2025. Other car manufacturers are making similar announcements. Two weeks ago General Motors and Fiat Chrysler unveiled self-driving deals with companies like Google to push for a lead in the self-driving car market.

And Baidu (China’s Google, so to speak) is heavily invested in the game, too. Even Chris Jones is admitting that for them this is a race:

The Chinese companies involved in this are treating it as a race. And that’s worrying. Because a company like Baidu – the Google of China – has a very aggressive plan and will try to do things as fast as it can.

And when you have a race among large corporations, there isn’t much that is going to even slightly postpone anything. That’s been my experience in the industry anyway.


In this post I looked at 2 very recent events from the world of computer vision that have caused controversy.

The first was a judge’s ruling in the United States that Facebook must stand trial for its facial recognition software. Facebook is accused of violating Illinois’ biometric privacy laws by collecting and storing biometric data derived from each user’s face. This data is stored without written notification. Moreover, it is not clear exactly what the data is being used for or how long it will reside in storage, and no opt-out option was ever provided.

The second event was the first recorded death of a pedestrian by an autonomous car in March of this year. A preliminary report was released by the US National Transportation Safety Board 3 weeks ago that states that AI is at least partially at fault for the crash. Debate over the ethical issues inherent to autonomous cars has heated up as a result but it seems as though the incident has not held up the race to bring self-driving cars onto our streets.


To be informed when new content like this is posted, subscribe to the mailing list:

Please share what you just read:

Gait Recognition – Another Form of Biometric Identification

I was watching The Punisher on Netflix last week and there was a scene (no spoilers, promise) in which someone was recognised from CCTV footage by the way they were walking. “Surely, that’s another example of Hollywood BS”, I thought to myself – “there’s no way that’s even remotely possible”. So, I spent the last week researching this – and to my surprise it turns out that it’s not a load of garbage after all! Gait recognition is another legitimate form of biometric identification/verification.

In this post I’m going to present to you my past week’s research into gait recognition: what it is, what it typically entails, and what the current state-of-the-art is in this field. Let me just say that what scientists are able to do now in this respect surprised me immensely – I’m sure it’ll surprise you too!

Gait Recognition

In a nutshell, gait recognition aims to identify individuals by the way they walk. It turns out that our walking movements are quite unique, a little like our fingerprints and irises. Who knew, right!? Hence, there has been a lot of research in this field in the past two decades.

There are significant advantages to this form of identity verification: it can be performed from a distance (e.g. using CCTV footage), it is non-invasive (i.e. the person may not even know that they are being analysed), and it does not necessarily require high-resolution images to obtain good results.

The Framework for Automatic Gait Recognition

Trawling through the literature on the subject, I found that scientists have used various ways to capture people’s movements for analysis, e.g. using 3D depth sensors or even using pressure sensors on the floor. I want to focus on the use case shown in The Punisher where recognition was performed from a single, stationary security camera. I want to do this simply because CCTV footage is so ubiquitous today and because pure and neat Computer Vision techniques can be used on such footage.

In this context, gait recognition algorithms are typically composed of three steps:

  1. Pre-processing to extract silhouettes
  2. Feature extraction
  3. Classification

Let’s take a look at these steps individually.

1. Silhouette extraction

Silhouette extraction of subjects is generally performed by subtracting the background image from each frame. Once the background is subtracted, you’re left with foreground objects. The pixels associated with these objects can be coloured white and then extracted.

Background subtraction is a heavily studied field and is by no means a solved problem in Computer Vision. OpenCV provides a few interesting implementations of background subtraction. For example, a background can be learned over time (i.e. you don’t have to manually provide it). Some implementations also allow for things like illumination changes (especially useful for outdoor scenes) and some can also deal with shadows. Which technique is used to subtract the background from frames is irrelevant as long as reasonable accuracy is obtained.

Example of silhouette extraction
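As a minimal sketch of the idea (OpenCV’s learned background subtractors, such as cv2.createBackgroundSubtractorMOG2, are more robust in practice – the static median background here is my own simplifying assumption):

```python
import numpy as np

def extract_silhouette(frames, current, thresh=30):
    """Simple background subtraction: estimate the background as the
    per-pixel median of a stack of frames, then mark as foreground
    every pixel that differs from it by more than `thresh`.

    `frames` is a list of greyscale uint8 images; returns a binary
    mask with foreground (silhouette) pixels set to 255.
    """
    background = np.median(np.stack(frames), axis=0)
    diff = np.abs(current.astype(int) - background)
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

# 4 static background frames, then a frame with a bright "person".
bg = [np.zeros((6, 6), dtype=np.uint8) for _ in range(4)]
frame = np.zeros((6, 6), dtype=np.uint8)
frame[1:5, 2:4] = 200
mask = extract_silhouette(bg, frame)
```

A learned background (as in the OpenCV implementations mentioned above) would additionally cope with illumination changes and shadows, but the output is the same kind of binary mask.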

2. Feature extraction

Various features can be extracted once we have the silhouettes of our subjects. Typically, a single gait period (a gait cycle) is detected first – the sequence of frames showing one step taken with each foot. This is useful because your gait pattern repeats itself, so there’s no need to analyse anything beyond one cycle.

Features from this gait cycle are then extracted. In this respect, algorithms can be divided into two groups: model-based and model-free.

Model-based methods of gait recognition take your gait period and attempt to build a model of your movements. These models, for example, can be constructed by representing the person as a stick-figure skeleton with joints or as being composed of cylinders. Then, numerous parameters are calculated to describe the model. For example, the method proposed in this publication from 2001 calculates the distances between the head and feet, the head and pelvis, and the feet and pelvis, plus the subject’s step length, to describe a simple model. Another model is depicted in the image below:

An example of a biped model with 5 different parameters as proposed in this solution from 2012

Model-free methods work on extracted features directly. Here, undoubtedly the most interesting and most widely used feature extracted from silhouettes is that of the Gait Energy Image (GEI). It was first proposed in 2006 in a paper entitled “Individual Recognition Using Gait Energy Image” (IEEE transactions on pattern analysis and machine intelligence 28, no. 2 (2006): 316-322).

Note: the Pattern Analysis and Machine Intelligence (PAMI) journal is one of the best in the world in the field. Publishing there is a feat worthy of praise. 

The GEI is used in almost all of the top gait recognition algorithms because it is (perhaps surprisingly) intuitive, not too prone to noise, and simple to grasp and implement. To calculate it, frames from one gait cycle are superimposed on top of each other to give an “average” image of your gait. This calculation is depicted in the image below where the GEI for two people is shown in the last column.

The GEI can be regarded as a unique signature of your gait. And although it was first proposed way back in 2006, it is still widely used in state-of-the-art solutions today.

Examples of two calculated GEIs for two different people shown in the far right column. (image taken from the original publication)
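The GEI computation itself is just a pixel-wise average over one gait cycle. A minimal sketch (assuming the silhouettes have already been extracted, aligned, and size-normalised, as the original paper requires):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait Energy Image: the pixel-wise mean of the aligned binary
    silhouette frames covering one gait cycle (Han & Bhanu, 2006).
    `silhouettes` is a list of same-sized arrays with values in {0, 1}."""
    return np.mean(np.stack([s.astype(float) for s in silhouettes]), axis=0)

# Two toy 3x3 silhouettes: a pixel present in both frames gets energy
# 1.0, a pixel present in only one frame gets 0.5.
a = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
b = np.array([[0, 1, 0], [1, 1, 0], [0, 1, 0]])
gei = gait_energy_image([a, b])
```

High-energy pixels correspond to body parts that barely move (head, torso), while mid-energy fringes capture the motion of the limbs – which is where the gait signature lives.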

3. Classification

Once step 2 is complete, identification of subjects can take place. Standard classification techniques can be used here, such as k-nearest neighbours (KNN) and the support vector machine (SVM). These are common techniques used whenever one is dealing with features – any other field that describes its data with features will utilise them too – so they are not constrained to the use case of computer vision. Hence, I will not dwell on this step any longer. I will, however, refer you to a state-of-the-art review of gait recognition from 2010 that lists some more of these common classification techniques.
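As a sketch of this step, here is a bare-bones nearest-neighbour identifier working on flattened GEIs (in practice you would reach for a library implementation such as scikit-learn’s KNeighborsClassifier or an SVM; the tiny 2×2 “GEIs” below are obviously not real gait data):

```python
import numpy as np

def knn_identify(gallery, labels, probe, k=1):
    """Identify a probe GEI by flattening it and comparing it to a
    labelled gallery with Euclidean distance, then taking the majority
    label among the k closest gallery entries."""
    g = np.stack([x.ravel() for x in gallery]).astype(float)
    d = np.linalg.norm(g - probe.ravel().astype(float), axis=1)
    nearest = [labels[i] for i in np.argsort(d)[:k]]
    return max(set(nearest), key=nearest.count)

# Toy gallery: two subjects with distinctive "GEIs".
gallery = [np.array([[1.0, 0.0], [1.0, 0.0]]),   # subject A
           np.array([[0.0, 1.0], [0.0, 1.0]])]   # subject B
probe = np.array([[0.9, 0.1], [1.0, 0.0]])       # noisy view of A
who = knn_identify(gallery, ["A", "B"], probe)
```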

So, how good is gait recognition then?

We’ve briefly taken a look at how gait recognition algorithms work. Let’s now take a peek at how good they are at recognising people.

We’ll first turn to some recent news. Only 2 months ago (October, 2017) Chinese researchers announced that they had developed the best gait recognition algorithm to date. They claim that their system works with the subject up to 50 metres away and that detection times have been reduced to just 200 milliseconds. If you read the article, you will notice that no data/results are presented, so we can’t really investigate their claims. We have to turn to academia for hard evidence of what we’re seeking.

“Gaitgan: invariant gait feature extraction using generative adversarial networks” (Yu et al., IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 30-37, 2017) is the latest top publication on this topic. I won’t go through their proposed algorithm (it is model-free and uses the GEI); I will just present their results – which are in fact quite impressive.

To test their algorithm, the authors used the CASIA-B dataset. This is one of the largest publicly available datasets for gait recognition. It contains video footage of 124 subjects walking across a room, captured from various angles ranging from front-on to side-on to top-down views. Not only this, but each person’s walk is also recorded while wearing a coat and then while carrying a backpack, which adds additional difficulty to gait recognition. And the low resolution of the videos (320×240 – a decent resolution in 2005 when the dataset was released) makes them ideal for testing gait recognition algorithms, considering that CCTV footage generally has low quality too.

Three example screenshots from the dataset are shown below. The frames are of the same person from a side-on view. The second and third images show the subject wearing a coat and carrying a bag, respectively.

Example screenshots from the CASIA B dataset of the same person walking.

Recognition rates with front-on views with no bag or coat linger around 20%-40% (depending on the height of the camera). Rates then gradually increase as the angle nears the side-on view (that gives a clear silhouette). At the side-on view with no bag or coat, recognition rates reach an astounding 98.75%! Impressive and surprising.

When it comes to analysing the clips with the people carrying a bag and wearing a coat, results are summarised in one small table that shows only a few indicative averages. Here, recognition rates obviously drop but the top rates (obtained with side-on views) persist at around the 60% mark.

What can be deduced from these results is that if the camera distance and angle and other parameters are ideal (e.g. the subject is not wearing/carrying anything concealing), gait recognition works amazingly well for a reasonably sized subset of people. But once ideal conditions start to change, accuracy gradually decreases to (probably) inadequate levels.

And I will also mention (something you may have already gathered) that these algorithms only work if the subject is acting normally. That is, they work if the subject is not changing the way he usually walks, for example by walking faster (maybe as a result of stress) or by consciously trying to defeat gait recognition algorithms (like we saw in The Punisher!).

However, an accuracy rate of 98.75% with side-on views shows great potential for this form of identification and because of this, I am certain that more and more research will be devoted to this field. In this respect, I will keep you posted if I find anything new and interesting on this topic in the future!


Gait recognition is another form of biometric identification – a little like iris scanning and fingerprints. Interesting computer vision techniques are utilised on single-camera footage to obtain sometimes 99% recognition results. These results depend on such things as camera angles and whether subjects are wearing concealing clothes or not. But much like other recognition techniques (e.g. face recognition), this is undoubtedly a field that will be further researched and improved in the future. Watch this space.



Thermal Imaging and Lie Detection – A Task for Computer Vision

Can thermal imaging detect if you’re lying or not? It sure can! And are there frightening prospects with respect to this technology? Yes, there are! Read on to find out what scientists have recently done in this area – and all using image processing techniques.

Thermal Imaging

Thermal imaging (aka infrared thermography, thermographic imaging, and infrared imaging) is the science of analysing images captured from thermal (infrared) cameras. The images returned by these cameras capture infrared radiation not visible to the naked eye that are emitted by objects. All objects above absolute zero (-273.15 °C or −459.67°F) emit such radiation. And the general rule is that the hotter an object is, the more infrared radiation it emits.

There has been some amazing work done recently by scientists with respect to thermal imaging and deception detection. These scientists have managed to construct sophisticated lie detectors with their thermal cameras. And what is interesting for us is that these lie detectors work by using many standard Computer Vision techniques. In fact, thermal imaging is a beautiful example of where Computer Vision techniques can be used on images that do not come from “traditional” cameras.

This post is going to analyse how these lie detectors work and underline the Computer Vision techniques that are being used by them. The latter part of the post will extrapolate how thermal imaging might impact us in the future – and these predictions are quite frightening/exciting, to say the least.

Lie Detectors

The idea behind lie detectors is to detect minor physiological changes, such as an increase in blood pressure, pulse or respiration, that can occur when we experience a certain anxiety, shame or nervousness when dropping a fib (aka telling a porky, taking artistic license, being Tony Blair, etc.). These are such slight physiological changes in us that instruments that measure them need to be very precise.

We’ve all seen polygraphs being used in films to detect deception. An expert sits behind a machine and looks at readings from sensors that are connected to the person being interrogated. Although accuracy is said to be at around 90% in detecting lies (according to a few academic papers I studied), the problem is that highly trained experts are required to interpret results – and this interpretation can take hours in post-interview analysis. Moreover, polygraph tests require participants’ cooperation in that they need to be physically connected to these sensors. They’re what’s called ‘invasive’ procedures.

A polygraph test being conducted in 1935 (Image source: Wikipedia)

Thermal imaging attempts to alleviate these problems. The idea is to detect changes in the surface temperature of the skin caused by the effects of lying. Since all one needs is a thermal camera to observe the participant, the procedure is non-invasive. And because of this, the person being interrogated can be oblivious to the fact that he’s being scrutinised for lying. Moreover, the process can be automated with image/video processing algorithms – no experts required for analysis!

Computer Vision in Thermal Imaging

Note: Although I gloss over a lot of technical details in this section, I still assume here that you have a little bit of computer vision knowledge. If you’re here for the discussion on how thermal imagery could be used in the future, skip to the next section. 

There are some interesting journal papers on deception detection using thermal imagery and computer vision algorithms. For example, “Thermal Facial Analysis for Deception Detection” (Rajoub, Bashar A., and Reyer Zwiggelaar. IEEE Transactions on Information Forensics and Security 9, no. 6 (2014): 1015-1023) reports an accuracy of 87% on 492 responses (249 lies and 243 truths). Machine learning techniques were used to build models of deceptive/non-deceptive responses, which were then utilised to classify responses.

But I want to look at a paper published internally by the Faculty of Engineering at the Pedagogical and Technological University of Colombia in South America. It’s not a very “sophisticated” publication (e.g. axes are not labelled, statistical significance of results is not presented, the face detection algorithm is a little dubious, etc.) but the computer vision techniques used are much more interesting to analyse.

The paper in question is entitled “Detection of lies by facial thermal imagery analysis” published this year (2017) by Bedoya-Echeverry et al. The authors used a fairly low-resolution thermal camera (320×240 pixels) to obtain comparable results to polygraph tests: 75% success rate in detecting lies and 100% success rate in detecting truths.

I am going to work through a simplified version of the algorithm presented by the authors. The full version involves a few extra calculations but what I present here is the general framework of what was done. I’ll give you enough to show you that implementing a thermal-based lie detector is a trivial task (once you can afford to purchase the $3,000 thermal camera).

This simplified algorithm can be divided into two stages:

  1. Face detection and segmentation
  2. Periorbital area detection and tracking

Let’s work through these two stages one-by-one and see what a fully-automated lie detector based on Computer Vision techniques can look like.

1. Face detection and segmentation

For face detection, the authors first used Otsu’s method on their greyscale thermal images. Otsu’s method takes a greylevel (intensity) image and reduces it to a binary image, i.e. an image containing only two colours: white and black. A pixel is coloured white or black depending on whether it falls above or below a dynamically calculated threshold. The threshold is chosen such that it minimises the intra-class variance of the intensity values. See this page for a clear explanation of how this is done exactly.

When the binary image has been produced, the face is detected by calculating the largest connected region in the image.

(Image adapted from original publication)

Note: functions for Otsu’s method and finding the largest connected components are all available in OpenCV and are easy to use.
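For the curious, here is a pure-NumPy sketch of Otsu’s method applied to a synthetic “thermal” image (in practice you would simply call OpenCV’s cv2.threshold with the THRESH_OTSU flag, as noted above; the image values here are made up):

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: choose the threshold minimising the intensity
    variance within the two resulting classes (equivalently, maximising
    the between-class variance)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                          # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))    # class-0 intensity sums
    best_t, best_between = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[255] - cum_mean[t - 1]) / w1
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_between:
            best_between, best_t = between, t
    return best_t

# A synthetic thermal image: cool background (~30) and a warm
# face-like blob (~200) should be split cleanly by the threshold.
img = np.full((8, 8), 30, dtype=np.uint8)
img[2:6, 2:6] = 200
t = otsu_threshold(img)
binary = (img >= t).astype(np.uint8) * 255
```

The face would then be taken as the largest connected white region of `binary`, as described in the text.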

2. Periorbital area detection and tracking

Once the face has been detected and segmented out, the next step is to locate the periorbital region, which is the horizontal, rectangular region containing your eyes and top of your nose. This region has a high concentration of veins and arteries and is therefore ideal for scrutinising for micro-temperature changes.

The periorbital region can be found by dividing the face (detected in step 1) into 4 equally-spaced horizontal strips and then selecting the second region from the top. To save having to perform steps one and two for each frame, the KLT algorithm is used to track the periorbital area between frames. See this OpenCV tutorial page for a decent explanation of how this tracking algorithm works. It’s a little maths intensive – sorry! But you can at least see that it’s also easy to implement.

Temperature readings are then made from the detected region and an average calculated per frame. When the average temperature (i.e. pixel intensity) peaks during the answering of a question, the algorithm can deduce that the person is lying through their teeth!
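The simplified version of step 2 can be sketched as follows (KLT tracking omitted; the toy “thermal face” values and bounding box are made up for illustration):

```python
import numpy as np

def periorbital_mean(face, bbox):
    """Select the second of four equal horizontal strips of the face
    bounding box (the periorbital band) and return its mean pixel
    intensity, which in a thermal image acts as a temperature proxy.
    `bbox` is (x, y, w, h), as produced by the face-detection step."""
    x, y, w, h = bbox
    strip_h = h // 4
    strip = face[y + strip_h : y + 2 * strip_h, x : x + w]
    return float(strip.mean())

# Toy 8x8 "thermal face": the second strip (rows 2-3) is warmer.
face = np.full((8, 8), 100, dtype=np.uint8)
face[2:4, :] = 180
reading = periorbital_mean(face, (0, 0, 8, 8))
```

Collecting one such reading per frame gives the temperature curve whose peaks, per the paper, flag the deceptive answers.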

That’s not that complicated, right? Even though I simplified the algorithm a little (a few additional calculations are performed to assist tracking in step 2), the gist of it is there! Here you have a fully-automated, non-invasive lie detector that uses Computer Vision techniques to get results comparable to polygraph tests.

What the Future Holds in Lie Detection and Thermal Imagery

Now, let’s have a think about what a non-invasive lie detector could potentially achieve in the future.

Lie detectors are all about noticing micro-changes in blood flow, right? Can you imagine such a system tracking you around a store to gauge which products get you a little bit excited? Results could be instantly forwarded to a seller who now has the upper hand over you. Nobody will be safe from second-hand car dealers any more: to confirm anything, all the dealer has to do is ask whether you like a certain car or not.

What about business meetings? You could put a cheeky thermal camera in the corner of a meeting room and get live reports on how clients really feel about your pitch. Haggling will be a lot easier to deal with if you know what your opponent is truly thinking.

And what about poker? You will be able to beat (unethically?) that one friend who always cleans up at your “friendly” weekend poker nights.

The potential is endless, really. And who knows?! Maybe we’ll have thermal cameras in our phones one day, too? Computer vision will definitely be a powerful tool in the future 🙂

What other uses of deception detection using thermography can you think of?


Traditionally, lie detection has been performed using a polygraph test. This test, however, is invasive and needs an expert to painstakingly analyse results from the various sensors that are used. Digital thermography is looking like a viable alternative. Scientists have shown that using standard computer vision techniques, deception detection can be non-invasive, automated, and get results comparable to polygraph tests. Non-invasive lie detectors are a scary prospect considering that they could track our every move and analyse all our emotions in real time.


Samsung’s vs Apple’s face recognition technologies – and how they have been fooled

In September 2017 Apple announced iPhone X with a very neat feature called Face ID. This feature is used to recognise your face to allow you to unlock your phone. Samsung, however, has had facial recognition since the release of Android Ice Cream Sandwich way back in 2011. What is the difference between the two technologies? And how can either of them be fooled? Read on to find out.

Samsung’s Face Recognition

Samsung’s Face Unlock feature works by using the regular front camera of your phone to take a picture of your face. It analyses this picture for facial features such as the distance between the eyes, facial contours, iris colour, iris size, etc. This information is stored on your phone so that next time you try to unlock it, the phone takes a picture of you, processes it for the aforementioned data and then compares it to the information it has stored on your phone. If everything matches, your phone is unlocked.
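As an illustrative sketch only (the feature values and tolerance below are my own assumptions, not Samsung’s actual parameters), the stored-template comparison amounts to something like this:

```python
import numpy as np

def matches_enrolled(enrolled, candidate, tolerance=0.1):
    """Compare a candidate face's feature vector (landmark distances,
    contour measurements, etc.) against the enrolled template using a
    relative Euclidean distance, so the check is scale-tolerant."""
    e = np.asarray(enrolled, dtype=float)
    c = np.asarray(candidate, dtype=float)
    return bool(np.linalg.norm(e - c) / np.linalg.norm(e) < tolerance)

enrolled = [62.0, 41.5, 40.8, 55.2]    # template stored at enrolment
same_face = [61.4, 41.9, 40.5, 55.0]   # new photo of the owner
other_face = [70.1, 36.0, 47.3, 50.9]  # someone else entirely
```

Because the template is built from 2D measurements, anything that reproduces those measurements – including a printed photo of the owner – passes the check, which is exactly the weakness described next.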

The only problem is that all processing is done using 2D images. So, as you may have guessed, a simple printed photo of your face or even one displayed on another phone will fool the system. Need proof? Here’s a video of someone unlocking a Galaxy S8, which was released in April 2017, with a photo shown on another phone. It’s quite amusing.

There was a “liveness check” added to Face Unlock with the release of Android Jelly Bean in 2012. This works by attempting to detect blinking. I haven’t tried this feature but from what I’ve read on forums, it isn’t very accurate and requires a longer time to process your face – hence probably why the feature isn’t turned on by default. And yes, it could also be fooled by a close-up video of you, though this would be much harder to acquire.

Note: Samsung is aware of the security flaws of Face Unlock, which is why it does not allow identity verification for Samsung Pay to be made using it. Instead it advocates for the use of its iris recognition technology. But is that technology free from flaws? No chance, as a security researcher from Berlin has shown. He took a photo of his friend’s eye from a few metres away (!) in infrared mode (i.e. night mode), printed it out on paper, and then stuck a contact lens on the printed eye. Clever.

Apple’s Face ID

This is where the fun begins. Apple really took this feature seriously. In a nutshell, Face ID works by firstly illuminating your face with IR light (infrared – light not visible to the naked eye) and then projecting a further 30,000 (!) IR points onto your face to build a super-detailed 3D map of your facial features. Quite impressive.

This technology, however, has been in use for a long time. If you’re familiar with the Kinect camera/sensor (initially released in 2010), you’ll know it uses the same concept of infrared point projection to capture and analyse 3D motion.

So, how do you fool the ‘TrueDepth camera system’, as Apple calls it? It’s not easy because this technology is quite sophisticated. But successful attempts have already been documented in 2017.

To start off with, here’s a video showing identical twins unlocking each other’s phones. Also quite amusing. How about relatives that look similar? It’s been done! Here’s a video showing a 10-year-old boy unlocking his mother’s phone. Now that’s a little more worrisome. However, it shows that iPhone Xs can be an alternative to DNA paternity/maternity tests 🙂 Finally, in November 2017, Vietnamese hackers posted a video documenting how their 3D-printed face mask fooled Apple’s technology. Some elements, like the eyes, on this mask were printed on a standard colour printer. The model of the face was acquired in 5 minutes using a hand-held scanner.


Apple in September 2017 released a new facial recognition feature with their iPhone X called ‘Face ID’. It works by projecting IR light onto your face to build a detailed 3D map of it. It is hard to fool, but successful attempts were documented in 2017. Samsung’s facial recognition system, Face Unlock, has been around since 2011. It, however, only analyses 2D images and hence can be duped easily with a printed photo or another phone showing the owner’s face.
