Seeing Around Corners with a Laser

In this post I would like to show you some results from an interesting paper I came across recently, published last year in the prestigious journal Nature. It’s on the topic of non-line-of-sight (NLOS) imaging or, in other words, research that helps you see around corners. NLOS imaging could prove particularly useful in the future for applications such as autonomous cars.

I’ll break this post up into the following sections:

  • The LIDAR laser-mapping technology
  • LIDAR and NLOS
  • Current Research into NLOS

Let’s get cracking, then.


You may have heard of LIDAR (a term which combines “light” and “radar”). It is used very frequently as a tool to scan surroundings in 3D. It works on the same principle as radar but instead of emitting radio waves, it sends out pulses of infrared laser light and then measures the time it takes for this light to return to the sensor. Closer objects reflect the laser light back sooner than distant objects. In this way, a 3D representation of the scene can be acquired, like this one which shows a home damaged by the 2011 Christchurch Earthquake:

(image obtained from here)

LIDAR has been around for decades and I came across it very frequently in my past research work in computer vision, especially in the field of robotics. More recently, LIDAR has been experimented with in autonomous vehicles for obstacle detection and avoidance. It really is a great tool to acquire depth information of the scene.
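The time-of-flight calculation at the heart of LIDAR is simple enough to sketch in a few lines of Python (the function name and example numbers here are my own, purely for illustration):

```python
# Speed of light in metres per second.
C = 299_792_458

def lidar_distance(round_trip_time_s: float) -> float:
    """Distance to a reflecting surface from the round-trip time of a pulse.

    The pulse travels to the object and back, so the one-way distance
    is half the total path length covered at the speed of light.
    """
    return C * round_trip_time_s / 2

# A pulse that returns after ~66.7 nanoseconds hit something roughly 10 m away.
print(round(lidar_distance(66.7e-9), 2))
```

A real scanner repeats this millions of times per second while sweeping the beam, which is how the dense 3D point clouds above are built up.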

NLOS Imaging

But what if where you want to see is obscured by an object? What if you want to see what’s behind a wall or what’s in front of the car in front of you? LIDAR does not, by default, allow you to do this:

The rabbit object is not reachable by the LIDAR system (image adapted from this video)

This is where the field of NLOS imaging comes in.

The idea behind NLOS is to use sensors like LIDAR to bounce laser light off walls and then read back any reflected light.

The laser is bounced off the wall to reach the object hidden behind the occluder (image adapted from this video)

This process is repeated around a particular point (p in the image above) to obtain as much reflected light as possible. The reflected light is then analysed and an attempt is made to reconstruct any objects on the other side of the occluder.

This is still an open area of research with many assumptions (e.g. that light is not reflected multiple times by the occluded object but bounces straight back to the wall and then the sensors) but the work on this done so far is quite intriguing.
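Under the single-bounce assumption mentioned above, and in a confocal setup (laser and detector aimed at the same wall point), the timing arithmetic reduces to simple path-length bookkeeping. Here is a rough sketch of my own, not the paper’s actual reconstruction method:

```python
C = 299_792_458  # speed of light in m/s

def hidden_object_range(round_trip_time_s: float, wall_distance_m: float) -> float:
    """Distance from the wall point to the hidden object.

    Assumes the single-bounce model: the pulse travels
    device -> wall -> object -> wall -> device,
    so the total path length is 2 * wall_distance + 2 * range.
    """
    total_path_m = C * round_trip_time_s
    return total_path_m / 2 - wall_distance_m

# Device 1 m from the wall; the faint echo returns after ~13.3 ns,
# placing the hidden object roughly 1 m from the wall point.
print(round(hidden_object_range(13.3e-9, 1.0), 2))
```

Repeating this for a grid of wall points gives, for each point, a spherical shell the hidden object must lie on; untangling the intersection of all those shells is exactly what the reconstruction algorithms have to solve.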

Current Research into NLOS

The paper that I came across is entitled “Confocal non-line-of-sight imaging based on the light-cone transform”. It was published in March of last year in the journal Nature (555, no. 7696, p. 338). Nature is one of the world’s most famous and prestigious academic journals, so anything published there is truly world-class.

The experiment setup from this paper was as shown here:

The setup of the experiment for NLOS. The laser light is bounced off the white wall to hit and reflect off the hidden object (image taken from original publication)

The idea, then, was to try and reconstruct anything placed behind the occluder by bouncing laser light off the white wall. In the paper, two objects were scrutinised: an “S” (as shown in the image above) and a road sign. With a novel method of reconstruction, the authors were able to obtain the following reconstructed 3D images of the two objects:


(image adapted from original publication)

Remember, these results are obtained by bouncing light off a wall. Very interesting, isn’t it? What’s even more interesting is that the text on the street sign has been detected as well. Talk about precision! You can clearly see how, one day, this could come in handy with autonomous cars, which could use information such as this to increase safety on the roads.

A computer simulation was also created to quantify the error rates involved in the reconstruction process. The simulated setup was as shown in the above images with the bunny rabbit. The results of the simulation were as follows:

(image adapted from original publication)

The green in the image marks the reconstructed parts of the bunny superimposed on the original object. You can clearly see how well the 3D shape and structure of the object are preserved. Obviously, the parts of the bunny not visible to the laser could not be reconstructed.


This post introduced the field of non-line-of-sight imaging, which is, in a nutshell, research that helps you see around corners. The idea behind NLOS is to use sensors like LIDAR to bounce laser light off walls and then read back any reflected light. The scene behind the occlusion is then reconstructed, as far as possible, from this reflected light.

Recent results from state-of-the-art research in NLOS published in the journal Nature were also presented in this post. Although much more work is needed in this field, the results are quite impressive and show that NLOS could one day be very useful for, say, autonomous cars, which could use information such as this to increase safety on the roads.


To be informed when new content like this is posted, subscribe to the mailing list:

Please share what you just read:

The Baidu and ImageNet Controversy

Two months ago I wrote a post about some recent controversies in the computer vision industry. In this post I turn to the world of academia/research and write about something controversial that occurred there.

But since the world of research isn’t as aggressive as that of the industry, I had to go back three years to find anything worth presenting. However, this event really is interesting, despite its age, and people in research circles talk about it to this day.

The controversy in question pertains to the ImageNet challenge and the Baidu research group. Baidu is one of the largest AI and internet companies in the world. Based in Beijing, it has the 2nd largest search engine in the world and is hence commonly referred to as China’s Google. So, when it is involved in a controversy, you know it’s no small matter!

I will divide the post into the following sections:

  1. ImageNet and the Deep Learning Arms Race
  2. What Baidu did and ImageNet’s response
  3. Ren Wu’s (Ex-Baidu Researcher’s) later response (here is where things get really interesting!)

Let’s get into it.

ImageNet and the Deep Learning Arms Race

(Note: I wrote about what ImageNet is in my last post, so please read that post for a more detailed explanation.) 

ImageNet is the most famous image dataset by a country mile. Currently there are over 14 million images in ImageNet for nearly 22,000 synsets (WordNet has ~100,000 synsets). Over 1 million images also have hand-annotated bounding boxes around the dominant object in the image.

However, when the term “ImageNet” is used in CV literature, it usually refers to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) which is an annual competition for object detection and image classification organised by computer scientists at Stanford University, the University of North Carolina at Chapel Hill and the University of Michigan.

This competition is very famous. In fact, the deep learning revolution of the 2010s is widely traced back to this challenge, after a deep convolutional neural network blitzed the competition in 2012. Since then, deep learning has revolutionised our world and the industry has been forming research groups like crazy to push the boundary of artificial intelligence. Facebook, Amazon, Google, IBM, Microsoft – all the major players in IT are now in the research game, which is phenomenal to think about for people like me who remember the days of the 2000s when research was laughed at by people in the industry.

With such large names in the deep learning world, a certain “computing arms race” has ensued. Big bucks are being pumped into these research groups to obtain (and trumpet far and wide) results better than other rivals. Who can prove to be the master of the AI world? Who is the smartest company going around? Well, competitions such as ImageNet are a perfect benchmark for questions like this, which makes the ImageNet scandal quite significant.

Baidu and ImageNet

To have your object classification algorithm scored on the ImageNet Challenge, you first train it on ~1.2 million images from the ImageNet dataset. Then, you submit your code to the ImageNet server, where it is tested against a collection of 100,000 images whose labels are not publicly known. What is key, though, is that to stop people fine-tuning the parameters of their algorithms to this specific test set of 100,000 images, ImageNet only allows 2 evaluations/submissions on the test set per week (otherwise you could keep resubmitting until you hit that “sweet spot” specific to this test set).

Before the deep learning revolution, a good ILSVRC classification error rate was 25% (that’s 1 out of 4 images being classified incorrectly). After 2014, error rates have dropped to below 5%!
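The classification error rate reported for ILSVRC is usually the top-5 error: an image counts as correctly classified if the true label appears anywhere among the model’s five highest-ranked guesses. A minimal illustration (toy labels, not real ILSVRC classes):

```python
def top5_error(predictions, labels):
    """Fraction of examples whose true label is absent from the model's
    five highest-ranked guesses (the metric ILSVRC results are quoted in)."""
    wrong = sum(1 for top5, label in zip(predictions, labels) if label not in top5)
    return wrong / len(labels)

# Toy example: 4 images, each with a model's five ranked class guesses.
preds = [
    ["cat", "dog", "fox", "wolf", "lynx"],
    ["car", "bus", "van", "truck", "tram"],
    ["cup", "mug", "bowl", "pot", "jar"],
    ["owl", "hawk", "crow", "dove", "gull"],
]
labels = ["dog", "bike", "mug", "owl"]
print(top5_error(preds, labels))  # 1 miss out of 4 -> 0.25
```

On this scale, the gap between Baidu’s, Google’s and Microsoft’s numbers below amounts to a handful of images out of every thousand.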

In 2015, Baidu announced that with its new supercomputer called Minwa it had obtained a record low error rate of 4.58%, which was an improvement on Google’s error rate of 4.82% as well as Microsoft’s of 4.9%. Massive news in the computing arms race, even though the error rate differences appear to be minimal (and some would argue, therefore, that they’re insignificant – but that’s another story).

However, a few days after this declaration, an initial announcement was made by ImageNet:

It was recently brought to our attention that one group has circumvented our policy of allowing only 2 evaluations on the test set per week.

Three weeks later, a follow-up announcement was made stating that the perpetrator of this act was Baidu. ImageNet had conducted an analysis and found that 30 accounts connected to Baidu had been used between November 28th, 2014 and May 13th, 2015 to make, on average, four times the permitted number of submissions.

As a result, ImageNet disqualified Baidu from that year’s competition and banned them from re-entering for a further 12 months.

Ren Wu, a distinguished AI scientist and head of the research group at the time, apologised for this mistake. A week later he was dismissed from the company. But that’s not the end of the saga.

Ren Wu’s Response

Here is where things get really interesting. 

A few days after being fired from Baidu, Ren Wu sent an email to Enterprise Technology in which he denied any wrongdoing:

We didn’t break any rules, and the allegation of cheating is completely baseless

Whoa! Talk about opening a can of worms!

Ren stated that there is “no official rule specify [sic] how many times one can submit results to ImageNet servers for evaluation” and that this regulation only appears once a submission is made from an account. From this he came to understand that 2 submissions per week could be made per account/individual rather than per team. Since Baidu had 5 authors working on the project, he argues that he was allowed to make 10 submissions per week.

I’m not convinced, though, because he still used 30 accounts (purportedly owned by junior students assisting in the research) to make these submissions. Moreover, he admits that on two occasions even the 10-submission threshold was breached – so he definitely did break the rules.

Things get even more interesting, however, when he states that he officially apologised just for those two occasions as requested by his management:

A mistake in our part, and it was the reason I made a public apology, requested by my management. Of course, this was my biggest mistake. And things have been gone crazy since. [emphasis mine]

Whoa! Another can of worms. He apologised at the request of his management and now states that doing so was a mistake. It looks like he’s accusing Baidu of using him as a scapegoat in this whole affair. Two months later he confirmed this to the EE Times, stating:

I think I was set up

Well, if that isn’t big news, I don’t know what is! I personally am not convinced by Ren’s arguments. But it at least shows that the academic/research world can be exciting at times, too 🙂



The Top Image Datasets and Their Challenges

In previous posts of mine I have discussed how image datasets have become crucial in the deep learning (DL) boom of computer vision of the past few years. In deep learning, neural networks are told to (more or less) autonomously discover the underlying patterns in classes of images (e.g. that bicycles are composed of two wheels, a handlebar, and a seat). Since images are visual representations of our reality, they contain the inherent complex intricacies of our world. Hence, to train good DL models that are capable of extracting the underlying patterns in classes of images, deep learning needs lots of data, i.e. big data. And it’s crucial that this big data that feeds the deep learning machine be of top quality.

In light of Google’s recent announcement of an update to its image dataset as well as its new challenge, in this post I would like to present to you the top 3 image datasets that are currently being used by the computer vision community, along with their associated challenges:

  1. ImageNet and ILSVRC
  2. Open Images and the Open Images Challenge
  3. COCO Dataset and the four COCO challenges of 2018

I wish to talk about the challenges associated with these datasets because challenges are a great way for researchers to compete against each other and in the process to push the boundary of computer vision further each year!


ImageNet and ILSVRC

This is the most famous image dataset by a country mile. But confusion often accompanies what ImageNet actually is, because the name is frequently used to describe two things: the ImageNet project itself and its visual recognition challenge.

The former is a project whose aim is to label and categorise images according to the WordNet hierarchy. WordNet is an open-source database of words organised hierarchically into sets of synonyms. For example, words like “dog” and “cat” can be found in the following knowledge structure:

An example of a WordNet synset graph (image taken from here)

Each node in the hierarchy is called a “synonym set” or “synset”. This is a great way to categorise words because whatever noun you may have, you can easily extract its context (e.g. that a dog is a carnivore) – something very useful for artificial intelligence.
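To make the idea of extracting context concrete, here is a toy, made-up slice of a WordNet-style hypernym hierarchy and a function that walks up it (the real WordNet is far larger and is accessible programmatically, e.g. via the NLTK library):

```python
# A tiny, invented slice of a WordNet-style hierarchy: each synset
# points at its hypernym, i.e. the more general concept above it.
HYPERNYM = {
    "dog": "canine",
    "cat": "feline",
    "canine": "carnivore",
    "feline": "carnivore",
    "carnivore": "mammal",
    "mammal": "animal",
}

def context_of(synset: str) -> list:
    """Walk up the hierarchy, collecting every more general concept."""
    chain = []
    while synset in HYPERNYM:
        synset = HYPERNYM[synset]
        chain.append(synset)
    return chain

print(context_of("dog"))  # ['canine', 'carnivore', 'mammal', 'animal']
```

This is exactly the kind of lookup that lets a system infer that a dog is a carnivore without that fact being stated explicitly.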

The idea with the ImageNet project, then, is to have 1000+ images for each and every synset in order to also have a visual hierarchy to accompany WordNet. Currently there are over 14 million images in ImageNet for nearly 22,000 synsets (WordNet has ~100,000 synsets). Over 1 million images also have hand-annotated bounding boxes around the dominant object in the image.

Example image of a kit fox from ImageNet showing hand-annotated bounding boxes

You can explore the ImageNet and WordNet dataset interactively here. I highly recommend you do this!

Note: by default only URLs to images in ImageNet are provided because ImageNet does not own the copyright to them. However, a download link can be obtained to the entire dataset if certain terms and conditions are accepted (e.g. that the images will be used for non-commercial research). 

Having said this, when the term “ImageNet” is used in CV literature, it usually refers to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), which is an annual competition for object detection and image classification. This competition is very famous. In fact, the DL revolution of the 2010s is widely traced back to this challenge, after a deep convolutional neural network blitzed the competition in 2012.

The motivation behind ILSVRC, as the website says, is:

… to allow researchers to compare progress in detection across a wider variety of objects — taking advantage of the quite expensive labeling effort. Another motivation is to measure the progress of computer vision for large scale image indexing for retrieval and annotation.

The ILSVRC competition has its own image dataset that is actually a subset of the ImageNet dataset. This meticulously hand-annotated dataset has 1,000 object categories (the full list of these synsets can be found here) spread over ~1.2 million images. Half of these images also have bounding boxes around the class category object.

The ILSVRC dataset is most frequently used to train object classification models such as VGG16, InceptionV3, ResNet, etc., which are publicly available for use. If you ever download one of these pre-trained models (e.g. InceptionV3) and it says that it can detect 1000 different classes of objects, then it was almost certainly trained on this dataset.

Google’s Open Images

Google is a new player in the field of datasets but you know that when Google does something it will do it with a bang. And it has not disappointed here either.

Open Images is a new dataset, first released in 2016, that contains ~9 million images – fewer than ImageNet. What makes it stand out is that these images are mostly of complex scenes that span thousands of object classes. Moreover, ~2 million of these images are hand-annotated with bounding boxes, making Open Images by far the largest existing dataset with object location annotations. In this subset of images, there are ~15.4 million bounding boxes across 600 object classes. These objects are also part of a hierarchy (see here for a nice image of this hierarchy), but one that is nowhere near as complex as WordNet’s.

Open Images example image with bounding box annotation

As of a few months ago, there is also a challenge associated with Open Images called the “Open Images Challenge”. It is an object detection challenge and, more interestingly, it also includes a visual relationship detection track (e.g. “woman playing a guitar” rather than just “guitar” and “woman”). The inaugural challenge will be held at this year’s European Conference on Computer Vision. It looks like this will be a super interesting event considering the complexity of the images in the dataset and, as a result, I foresee it becoming the de facto object detection challenge in the near future. I am certainly looking forward to the results, which should be posted around the time of the conference (September 2018).

Microsoft’s COCO Dataset

Microsoft is in this game too, with its Common Objects in Context (COCO) dataset. Containing ~200K images, it’s relatively small, but what makes it stand out are the challenges built around the additional annotations it provides for each image, for example:

  • object segmentation information rather than just bounding boxes of objects (see image below)
  • five textual captions per image such as “the a380 air bus ascends into the clouds” and “a plane flying through a cloudy blue sky”.

The first of these points is worth providing an example image of:

Notice how each object is segmented rather than outlined by a bounding box as is the case with ImageNet and Open Images examples? This object segmentation feature of the dataset makes for very interesting challenges because segmenting an object like this is many times more difficult than just drawing a rectangular box around it.
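One way to see how much richer a segmentation mask is than a bounding box: the box can always be derived from the mask, but never the other way round. A small NumPy sketch (toy mask and function name of my own):

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Collapse a per-pixel segmentation mask into the bounding box an
    ImageNet/Open Images-style annotation would store.

    Returns (row_min, row_max, col_min, col_max), inclusive.
    """
    rows, cols = np.where(mask)
    return tuple(int(v) for v in (rows.min(), rows.max(), cols.min(), cols.max()))

# A 5x5 image where the "object" occupies a small off-centre blob.
mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 2:5] = True
print(mask_to_bbox(mask))  # (1, 2, 2, 4)
```

The reverse mapping, from box back to per-pixel mask, is precisely the hard part that the segmentation challenges ask models to learn.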

COCO challenges are also held annually. But each year’s challenge is slightly different. This year the challenge has four tracks:

  1. Object segmentation (as in the example image above)
  2. Panoptic segmentation task, which requires object and background scene segmentation, i.e. a task to segment the entire image rather than just the dominant objects in an image
  3. Keypoint detection task, which involves simultaneously detecting people and localising their keypoints
  4. DensePose task, which involves simultaneously detecting people and localising their dense keypoints (i.e. mapping all human pixels to a 3D surface of the human body)


Very interesting, isn’t it? There is always something engaging taking place in the world of computer vision!



Computer Vision in the Fashion Industry – Part 3

In my last two posts I’ve looked at computer vision and the fashion industry. I introduced the lucrative fashion industry and showed what Microsoft recently did in this field with computer vision. I also presented two papers from last year’s International Conference on Computer Vision (ICCV).

In this post, the final of the series, I would like to present to you two papers from last year’s ICCV workshop that was entirely devoted to fashion:

  1. “Dress like a Star: Retrieving Fashion Products from Videos” (N. Garcia and G. Vogiatzis, ICCV Workshop, 2017, pp. 2293-2299) [source code]
  2. “Multi-Modal Embedding for Main Product Detection in Fashion” (Rubio et al., ICCV Workshop, 2017, pp. 2236-2242) [source code]

Once again, I’ve provided links to the source code so that you can play around with the algorithms as you wish. Also, as in previous posts, I am going to provide you with just an overview of these publications. Most papers published at this level require a (very) strong academic background to fully grasp, so I don’t want to go into that much detail here.

Dress Like a Star

This paper is impressive because it was written by a PhD student from Birmingham in the UK. By publishing at the ICCV Workshop (I discussed in my previous post how important this conference is), Noa Garcia has pretty much guaranteed her PhD and quite possibly any future research positions. Congratulations to her! However, I do think they cheated a bit to get into this ICCV workshop, as I explain further down.

The idea behind the paper is to provide a way to retrieve clothing and fashion products from video content. Sometimes you may be watching a TV show, film or YouTube clip and think to yourself: “Oh, that shirt looks good on him/her. I wish I knew where to buy it.”

The proposed algorithm works by providing it with a photo of a screen that is playing the video content; it then queries a database and returns the matching clothing content in the frame, as shown in this example image:

(image source: original publication)

Quite a neat idea, wouldn’t you say?

The algorithm has three main modules: product indexing, training phase, and query phase.

The first two modules are performed offline (i.e. before the system is released for use). They require one database to be set up with video clips and another with clothing articles. Then, the clothing items and video frames are matched to each other with some heavy computing (this is why it has to be performed offline – there is too much computation here to be done in real time).

You may be thinking: but heck, how can you possibly store and analyse all video content with this algorithm!? Well, to save storage and computation space, each video is processed (offline) and divided into shots/scenes that are then summarised into a single vector containing features (features are small “interesting” or “stand-out” patches in images).

Hence, in the query phase, all you need to do is detect features in the provided photo, search for these features in the database (rather than the raw frames), locate the scene depicted in the photo in the video database, and then extract the clothing articles in the scene.
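The retrieval step described above boils down to nearest-neighbour search over shot-level feature vectors. A minimal sketch of the idea (the toy 3-D vectors and distance here stand in for the paper’s actual descriptors):

```python
import numpy as np

def nearest_shot(query_vec, shot_vecs):
    """Index of the stored shot whose summary feature vector is closest
    (in Euclidean distance) to the features of the query photo."""
    dists = np.linalg.norm(shot_vecs - query_vec, axis=1)
    return int(np.argmin(dists))

# Offline: one summary vector per shot (toy 3-D "features").
shots = np.array([
    [0.9, 0.1, 0.0],   # shot 0
    [0.0, 0.8, 0.2],   # shot 1
    [0.1, 0.1, 0.9],   # shot 2
])

# Online: features extracted from the user's photo of the screen.
query = np.array([0.05, 0.75, 0.25])
print(nearest_shot(query, shots))  # shot 1 is the best match
```

Because only one compact vector per shot is stored, the online search stays cheap even when the database covers many hours of video.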

To evaluate this algorithm, the authors set up a system with 40 movies (80+ hours of video). They were able to retrieve the scene from a video depicted in a photo with an accuracy of 87%.

Unfortunately, in their experiments, they did not set up a fashion item database but left this part out as “future work”. That’s a little bit of a let down and I would call that “twisting the truth” in order to get into a fashion-dedicated workshop. But, as they state in the conclusion: “the encouraging experimental results shown here indicate that our method has the potential to index fashion products from thousands of movies with high accuracy”.

I’m still calling this cheating 🙂

Main Product Detection in Fashion

This paper discusses an algorithm to extract the main clothing product in an image according to any textual information associated with it – like in a fashion magazine, for example. The purpose of this algorithm is to extract these single articles of clothing to then be able to enhance other datasets that need to solely work with “clean” images. Such datasets would include ones used in fashion catalogue searches (e.g. as discussed in the first post in this series) or systems of “virtual fitting rooms” (e.g. as discussed in the second post in this series).

The algorithm works by utilising deep neural networks (DNNs). (Is that a surprise? There’s just no escaping deep learning nowadays, is there?) To cut a long story short, neural networks are trained to extract bounding boxes of fashion products that are then used to train other DNNs to match products with textual information.

Example results from the algorithm are shown below.

(image source: original publication)

You can see above how the algorithm nicely finds all the articles of clothing (sunglasses, shirt, necklace, shoes, handbag) but only highlights the pants as the main product in the image according to the textual information associated with the picture.


In this post, the final of the series, I presented two papers from last year’s ICCV workshop that was entirely devoted to fashion. The first paper describes a way to retrieve clothing and fashion products from video content by providing it with a photo of a computer/TV screen. The second paper discusses an algorithm to extract the main clothing product in an image according to any textual information associated with it.

I always say that it’s interesting to follow the academic world because every so often what you see happening there ends up being brought into our everyday lives. Some of the ideas from the academic world I’ve looked at in this series leave a lot to be desired but that’s the way research is: one small step at a time.



Computer Vision in the Fashion Industry – Part 1

(image source)

Computer vision has a plethora of applications in the industry: cashier-less stores, autonomous vehicles (including those loitering on Mars), security (e.g. face recognition) – the list goes on endlessly. I’ve already written about the incredible growth of this field in the industry and, in a separate post, the reasons behind it.

In today’s post I would like to discuss computer vision in a field that I haven’t touched upon yet: the fashion industry. In fact, I would like to devote my next few posts to this topic because of how ingeniously computer vision is being utilised in it.

In this post I will introduce the fashion industry and then present something that Microsoft recently did in the field with computer vision.

In my next few posts I would like to present what the academic world (read: cutting-edge research) is doing in this respect. You will see quite amazing things there, so stay tuned for that!

The Fashion Industry

The fashion industry is huge. And that’s probably an understatement. At present it is estimated to be worth US$2.4 trillion. How big is that? If the fashion industry were a country, it would be ranked as the 7th largest economy in the world – above my beloved Australia and other countries like Russia and Spain. Utterly huge.

Moreover, it is reported to be growing at a steady rate of 5.5% each year.

On the e-commerce market, the clothing and fashion sectors dominate. In the EU, for example, the majority of the 530 billion euro e-commerce market is made up of this industry. Moreover, The Economic Times predicts that the online fashion market will grow three-fold in the next few years. The industry appears to be in agreement with this forecast considering some of the major takeovers being currently discussed. The largest one on the table at the moment is of Flipkart, India’s biggest online store that attributes 50% of its transactions to fashion. Walmart is expected to win the bidding war by purchasing 73% of the company that it has valued at US$22 billion. Google is expected to invest a “measly” US$3 billion also. Ridiculously large amounts of money!

So, if the industry is so huge, especially online, then it only makes sense to bring artificial intelligence into play. And since fashion is a visual thing, this is a perfect application for computer vision!

(I’ve always said it: now is a great time to get into computer vision)

Microsoft and the Fashion Industry

3 weeks ago, Microsoft published on their Developer Blog an interesting article detailing how they used deep learning to build an e-commerce catalogue visual search system for “a successful international online fashion retailer” (which one has not been disclosed). I would like to present a summary of this article here because I think it is a perfect introduction to what computer vision can do in the fashion industry. (In my next post you will see how what Microsoft did is just a drop in the ocean compared to what researchers are currently able to do.)

The motivation behind this search system was to save this retailer’s time in finding whether each new arriving item matches a merchandise item already in stock. Currently, employees have to manually look through catalogues and perform search and retrieval tasks themselves. For a large retailer, sifting through a sizable catalogue can be a time consuming and tedious process.

So, the idea was to be able to take a photo from a mobile phone of a piece of clothing or footwear and search for it in a database for matches.

You may know that Google already has image search functionalities. Microsoft realised, however, that for their application in fashion to work, it was necessary to construct their own algorithm that would include some initial pre-processing of images. The reason for this is that the images in the database had a clean background whereas if you take a photo on your phone in a warehouse setting, you will capture a noisy background. The images below (taken from the original blog post) show this well. The first column shows a query image (taken by a mobile phone), the second column the matching image in the database.

Microsoft, hence, worked on a background subtraction algorithm that would remove the background of an image and only leave the foreground (i.e. salient fashion item) behind.

Background subtraction is a well-known technique in the computer vision field and is still very much an open area of research. OpenCV, in fact, has a few very interesting implementations of background subtraction available. See this OpenCV tutorial for more information on these.
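The simplest form of the idea can be shown with plain NumPy: compare each frame against a background reference and keep the pixels that differ markedly. This is a deliberately naive sketch of my own (real subtractors, such as OpenCV’s MOG2, model the background statistically over time):

```python
import numpy as np

def foreground_mask(frame: np.ndarray, background: np.ndarray, threshold: int = 30):
    """Mark as foreground any pixel that differs from the background
    reference by more than `threshold` intensity levels."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Toy greyscale frames: a flat grey background and a frame with a bright object.
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200  # the "object"
print(int(foreground_mask(frame, background).sum()))  # 4 foreground pixels
```

The casts to a signed type matter: subtracting `uint8` arrays directly would wrap around instead of going negative.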


Microsoft decided not to use these but instead to try out other methods for the task. It first tried GrabCut, a very popular background segmentation algorithm first introduced in 2004. In fact, this algorithm was developed by Microsoft researchers, and Microsoft still owns the patent rights to it (which is why you won’t find it in the main repository of OpenCV any more).

I won’t go into too much detail on how GrabCut works but basically, for each image, you first need to manually provide a bounding box of the salient object in the foreground. After that, GrabCut builds a model (i.e. a mathematical description) of the background (area outside of the bounding box) and foreground (area inside the bounding box) and using these models iteratively trims inside the rectangle until it deduces where the foreground object lies. This process can be repeated by then manually indicating where the algorithm went wrong inside the bounding box.

The image below (from the original 2004 publication) illustrates this process. Note that the red rectangle was provided manually, as were the white and red strokes in the bottom-left image.

The images below show some examples provided by Microsoft from their application. The first column shows raw images from a mobile phone taken inside a warehouse, the second column shows initial results using GrabCut, and the third column shows images using GrabCut after additional human interaction. These results are pretty good.


But Microsoft wasn’t happy with GrabCut for one important reason: it requires human interaction. Microsoft wanted a solution that would work given nothing more than a photo of a product. So, it decided to move to a deep learning solution: Tiramisu (yum, I love that cake…)

Tiramisu is a type of DenseNet, which in turn is a specific type of Convolutional Neural Network (CNN). Once again, I’m not going to go into detail on how this network works. For more information see this publication that introduced DenseNets and this paper that introduced Tiramisu. But basically, in a DenseNet each layer is connected to every other layer, whereas in a standard CNN each layer is connected only to its neighbouring layers.
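To make the connectivity difference concrete, here is a toy NumPy sketch (not a real DenseNet; the "layers" are just random linear maps with a ReLU) showing how each layer receives the concatenation of the input and all previous layer outputs:

```python
import numpy as np

def toy_dense_block(x, num_layers=3, growth=4, seed=0):
    """Toy dense connectivity: layer i sees the input plus ALL previous outputs."""
    rng = np.random.default_rng(seed)
    features = [x]                                  # everything produced so far
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)     # the dense connection
        w = rng.standard_normal((inp.shape[-1], growth))
        features.append(np.maximum(inp @ w, 0.0))   # "layer": linear map + ReLU
    return np.concatenate(features, axis=-1)

x = np.ones((2, 8))          # a batch of 2 samples with 8 input features
y = toy_dense_block(x)
print(y.shape)               # (2, 20): 8 input features + 3 layers * 4 new each
```

In a real DenseNet the concatenation happens between convolutional feature maps, but the reuse of all earlier features is the same, and it is part of why these networks do well on small datasets.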

DenseNets work (surprisingly?) well on relatively small datasets. For specific tasks using deep neural networks, you usually need a few thousand example images for each class you are trying to classify. DenseNets can get remarkable results with around 600 images (which is still a lot but at least a bit more manageable).

So, Microsoft trained a Tiramisu model from scratch with two classes: foreground and background. Only 249 images were provided for each class! The foreground and background training images were segmented using GrabCut with human interaction. The model achieved an accuracy rate of 93.7% at the training stage. The example image below shows an original image, the corresponding labelled training image (white is foreground and black is background), and the predicted Tiramisu result. Pretty good!

How did it fare in the real world? Apparently quite well. Here are some example images. The top row shows the automatically segmented image (i.e. with the background subtracted out) and the bottom row shows the original input images. Very neat 🙂

The segmented images (e.g. top row in the above image) were then used to query a database. How this querying took place and what algorithm was used to detect potential matches is, however, not described in the blog post.

Microsoft has released all their code from this project so feel free to take a look yourselves.


In this post I introduced the topic of computer vision in the fashion industry. I described how the fashion industry is a huge business currently worth approximately US$2.4 trillion and how it is dominating on the online market. Since fashion is a visual trade, this is a perfect application for computer vision.

In the second part of this post I looked at what Microsoft did recently to develop a catalogue visual search system. They performed background subtraction on photos of fashion items using a DenseNet solution and these segmented images were used to query an already-existing catalogue.

Stay tuned for my next post which will look at what academia has been doing with respect to computer vision and the fashion industry.


To be informed when new content like this is posted, subscribe to the mailing list:

Please share what you just read:

Amazon Go – Computer Vision at the Forefront of Innovation

Where would a computer vision blog be without a post about the new cashier-less store recently opened to the public by Amazon? Absolutely nowhere.

But I don’t need additional motivation to write about Amazon Go (as the store is called) because I am, to put it simply, thrilled and excited about this new venture. This is innovation at its finest where computer vision is playing a central role.

How can you not get enthusiastic about it, then? I always love it when computer vision makes the news and this is no exception.

In this post, I wish to talk about Amazon Go under four headings:

  1. How it all works from a technical (as much as is possible) and non-technical perspective,
  2. Some of the reported issues prior to public opening,
  3. Some reported in-store issues post public opening, and
  4. Some potential unfavourable implications cashier-less stores may have in the future (just to dampen the mood a little)

So, without further ado…

How it Works – Non-Technically & Technically

The store has a capacity of around 90 people – so it’s fairly small in size, like a convenience store. To enter it you first need to download the official Amazon app and connect it to your Amazon Prime account. You then walk up to a gate like you would at a metro/subway and scan a QR code from the app. The gate opens and your shopping experience begins.

Inside the store, if you wish to purchase something, you simply pick it up off the shelf and put it in your bag or pocket. Once you’re done, you walk out of the shop and a few minutes later you get emailed a receipt listing all your purchases. No cashiers. No digging around for money or cards. Easy as pie!

What happens on the technical side of things behind the scenes? Unfortunately, Amazon hasn’t disclosed much at all, which is a bit of a shame for nerds like me. But I shouldn’t complain too much, I guess.

What we do know is that sensor fusion is employed (sensor fusion is when data is combined from multiple sensors/sources to provide a higher degree of accuracy) along with deep learning.

Hundreds of cameras and depth sensors are attached to the ceiling around the store:

The cameras and depth sensors located on the ceiling of the store (image source)

These track you and your movements (using computer vision!) throughout your expedition. Weight sensors can also be found on the shelves to assist the cameras in discerning which products you have chosen to put into your shopping basket.
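Amazon hasn't revealed how it combines these signals, but the general idea of sensor fusion can be illustrated with a toy Bayesian update that merges camera-tracking evidence with shelf-weight evidence. All names and probabilities below are invented purely for illustration:

```python
# Toy sensor fusion: combine camera-tracking evidence and shelf-weight
# evidence to decide which of two customers, A or B, picked up an item.
# All probabilities below are invented purely for illustration.

def fuse(prior_a, cam, weight):
    """Bayesian fusion assuming the two sensors err independently."""
    post_a = prior_a * cam["A"] * weight["A"]
    post_b = (1.0 - prior_a) * cam["B"] * weight["B"]
    return post_a / (post_a + post_b)  # normalise to a probability

# The camera tracker mildly favours A; the weight sensor strongly favours A
# (the removed item's shelf is closest to A's tracked position).
p_a = fuse(prior_a=0.5,
           cam={"A": 0.6, "B": 0.4},
           weight={"A": 0.9, "B": 0.2})
print(round(p_a, 3))  # 0.871: two uncertain signals combine into a confident call
```

This is the appeal of fusion: neither sensor alone is decisive, but their combination can be.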

(Note: sensor fusion is also being employed in autonomous cars. Hopefully I’ll be writing about this as well soon.)

In 2015, Amazon filed a patent application for its cashier-less store in which it stated the use of RGB cameras (i.e. colour cameras) along with facial recognition. TechCrunch, however, has reported that the Vice President of Technology at Amazon Go told them that no facial recognition algorithms are currently being used.

In-Store Issues Prior to Public Opening

Although the store opened its doors to the public a few weeks ago, it has been open to employees since December 2016. Initially, Amazon expected the store to be ready for public use a few months after that, but public opening was delayed by nearly a year due to “technical problems”.

We know what some of the dilemmas behind these “technical problems” were.

Firstly, Amazon had problems tracking more than 20 people in the store. If you’ve ever worked on person-tracking software, you’ll know how hard it is to track a crowd of people with similar body types and wearing similar clothes. But it looks like this has been resolved (to at least a satisfactory level for them). It’s a shame for us, though, to not be given more information on how Amazon managed to get this to work.

Funnily enough, some employees of Amazon knew about this problem and in November last year tried to see if a solution had been developed. Three employees dressed up in Pikachu costumes (as reported by Bloomberg here) while doing their round of shopping to attempt to fool the system. Amazon Go passed this thorough, systematic, and very scientific test. Too bad I couldn’t find any images or videos of this escapade!

We also know that initially engineers were assisting the computer vision system behind the scenes. The system would let these people know when it was losing confidence with its tracking results and would ask them to intervene, at least minimally. Nobody is supposedly doing this task any more.

Lastly, I also found information stating that the system would run into trouble when products were taken off the shelf and placed back on a different shelf. This was reported to have occurred when employees brought their children into the store and they ran wild a little (as children do).

This also appears to have been taken care of because someone from the public attempted to do this on purpose last week (see this video) but to no adverse effects, it would seem.

It’s interesting to see the growing pains that Amazon Go had to go through, isn’t it? How they needed an extra year to try to iron out all these creases. This is such a huge innovation. Makes you wonder what “creases” autonomous cars will have when they become more prominent!

In-Store Issues Post Public Opening

But, alas. It appears as though not all creases were ironed out to perfection. Since Amazon Go’s opening a few weeks ago, two issues have been written about.

The first is of Deirdre Bosa of CNBC not being charged for a small tub of yoghurt:

The Vice President of Amazon Go responded in the following way:

First and foremost, enjoy the yogurt on us. It happens so rarely that we didn’t even bother building in a feature for customers to tell us it happened. So thanks for being honest and telling us. I’ve been doing this a year and I have yet to get an error.

The yoghurt manufacturer replied to that tweet also:

To which Deirdre responded: “Thanks Siggi’s! But I think it’s on Amazon :)”

LOL! 🙂

But as Amazon Go stated, it’s a rarity for these mistakes to happen. Or is that only the case until someone works out a flaw in the system?

Well, it seems as though someone has!

In this video, Tim Pool states that he managed to walk out of the Amazon Go store with a bag full of products and was only charged for one item. According to him it is “absurdly easy to take a bag full of things and not get charged”. That’s a little disconcerting. It’s one thing when the system makes a mistake every now and then. It’s another thing when someone has worked out how to break it entirely.

Tim Pool says he has contacted Amazon Go to let them know of the major flaw. Amazon confirmed with him that he did not commit a crime but “if used in practice what we did would in fact be shoplifting”.

Ouch. I bet engineers are working on this frantically as we speak.

One more issue worth mentioning (not really a flaw, but something that could also be abused) is that, at the moment, you can request a refund on any item without returning it. No questions asked. Linus Tech Tips shows in this video how easily this can be done. Of course, since your Amazon Go account needs to be linked to your Amazon Prime account, if you do this too many times Amazon will probably catch on and take some form of preventative action against you, or even verify your claims by looking back at past footage of you.


Cons of Amazon Go

Like I said earlier, I am really excited about Amazon Go. I always love it when computer vision spearheads innovation. But I also think it’s important to discuss in this post the potential unfavourable implications of a cashier-less store.

Potential Job Losses

The first most obvious potential con of Amazon Go is the job losses that might ensue if this innovation catches on. Considering that 3.5 million people in the US are employed as cashiers (it’s the second-most common job in that country), this issue needs to be raised and discussed. Heck, there have already been protests in this respect outside of Amazon Go:

Protests in front of the Amazon Go store (image source)

Bill Ingram, the organiser of the protest shown above, asks: “What will all the cashiers do once their jobs are automated?”

Amazon, not surprisingly, has issued statements on this topic. It has said that although some jobs may be taken by automation, people can be relocated to improve other areas of the store by, for example:

Working in the kitchen and the store, prepping ingredients, making breakfast, lunch and dinner items, greeting customers at the door, stocking shelves and helping customers

Let’s also not forget that new jobs have also been created. For example, additional people need to be hired to manage the technological infrastructure behind this huge endeavour.

Personally, I’m not a pessimist about automation either. The industrial revolution that brought automation to so many walks of life was hard at first, but society found ways to retrain people for other areas of work. The same will happen, I believe, if cashier-less stores become prominent (and autonomous cars too, for that matter).

An Increase in Unhealthy Impulse Purchases

Manoj Thomas, a professor of marketing at Cornell University, has stated that our shopping behaviour will change around cashier-less stores:

[W]e know that when people use any abstract form of payment, they spend more. And the type of products they choose changes too.

What he’s saying is that psychological research has shown that the more distance we put between ourselves and the “pain of paying”, the more discipline we need to avoid those pesky impulse purchases. Having cash physically in your hand means you can see what you’re doing with your money more easily. And that extra bit of time waiting in line at the cashier could be time enough to reconsider purchasing that chocolate and vanilla tub of ice cream :/

Even More Surveillance

And then we have the perennial question of surveillance. When is too much, too much? How much more data about us can be collected?

With such sophisticated surveillance in-store, companies are going to have access to even more behavioural data about us: which products I looked at for a long time; which products I picked up but put back on the shelf; my usual path around a store; which advertisements made me smile – the list goes on. Targeted advertising will become even more effective.

Indeed, Bill Ingram’s protest pictured above was also about this (hence why masks were worn to it). According to him, we’re heading in the wrong direction:

If people like that future, I guess they can jump into it. But to me, it seems pretty bleak.

Harsh, but there might be something to it.

Less Human Interaction

Albert Borgmann, a great philosopher on technology, coined the term device paradigm in his book “Technology and the Character of Contemporary Life” (1984). In a nutshell, the term is used to explain the hidden, detrimental nature and power of technology in our world (for a more in-depth explanation of the device paradigm, I highly recommend you read his philosophical works).

One of the things he laments is how we are increasingly losing daily human interactions due to the proliferation of technology. The sense of a community with the people around us is diminishing. Cashier-less stores are pushing this agenda further, it would seem. And considering, according to Aristotle anyway, that we are social creatures, the more we move away from human interaction, the more we act against our nature.

The Chicago Tribune wrote a little about this at the bottom of this article.

Is this something worth considering? Yes, definitely. But only in the bigger picture of things, I would say. At the moment, I don’t think accusing Amazon Go of trying to damage our human nature is the way to go.

Personally, I think this initiative is something to celebrate – albeit, perhaps, with just the faintest touch of reservation. 


In this post I discussed the cashier-less store “Amazon Go” recently opened to the public. I looked at how the store works from a technical and non-technical point of view. Unfortunately, I couldn’t say much from a technical angle because of the little amount of information that has been disclosed to us by Amazon. I also discussed some of the issues that the store has dealt with and is dealing with now. I mentioned, for example, that initially there were problems in trying to track more than 20 people in the store. But this appears to have been solved to a satisfactory level (for Amazon, at least). Finally, I dampened the mood a little by holding a discussion on the potential unfavourable implications that a proliferation of cashier-less stores may have on our societies. Some of the issues raised here are important but ultimately, in my humble opinion, this endeavour is something to celebrate – especially since computer vision is playing such a prominent role in it.



Gait Recognition – Another Form of Biometric Identification

I was watching The Punisher on Netflix last week and there was a scene (no spoilers, promise) in which someone was recognised from CCTV footage by the way they were walking. “Surely, that’s another example of Hollywood BS”, I thought to myself – “there’s no way that’s even remotely possible”. So, I spent the last week researching this – and to my surprise it turns out that it is not a load of garbage after all! Gait recognition is another legitimate form of biometric identification/verification.

In this post I’m going to present to you my past week’s research into gait recognition: what it is, what it typically entails, and what the current state-of-the-art is in this field. Let me just say that what scientists are able to do now in this respect surprised me immensely – I’m sure it’ll surprise you too!

Gait Recognition

In a nutshell, gait recognition aims to identify individuals by the way they walk. It turns out that our walking movements are quite unique, a little like our fingerprints and irises. Who knew, right!? Hence, there has been a lot of research in this field in the past two decades.

There are significant advantages to this form of identity verification. These include the fact that it can be performed from a distance (e.g. using CCTV footage), it is non-invasive (the person may not even know they are being analysed), and it does not necessarily require high-resolution images to obtain good results.

The Framework for Automatic Gait Recognition

Trawling through the literature on the subject, I found that scientists have used various ways to capture people’s movements for analysis, e.g. using 3D depth sensors or even using pressure sensors on the floor. I want to focus on the use case shown in The Punisher where recognition was performed from a single, stationary security camera. I want to do this simply because CCTV footage is so ubiquitous today and because pure and neat Computer Vision techniques can be used on such footage.

In this context, gait recognition algorithms are typically composed of three steps:

  1. Pre-processing to extract silhouettes
  2. Feature extraction
  3. Classification

Let’s take a look at these steps individually.

1. Silhouette extraction

Silhouette extraction of subjects is generally performed by subtracting the background image from each frame. Once the background is subtracted, you’re left with foreground objects. The pixels associated with these objects can be coloured white and then extracted.

Background subtraction is a heavily studied field and is by no means a solved problem in Computer Vision. OpenCV provides a few interesting implementations of background subtraction. For example, a background can be learned over time (i.e. you don’t have to manually provide it). Some implementations also allow for things like illumination changes (especially useful for outdoor scenes) and some can also deal with shadows. Which technique is used to subtract the background from frames is irrelevant as long as reasonable accuracy is obtained.

Example of silhouette extraction

2. Feature extraction

Various features can be extracted once we have the silhouettes of our subjects. Typically, a single gait period (a gait cycle) is first detected, which is the sequence of frames in which you take one step with each foot. This is useful because your gait pattern repeats itself, so there’s no need to analyse anything more than one cycle.
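One simple way to detect a gait cycle is to track some periodic quantity of the silhouette over time (its bounding-box width, say) and estimate the period from the autocorrelation of that signal. A minimal sketch, with a synthetic width signal standing in for real measurements:

```python
import numpy as np

def estimate_gait_period(widths):
    """Estimate the period (in frames) of a 1-D gait signal via autocorrelation."""
    x = np.asarray(widths, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation, lags 0..N-1
    ac = ac / ac[0]                                    # normalise by zero-lag power
    for lag in range(1, len(ac) - 1):                  # first local maximum after lag 0
        if ac[lag - 1] < ac[lag] >= ac[lag + 1]:
            return lag
    return None

# Synthetic silhouette-width signal: one gait cycle every 30 frames, plus noise.
t = np.arange(150)
rng = np.random.default_rng(0)
widths = 40 + 10 * np.sin(2 * np.pi * t / 30) + 0.5 * rng.standard_normal(150)
period = estimate_gait_period(widths)
print(period)  # roughly 30 frames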

Features from this gait cycle are then extracted. In this respect, algorithms can be divided into two groups: model-based and model-free.

Model-based methods of gait recognition take your gait period and attempt to build a model of your movements. These models, for example, can be constructed by representing the person as a stick-figure skeleton with joints or as being composed of cylinders. Then, numerous parameters are calculated to describe the model. For example, the method proposed in this publication from 2001 calculates distance between the head and feet, the head and pelvis, the feet and pelvis, and the step length of a subject to describe a simple model. Another model is depicted in the image below:

An example of a biped model with 5 different parameters as proposed in this solution from 2012

Model-free methods work on extracted features directly. Here, undoubtedly the most interesting and most widely used feature extracted from silhouettes is that of the Gait Energy Image (GEI). It was first proposed in 2006 in a paper entitled “Individual Recognition Using Gait Energy Image” (IEEE transactions on pattern analysis and machine intelligence 28, no. 2 (2006): 316-322).

Note: the Pattern Analysis and Machine Intelligence (PAMI) journal is one of the best in the world in the field. Publishing there is a feat worthy of praise. 

The GEI is used in almost all of the top gait recognition algorithms because it is (perhaps surprisingly) intuitive, not too prone to noise, and simple to grasp and implement. To calculate it, frames from one gait cycle are superimposed on top of each other to give an “average” image of your gait. This calculation is depicted in the image below where the GEI for two people is shown in the last column.

The GEI can be regarded as a unique signature of your gait. And although it was first proposed way back in 2006, it is still widely used in state-of-the-art solutions today.

Examples of two calculated GEIs for two different people shown in the far right column. (image taken from the original publication)
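Computationally, the GEI is just the pixel-wise average of the aligned binary silhouette frames over one gait cycle. A minimal NumPy sketch, with random frames standing in for real, aligned silhouettes:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a (frames, H, W) stack of aligned binary silhouettes into a GEI."""
    stack = np.asarray(silhouettes, dtype=float)  # silhouette values in {0, 1}
    gei = stack.mean(axis=0)                      # pixel-wise average over the cycle
    return (gei * 255).astype(np.uint8)           # grey-level image in 0..255

# Toy example: 30 random "silhouette" frames of a 64x44 subject.
rng = np.random.default_rng(0)
frames = (rng.random((30, 64, 44)) > 0.5).astype(np.uint8)
gei = gait_energy_image(frames)
print(gei.shape, gei.dtype)  # (64, 44) uint8
```

Bright pixels in the result are body parts that barely move (head, torso); the grey fringes around the legs and arms encode the motion itself.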

3. Classification

Once step 2 is complete, identification of subjects can take place. Standard classification techniques can be used here, such as k-nearest neighbours (KNN) and the support vector machine (SVM). These are common techniques used whenever one is dealing with features; they are not constrained to computer vision. Indeed, any other field that uses features to describe its data will also utilise these techniques to classify/identify that data. Hence, I will not dwell on this step any longer. I will, however, refer you to a state-of-the-art review of gait recognition from 2010 that lists some more of these common classification techniques.
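As an illustration, identification can be as simple as nearest-neighbour matching on flattened GEIs. The sketch below uses a hand-rolled 1-NN with Euclidean distance and fake GEIs; in practice you would use a proper KNN or SVM implementation and real gallery data:

```python
import numpy as np

def nearest_neighbour_id(query_gei, gallery_geis, gallery_labels):
    """Identify a subject by the closest gallery GEI (1-NN, Euclidean distance)."""
    q = query_gei.ravel().astype(float)
    dists = [np.linalg.norm(q - g.ravel().astype(float)) for g in gallery_geis]
    return gallery_labels[int(np.argmin(dists))]

# Toy gallery: two subjects with deliberately distinctive (fake) GEIs.
gei_alice = np.full((64, 44), 200, np.uint8)
gei_bob = np.full((64, 44), 50, np.uint8)

query = np.full((64, 44), 190, np.uint8)  # a noisy capture of Alice's gait
match = nearest_neighbour_id(query, [gei_alice, gei_bob], ["alice", "bob"])
print(match)  # alice
```

Real systems replace the raw Euclidean distance with a learned metric or a trained classifier, but the enrol-then-match structure is the same.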

So, how good is gait recognition then?

We’ve briefly taken a look at how gait recognition algorithms work. Let’s now take a peek at how good they are at recognising people.

We’ll first turn to some recent news. Only two months ago (October 2017), Chinese researchers announced that they had developed the best gait recognition algorithm to date. They claim that their system works with the subject up to 50 metres away and that detection times have been reduced to just 200 milliseconds. If you read the article, you will notice that no data or results are presented, so we can’t really investigate their claims. We have to turn to academia for hard evidence of what we’re seeking.

“GaitGAN: invariant gait feature extraction using generative adversarial networks” (Yu et al., IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 30-37, 2017) is the latest top publication on this topic. I won’t go through their proposed algorithm (it is model-free and uses the GEI); I will just present their results, which are in fact quite impressive.

To test their algorithm, the authors used the CASIA-B dataset, one of the largest publicly available datasets for gait recognition. It contains video footage of 124 subjects walking across a room, captured from various angles ranging from front-on to side-on to rear views. Not only this, but the walking is repeated by the same people while wearing a coat and then while carrying a backpack, which adds additional elements of difficulty to gait recognition. And the low resolution of the videos (320×240, a decent resolution in 2005 when the dataset was released) makes them ideal for testing gait recognition algorithms, considering that CCTV footage is generally of low quality too.

Three example screenshots from the dataset are shown below. The frames are of the same person from a side-on view. The second and third images show the subject wearing a coat and carrying a bag, respectively.

Example screenshots from the CASIA B dataset of the same person walking.

Recognition rates with front-on views with no bag or coat linger around 20%-40% (depending on the height of the camera). Rates then gradually increase as the angle nears the side-on view (that gives a clear silhouette). At the side-on view with no bag or coat, recognition rates reach an astounding 98.75%! Impressive and surprising.

When it comes to analysing the clips with the people carrying a bag and wearing a coat, results are summarised in one small table that shows only a few indicative averages. Here, recognition rates obviously drop but the top rates (obtained with side-on views) persist at around the 60% mark.

What can be deduced from these results is that if the camera distance and angle and other parameters are ideal (e.g. the subject is not wearing/carrying anything concealing), gait recognition works amazingly well for a reasonably sized subset of people. But once ideal conditions start to change, accuracy gradually decreases to (probably) inadequate levels.

And I will also mention (something you may have already gathered) that these algorithms only work if the subject is acting normally. That is, they work if the subject is not changing the way he usually walks, for example by walking faster (maybe as a result of stress) or by consciously trying to fool gait recognition algorithms (like we saw in The Punisher!).

However, an accuracy rate of 98.75% with side-on views shows great potential for this form of identification and because of this, I am certain that more and more research will be devoted to this field. In this respect, I will keep you posted if I find anything new and interesting on this topic in the future!


Gait recognition is another form of biometric identification – a little like iris scanning and fingerprints. Interesting computer vision techniques are utilised on single-camera footage to obtain sometimes 99% recognition results. These results depend on such things as camera angles and whether subjects are wearing concealing clothes or not. But much like other recognition techniques (e.g. face recognition), this is undoubtedly a field that will be further researched and improved in the future. Watch this space.

