Full self-driving is not yet a reality. To make it safe and reliable, we need a huge number of images captured while driving. Moreover, labeling a single video frame requires about an hour of human work. The data collection problem can be addressed by using a simulator to produce synthetic labeled data.
We use a Generative Adversarial Network (GAN) to align synthetic data with real data.
Why do we need this data?
This data is used to train a neural network to recognize every element captured by the camera mounted on the car (pedestrians, road signs, traffic lights, and so on). This task is called Semantic Segmentation.
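As a toy illustration of what a semantic segmentation model produces (the class list and scores below are made up for this sketch, not taken from any real model): the network outputs one score map per class, and the predicted label map is the per-pixel argmax over those scores.

```python
import numpy as np

# Hypothetical class list for a driving scene (illustrative only)
CLASSES = ["road", "pedestrian", "traffic_sign", "traffic_light"]

def segment(score_maps: np.ndarray) -> np.ndarray:
    """Turn per-class score maps of shape (num_classes, H, W)
    into a label map of shape (H, W) via a per-pixel argmax."""
    return score_maps.argmax(axis=0)

# Fake network output for a tiny 2x3 "image": one score map per class
rng = np.random.default_rng(0)
scores = rng.random((len(CLASSES), 2, 3))
labels = segment(scores)
print(labels.shape)  # (2, 3): one class index per pixel
```

A real segmentation network (such as the ICNet shown below) produces these score maps with convolutional layers, but the final step of turning scores into a label map is exactly this argmax.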
Demo video of ICNet on the Cityscapes dataset
Why do we need to align real and synthetic data?
These two kinds of data can differ greatly from each other: at the pixel level, colors, reflections, and brightness may change drastically. This problem is called Domain Shift, and it makes the model inaccurate. The situation is similar to a teacher assigning you exercises to prepare for the final exam, but the exam turns out to contain a different type of exercise: despite having studied, you can't solve it. We want to train a model on data sampled from a source distribution (synthetic images) that is different from the target distribution the model will actually be used on (real-world data).
This task is called Domain Adaptation.
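A tiny numerical sketch of why domain shift hurts (all numbers here are synthetic and purely illustrative): a classifier fitted on a source distribution loses accuracy when evaluated on a shifted target distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n, shift=0.0):
    """Two 1-D Gaussian classes; `shift` moves both class means,
    simulating a domain shift between synthetic and real data."""
    x0 = rng.normal(0.0 + shift, 1.0, n)   # class 0
    x1 = rng.normal(2.0 + shift, 1.0, n)   # class 1
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# "Train": pick the midpoint between class means on source data
xs, ys = make_data(1000, shift=0.0)
threshold = (xs[ys == 0].mean() + xs[ys == 1].mean()) / 2

def accuracy(x, y):
    return ((x > threshold) == y).mean()

xt, yt = make_data(1000, shift=1.5)  # shifted "real-world" domain
print(accuracy(xs, ys))  # high on the source domain
print(accuracy(xt, yt))  # noticeably lower on the shifted target
```

Domain adaptation methods try to close exactly this gap, either by aligning the two distributions or by making the model's features invariant to the shift.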
Source: Multi-Layer domain adaptation method for rolling bearing fault diagnosis
How do we align real and synthetic data?
Nowadays, many domain adaptation solutions are based on Generative Adversarial Networks (GANs), one of the hottest topics in AI. Yann LeCun, one of the fathers of Deep Learning, called them
“the most interesting idea in the last 10 years of machine learning.”
We reduced the domain shift by applying the style of real images on top of the synthetic ones, using a particular variant of GAN called CycleGAN. In this way, we reduced the domain shift by 20%.
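CycleGAN's key ingredient, on top of the usual adversarial losses, is the cycle-consistency loss: translating a synthetic image to the real style and back should recover the original image. Here is a minimal sketch of that loss with toy, hypothetical "generators" (real CycleGAN generators are convolutional networks; the brightness-shift functions below are invented for illustration):

```python
import numpy as np

def cycle_consistency_loss(x, G_syn2real, G_real2syn):
    """L1 cycle-consistency loss: ||G_real2syn(G_syn2real(x)) - x||_1.
    In CycleGAN this term pushes the two generators to be
    (approximately) inverse mappings of each other."""
    reconstructed = G_real2syn(G_syn2real(x))
    return np.abs(reconstructed - x).mean()

# Toy "generators": a brightness shift and its exact inverse
G_syn2real = lambda img: img * 0.9 + 0.05    # restyle synthetic -> real
G_real2syn = lambda img: (img - 0.05) / 0.9  # restyle real -> synthetic

img = np.random.default_rng(0).random((8, 8, 3))  # fake synthetic image
loss = cycle_consistency_loss(img, G_syn2real, G_real2syn)
print(loss)  # near 0: these toy generators are exact inverses
```

During training, this loss is minimized jointly with the adversarial losses of the two discriminators, which is what keeps the restyled images semantically faithful to the originals.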
We also ran other experiments, such as adversarial training and self-supervision tasks. You can find further details in our full article, which is available for free here: