Facing up to Robots
If you were to teach a robot how to behave like a human, what would you focus on? For Dr Ping Luo, Assistant Professor in the Department of Computer Science, the answer is quite literally staring us in the face. Very often, our emotions show on our faces. If you can read the signals sent by different facial expressions, the underlying emotions become clear. For a robot, understanding these visual cues is the key to understanding human behaviour.
Dr Luo’s aim is not just to enable robots to interpret and understand human behaviour, but to use this understanding to build human/AI paired systems that outperform either alone, creating the ‘brain’ of what he calls ‘social robots’. Social robots understand and mimic human behaviour to the extent that they can act like a human.
Dr Luo was named one of the ‘Innovators Under 35 Asia Pacific’ by MIT Technology Review for his work in computer vision and building AI technologies that can understand human behaviour. He has already filed more than 80 patents in different countries, and his technology is being used in smart cities, smartphones and autonomous vehicles. One practical application of his work is in Harbin’s metro stations, where his facial recognition technology means that no travel card needs to be scanned for access: cameras scan passengers’ faces and the fare is deducted from their cards automatically.
His technology is based on deep learning and was developed using celebrity data. By trawling social media, Dr Luo gathered thousands of facial images of celebrities and used these as the data needed for deep learning. The result is the large-scale CelebFaces Attributes (CelebA) Dataset, which is the biggest of its kind in the world and the most widely used database for generating face images. “The original purpose was to achieve accuracy of the face recognition system,” he explained.
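For readers curious what deep learning on CelebA looks like in practice, the sketch below trains a small facial-attribute classifier on the publicly released dataset using PyTorch and torchvision. It is an illustration only, not Dr Luo’s actual system: the tiny network, image size and hyperparameters are assumptions made for the example.

```python
# Minimal sketch (illustrative only): a facial-attribute classifier on CelebA.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# CelebA ships with 40 binary attribute labels per face image
# (e.g. "Smiling", "Eyeglasses"), which is what makes it useful
# for learning to read facial cues.
train_set = datasets.CelebA(root="data", split="train",
                            target_type="attr", transform=transform,
                            download=True)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

# A deliberately small CNN for illustration; real systems use far deeper models.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 40),              # one logit per attribute
)

criterion = nn.BCEWithLogitsLoss()   # multi-label: each attribute is a yes/no
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

model.train()
for images, attrs in loader:
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, attrs.float())  # attributes arrive as 0/1 integers
    loss.backward()
    optimizer.step()
```

Because each face can carry several attributes at once, the loss treats every attribute as an independent yes/no prediction rather than choosing a single class.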
Dr Luo has also developed DeepFashion, a comprehensive fashion dataset built from 801,000 different fashion images. In technology terms, DeepFashion is similar to face image recognition, but teaching a robot to recognise fashion images is more difficult than recognising faces, as faces have a generally rigid structure while fabric changes shape as it flows around the body. DeepFashion can enable consumers to virtually try on clothes, and in the future, computers could even design clothes for individual users.
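As a rough illustration of the same idea applied to clothing, the sketch below trains a simple clothing-category classifier on a DeepFashion-style collection of images. The folder layout ("fashion/train/<category>/...") and the small network are assumptions made for this example and do not reflect the DeepFashion benchmark’s own annotation files or evaluation protocol.

```python
# Minimal sketch (illustrative only): a clothing-category classifier.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: one sub-folder per clothing category,
# e.g. fashion/train/dress/img001.jpg
train_set = datasets.ImageFolder("fashion/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

num_classes = len(train_set.classes)
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, num_classes),      # one score per clothing category
)

criterion = nn.CrossEntropyLoss()    # single category per garment image
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```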
Ultimately, he believes social robots could perform important roles in providing care to people in need, for example by helping elderly people to move around in a hospital or home setting.
Dr Luo’s vision, and his next challenge, is to build a robot that looks appealing enough to be accepted by humans, which is proving a tall order. “It’s very difficult to build a human-like robot,” he said.