Robots will soon be ready to decipher our body language



American researchers have developed a system that tracks the micro-movements of our bodies, teaching artificial intelligences to recognize our non-verbal emotions and intentions.

Machines are already able to recognize (almost) everything you say. But we also communicate with our bodies, as when we point a finger at an object or in a direction. That is information robots are unable to grasp. But perhaps not for much longer.

A team of researchers at Carnegie Mellon University has undertaken to train them to decipher our non-verbal signals, as reported by the IEEE Spectrum website. The goal: to enrich interaction between humans and machines, pave the way for more interactive virtual and augmented reality, and develop more intuitive user interfaces.

Voice assistants such as Alexa and Siri, equipped only with microphones, simply take you at your word, attending only to the literal meaning of what you say. Even Pepper, SoftBank Robotics' humanoid robot, which carries a camera to read our emotions, relies on basic signals to infer our mood: a smile, the volume of a voice. It is in no way able to apprehend your attitudes with any finesse.


A rig of 500 cameras

The Carnegie Mellon team therefore set out to develop a computer system, called OpenPose, to track the micro-movements of the whole body. It can follow not only a person's head and torso but also each of their fingers. Doing so required a rig of 500 cameras capturing every pose from different angles, which allowed the team to build up a vast database. All the images, captured in 2D, are passed through a "keypoint detector" that identifies and labels the different parts of the body. The keypoints are then triangulated across views, helping the tracking algorithms understand how each pose appears from different angles. With this data, the system can determine how the whole hand presents itself in a given position, even when some fingers are hidden.
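To make the triangulation step concrete, here is a minimal sketch using OpenCV's cv2.triangulatePoints, which recovers a 3D point from the same keypoint detected in two calibrated views. The projection matrices and pixel coordinates below are illustrative placeholders, not the actual calibration of the Carnegie Mellon camera dome.

```python
import numpy as np
import cv2

# Two 3x4 projection matrices (intrinsics x extrinsics).
# Placeholder values: camera 1 at the origin, camera 2 shifted 10 cm.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])]).astype(np.float64)

# The same fingertip keypoint as detected in each camera view (pixels).
pt1 = np.array([[320.0], [240.0]])  # shape (2, 1): one point in view 1
pt2 = np.array([[300.0], [240.0]])  # the matching point in view 2

# OpenCV returns homogeneous coordinates (x, y, z, w); divide by w.
point_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
point_3d = (point_h[:3] / point_h[3]).ravel()
print("Triangulated 3D position:", point_3d)
```

Repeated over hundreds of views and every labeled keypoint, this is what lets the system learn how a hand appears from any angle, even when some fingers are occluded in a given view.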

Tracking player movements… without sensors


With all this knowledge, the system can now operate in a much lighter configuration: a single camera and a laptop. The researchers have published their code to encourage public experimentation.
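As an illustration of what experimenting with the published code can look like, here is a sketch modeled on the Python tutorials in the OpenPose repository. The exact API (the pyopenpose module, the Datum structure, the VectorDatum wrapper) varies between releases and requires building OpenPose with its Python bindings enabled, so treat this as a pattern rather than copy-paste code.

```python
import cv2
import pyopenpose as op  # available after building OpenPose with BUILD_PYTHON=ON

# Point OpenPose at its pretrained models; also enable hand keypoints.
params = {"model_folder": "models/", "hand": True}

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

# Run the detector on a single image (a webcam frame would work the same way).
datum = op.Datum()
datum.cvInputData = cv2.imread("frame.jpg")
opWrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints holds one (x, y, confidence) triple per body joint per person;
# handKeypoints is a pair of arrays for the left and right hands.
print("Body keypoints:", datum.poseKeypoints)
print("Left hand keypoints:", datum.handKeypoints[0])
```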




In virtual reality, this could make it possible to track players' movements far more finely, without any sensors. An autonomous car could tell whether a pedestrian is about to cross. During sports broadcasts, each player's exact position could be determined in real time. And in medicine, conditions such as autism or depression could be better assessed. The potential applications are legion. As for domestic assistants, they could anticipate your wishes without your needing to voice them. Hence this ironic remark from IEEE Spectrum: "When you are crying silently, face in your hands, because a robot has taken your job, it will be able to hand you a handkerchief."

