The improvement of robots’ speech recognition
Technology has accomplished remarkable things over the last decade. Fields such as Artificial Intelligence (AI), Deep Learning and the Internet of Things (IoT) are constantly evolving to improve society. It is now normal to encounter digital and automated devices, or even robots, as we go about our daily activities. For example, many industrial factories are starting to use robots in their processes to streamline work.
In today’s article we will talk about the improvement of robots’ speech recognition by modelling human auditory processing, a major scientific and technological advance. Are you into Artificial Intelligence? Then you should keep reading!
Human auditory processing in robots
Humans have an incredible capacity for isolating sounds in places like crowded city squares or busy supermarkets. This is a very complex process that our auditory system handles with ease. We can also separate individual sound sources from the background, locate them in space, and sense their motion patterns. New technological devices such as robots are not capable of this yet. However, that is about to change.
This is because a group of researchers conducted a study inspired by this neurophysiology, which they shared in the paper ‘Enhanced Robot Speech Recognition Using Biomimetic Binaural Sound Source Localization’. Their design tests the influence of physiognomy (the physical features of the head and outer ears) on the components of sound recognition, such as sound source localization (SSL) and automatic speech recognition (ASR).
How does this process work?
The researchers of this study also explain how this process works. The torso, head, and pinnae (the external part of the ears) absorb and reflect sound waves as they reach the body, modifying the waves’ frequency content depending on the source’s location. The sound waves then travel to the cochlea (the spiral cavity of the inner ear) and the organ of Corti within it, which produces nerve impulses in response to the sound vibrations.
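As a toy illustration (not code from the paper), this direction-dependent filtering by the torso, head and pinnae is commonly modelled as convolving the source signal with a head-related impulse response (HRIR). The two “HRIRs” below are invented for this sketch, as are the sample rate and signal: the far ear hears the sound slightly later, quieter and with less high-frequency content.

```python
# Toy model: the body filters sound differently for each ear.
import numpy as np

fs = 16_000                                   # sample rate (Hz), assumed
t = np.arange(0, 0.05, 1.0 / fs)
source = np.sin(2 * np.pi * 1000 * t)         # a 1 kHz source somewhere to the left

hrir_near = np.zeros(32)
hrir_near[0] = 1.0                            # near (left) ear: sound arrives directly
hrir_far = np.zeros(32)
hrir_far[8], hrir_far[9] = 0.6, 0.2           # far ear: delayed, attenuated, smeared

left_ear = np.convolve(source, hrir_near)[: len(source)]
right_ear = np.convolve(source, hrir_far)[: len(source)]

# The time and level differences between the two ear signals are exactly
# the cues exploited by the MSO and LSO, discussed next.
print("peak level, left ear :", round(float(np.max(np.abs(left_ear))), 2))
print("peak level, right ear:", round(float(np.max(np.abs(right_ear))), 2))
```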
Through the auditory nervous system, these nerve impulses are delivered to the cochlear nucleus. From there, information is sent to two structures: the medial superior olive (MSO) and the lateral superior olive (LSO). The MSO uses timing differences between the two ears to determine the left-right angle of the sound’s source, while the LSO uses intensity differences to do the same. The outputs of both converge in the brain’s inferior colliculus (IC).
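To make the MSO and LSO ideas concrete, here is a minimal sketch (again, not the authors’ code) of how timing and intensity differences between two microphones can each point to a sound’s direction. The sample rate, microphone spacing and helper names are assumptions chosen purely for illustration.

```python
# MSO-like timing cue and LSO-like intensity cue from a two-channel recording.
import numpy as np

SAMPLE_RATE = 16_000          # Hz, assumed
MIC_DISTANCE = 0.15           # metres between the two "ears", assumed
SPEED_OF_SOUND = 343.0        # m/s

def itd_angle(left: np.ndarray, right: np.ndarray) -> float:
    """MSO analogue: estimate azimuth from the interaural time difference."""
    # Cross-correlate the channels and find the lag where they match best.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # negative lag => left channel leads
    lead = -lag / SAMPLE_RATE                  # time by which left leads right (s)
    sin_theta = np.clip(lead * SPEED_OF_SOUND / MIC_DISTANCE, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))  # positive = source on the left

def ild_cue(left: np.ndarray, right: np.ndarray) -> float:
    """LSO analogue: interaural level difference in decibels (sign gives the side)."""
    rms_l = np.sqrt(np.mean(left ** 2)) + 1e-12
    rms_r = np.sqrt(np.mean(right ** 2)) + 1e-12
    return float(20.0 * np.log10(rms_l / rms_r))

if __name__ == "__main__":
    # Synthetic test: a 500 Hz tone arriving earlier and louder at the left ear.
    t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
    delay = int(0.0003 * SAMPLE_RATE)          # ~0.3 ms interaural delay
    tone = np.sin(2 * np.pi * 500 * t)
    left = np.pad(tone, (0, delay))
    right = 0.7 * np.pad(tone, (delay, 0))     # later and quieter on the right
    print(f"ITD-based azimuth: {itd_angle(left, right):.1f} degrees")
    print(f"ILD cue: {ild_cue(left, right):.1f} dB (positive = left is louder)")
```

On the synthetic tone, the script should report an azimuth of roughly 35 degrees and an ILD of about 3 dB, both pointing to the left: precisely the kind of cues the MSO and LSO pass on to the inferior colliculus.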
Machine Learning and robotics
The next step for the researchers was to replicate this human auditory process in robots. They designed a machine learning framework that processes sound recorded by microphones integrated into two robot heads: iCub and Soundman. The framework has four parts. First, an SSL element that decomposes audio into sets of frequencies and generates neural impulses. Second, an MSO model that is sensitive to sounds produced at certain angles. Third, an IC-inspired layer that combines signals from the MSO and LSO models. And finally, an additional neural network that minimizes the noise produced by the robot’s own joints and motors.
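To make that structure concrete, here is a minimal end-to-end sketch of a four-stage pipeline in the spirit of the one described above. It is not the authors’ implementation: the FFT band-split, the cue combination and the ego-noise gate are simplified stand-ins, and every parameter (sample rate, frequency bands, microphone spacing) is an assumption chosen for illustration.

```python
# Simplified four-stage pipeline: filterbank -> MSO/LSO cues -> IC fusion,
# with a crude stand-in for the ego-noise suppression stage.
import numpy as np

FS = 16_000                                       # sample rate (Hz), assumed
BANDS = [(100, 500), (500, 1500), (1500, 4000)]   # coarse frequency bands, assumed
MIC_DIST, C = 0.15, 343.0                         # assumed mic spacing (m), speed of sound (m/s)

def bandpass(x, lo, hi):
    """Stage 1 (SSL front end): crude FFT band-pass as a stand-in for a
    cochlea-style filterbank that splits audio into frequency channels."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / FS)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def mso_itd(left, right):
    """Stage 2 (MSO model): per-band timing cue via cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return -lag / FS                              # seconds by which the left channel leads

def lso_ild(left, right):
    """LSO model: per-band intensity cue in decibels."""
    return 20.0 * np.log10((np.std(left) + 1e-12) / (np.std(right) + 1e-12))

def suppress_ego_noise(x, threshold=0.01):
    """Stage 4 stand-in: the framework uses an extra neural network to attenuate
    the robot's own motor noise; a simple noise gate marks the idea here."""
    return np.where(np.abs(x) > threshold, x, 0.0)

def ic_fusion(left, right):
    """Stage 3 (IC-inspired layer): combine MSO and LSO cues across bands
    into a single azimuth estimate (energy-weighted average)."""
    angles, weights = [], []
    for lo, hi in BANDS:
        l, r = bandpass(left, lo, hi), bandpass(right, lo, hi)
        itd, ild = mso_itd(l, r), lso_ild(l, r)
        # The ITD fixes the angle; the ILD only nudges it toward the louder side here.
        sin_t = np.clip(itd * C / MIC_DIST, -1.0, 1.0)
        angles.append(np.degrees(np.arcsin(sin_t)) + 0.5 * np.sign(ild))
        weights.append(np.sum(l ** 2) + np.sum(r ** 2))
    return float(np.average(angles, weights=weights))

if __name__ == "__main__":
    # Synthetic check: a 700 Hz tone leading (and louder) on the left channel.
    t = np.arange(0, 0.2, 1.0 / FS)
    tone = np.sin(2 * np.pi * 700 * t)
    d = 4                                          # 0.25 ms interaural delay
    left = suppress_ego_noise(np.pad(tone, (0, d)))
    right = suppress_ego_noise(0.8 * np.pad(tone, (d, 0)))
    print(f"Estimated azimuth: {ic_fusion(left, right):.1f} degrees (positive = left)")
```

Each stage is reduced here to a few lines so that the flow of information, from raw two-channel audio to a single direction estimate, is easy to follow; the models in the study itself are, of course, far more sophisticated.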
To test the framework’s performance, the experts used Soundman to establish SSL and ASR baselines, and iCub’s head to determine the effect of resonance. Then, a group of thirteen speakers blasted noise towards the heads for them to detect and process. The results showed that, in some cases, the information from SSL could “improve considerably”. The researchers also found that performance could improve when the pinnae were removed from the robot’s head.
Artificial Intelligence and Deep Learning
The researchers highlight the importance of this investigation. They also claim that their system “can be easily integrated with recent methods to enhance ASR in reverberant environments without adding computational cost.” This is a huge step forward for the Artificial Intelligence field. Many other investigations are likewise changing what we thought we knew and improving people’s lives. Do you want to be part of the group of experts behind these kinds of achievements? We know what you should do.
Join the Master in Artificial Intelligence and Deep Learning from the University of Alcalá! With this master’s degree you will gain a thorough understanding of the formal foundations of Machine Learning and its implications for human-machine interaction. What is more, you will also learn how to use high-level languages to develop real AI-based applications. Could you ask for anything else? Join now, you will not regret it!