Researchers at MIT have created the first artificial neural network that can recognize sounds the way a human ear does. The scientists focused on two auditory tasks: recognizing speech and music. Their model was “trained” on thousands of two-second clips containing words spoken by a person or musical notes. After many thousands of examples, the artificial intelligence learned to perform the tasks with the same precision as a human listener.
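For readers curious what such a dual-task setup can look like in code, the following is a minimal, hypothetical sketch in PyTorch: a small network with a shared front end and two task-specific heads, one for words and one for music, trained on spectrogram-like representations of two-second clips. The layer sizes, class counts, and random stand-in data are assumptions made purely for illustration; this is not the authors' actual architecture.

```python
# Illustrative sketch only: shared convolutional front end with two task heads,
# trained on 2-second audio clips represented as spectrogram-like inputs.
# All dimensions, class counts, and the random data are assumptions.

import torch
import torch.nn as nn

N_FREQ, N_TIME = 64, 200       # assumed spectrogram size for a 2 s clip
N_WORDS, N_GENRES = 100, 10    # assumed number of word / music classes

class SharedAudioNet(nn.Module):
    def __init__(self):
        super().__init__()
        # shared stages, loosely analogous to early auditory processing
        self.shared = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        # task-specific heads branching off the shared representation
        self.word_head = nn.Linear(32 * 4 * 4, N_WORDS)
        self.music_head = nn.Linear(32 * 4 * 4, N_GENRES)

    def forward(self, x):
        h = self.shared(x)
        return self.word_head(h), self.music_head(h)

model = SharedAudioNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in data: 8 "clips", each with a word label and a music label.
clips = torch.randn(8, 1, N_FREQ, N_TIME)
word_labels = torch.randint(0, N_WORDS, (8,))
music_labels = torch.randint(0, N_GENRES, (8,))

# One training step on the combined loss of the two tasks.
word_logits, music_logits = model(clips)
loss = loss_fn(word_logits, word_labels) + loss_fn(music_logits, music_labels)
opt.zero_grad()
loss.backward()
opt.step()
print(f"one training step done, combined loss = {loss.item():.3f}")
```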
The study, which appeared in the journal “Neuron” in April, also offers evidence that the human auditory cortex is organized hierarchically, much like the visual cortex: basic sensory information is processed at the earliest stages, while more advanced functions, such as extracting the meaning of a word, are carried out at later stages.