Scientists fine-tune brain-to-speech translator

David Moses and Edward Chang
Eddie Chang (right), a neuroscientist at the University of California at San Francisco, discusses findings with postdoctoral researcher David Moses. (UCSF Photo / Noah Berger)

Neuroscientists have demonstrated a computerized system that can determine, in real time, what a person is saying based on brain activity rather than the spoken audio itself.

The technology is being supported in part by Facebook Reality Labs, which is aiming to create a non-invasive, wearable brain-to-text translator. But in the nearer term, the research is more likely to help locked-in patients communicate through thought.

“They can imagine speaking, and then these electrodes could maybe pick this up,” said Christof Koch, chief scientist and president of the Seattle-based Allen Institute for Brain Science, who was not involved in the study.

The latest experiments, reported today in the open-access journal Nature Communications, were conducted by a team at the University of California at San Francisco on three epilepsy patients who volunteered to take part. The work built on earlier experiments that decoded brain patterns into speech, but not in real time.

“Real-time processing of brain activity has been used to decode simple speech sounds, but this is the first time this approach has been used to identify spoken words and phrases,” UCSF postdoctoral researcher David Moses, the study’s principal investigator, said in a news release.

Get the full story on GeekWire.

By Alan Boyle

Mastermind of Cosmic Log, contributor to GeekWire and Universe Today, author of "The Case for Pluto: How a Little Planet Made a Big Difference," past president of the Council for the Advancement of Science Writing.
