On Monday, scientists announced a novel artificial intelligence system that can non-invasively translate brain activity into a continuous stream of text, whether the person is listening to a story, imagining one, or watching a silent film.
A four-member research group at the University of Texas at Austin, including an Indian doctoral student, found that the system can generate comprehensible word sequences from brain activity captured in fMRI scans.
The researchers suggested the device could benefit conscious stroke patients who cannot speak.
The system uses computational techniques similar to those that power OpenAI’s ChatGPT, a conversational AI system.
“The goal of language decoding is to take recordings of a user’s brain activity and predict the words the user is hearing, saying, or imagining,” said study leader Jerry Tang, a computer science PhD student. “This proves that non-invasive recordings can decode language.”
Tang and his team are the first to decode continuous language from non-invasive brain recordings using fMRI; earlier language-decoding systems have required surgically implanted electrodes.
The system must be customized to each user. Alexander Huth, an assistant professor of neuroscience and computer science at the university, said it works only after a person has spent about 15 hours in an MRI scanner, lying still and listening attentively to stories.
The researchers use these training recordings to build a model that predicts how that user’s brain will respond to new stories.
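In broad terms, decoding then runs in reverse: a language model proposes candidate word sequences, and the per-user model predicts the fMRI response each candidate would evoke; candidates whose predicted responses best match the actual recording are kept. The following minimal Python sketch illustrates that scoring loop; the linear encoding model and every name in it (predict_brain_response, propose_continuations, featurize) are illustrative assumptions, not the authors’ actual code:

```python
# Illustrative sketch only: a simple linear encoding model plus one
# beam-search step, to show the shape of encoding-model decoding.
import numpy as np

def predict_brain_response(weights: np.ndarray, text_features: np.ndarray) -> np.ndarray:
    """Hypothetical linear encoding model: maps language features of a
    candidate transcript to a predicted fMRI voxel response."""
    return text_features @ weights

def score_candidate(weights, candidate_features, recorded_fmri) -> float:
    """Higher score = the predicted response is closer to the recording."""
    predicted = predict_brain_response(weights, candidate_features)
    return -float(np.linalg.norm(predicted - recorded_fmri))

def decode_step(weights, recorded_fmri, beam, propose_continuations, featurize):
    """One beam-search step: a language model (propose_continuations)
    extends each candidate transcript, and we keep the candidates whose
    predicted brain responses best match the actual recording."""
    scored = []
    for transcript in beam:
        for word in propose_continuations(transcript):
            candidate = transcript + " " + word
            scored.append((score_candidate(weights, featurize(candidate), recorded_fmri), candidate))
    scored.sort(reverse=True)
    return [candidate for _, candidate in scored[: len(beam)]]
```

Under this framing, the 15 hours of scans are spent fitting the weights of the per-user encoding model, which is why a decoder trained on one person cannot simply be run on another.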
Despite errors, the AI system’s output captures the gist of what is said or thought. For one participant who heard “I don’t have my driver’s license yet”, the decoder produced “She has not even started to learn to drive.”
The study was published Monday in Nature Neuroscience. The other co-authors are Amanda LeBel, a former researcher in Huth’s lab, and Shailee Jain, a graduate student who earned a BTech at the National Institute of Technology, Surathkal, before moving to the US.
In another example, a participant heard: “I got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back but instead finding only darkness.”
The decoder rendered this as: “I just continued to walk up to the window and open the glass I stood on my toes and peered out I didn’t see anything and looked up again I saw nothing.”
Huth called this a “real leap forward compared to what’s been done before, which is typically single words or short sentences” for non-invasive decoding.
The new method, by contrast, decodes continuous language over “extended periods with complicated ideas”.
In its current form, the system requires an fMRI machine, so it cannot be used outside the laboratory. Huth said the team is testing the technology with portable brain-imaging systems, including functional near-infrared spectroscopy (fNIRS).
The scientists caution that the technology cannot read thoughts without substantial training and active cooperation from the user, and that an unwilling user can “sabotage” it.
Even so, they argue, societies should regulate such technologies now, in anticipation of more advanced future versions that could overcome these mind-reading obstacles.
For now, each decoder is also tied to the person it was trained on. “We can’t train one and run it on another,” Tang added.
And a user can sabotage the decoder simply by imagining a different story while listening to one.