Researchers at the University of Texas at Austin developed the system in part by using a transformer model, which is similar to those that support Google’s chatbot Bard and OpenAI’s chatbot ChatGPT.
The study’s participants trained the decoder by listening to several hours of podcasts within an fMRI scanner.
Once the AI system is trained, it can generate a stream of text while the participant listens to, or imagines telling, a new story. The resulting text is not an exact transcript; rather, the researchers designed it to capture general thoughts or ideas.
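To make the mechanism concrete, here is a minimal, hypothetical sketch of how such a decoder could work: a language model proposes candidate next words, and a learned "encoding model" scores each candidate by how well it predicts the measured brain activity. The function names, toy embeddings, and random data below are illustrative assumptions, not the researchers' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "dog", "ran", "home", "she", "said", "slowly"]
EMBED = {w: rng.normal(size=8) for w in VOCAB}  # toy word embeddings

def language_model_propose(prefix, k=3):
    """Stand-in for a transformer language model: return k plausible next words."""
    return list(rng.choice(VOCAB, size=k, replace=False))

def encoding_model_predict(words):
    """Toy 'encoding model': predict a brain-activity pattern from recent words."""
    return np.mean([EMBED[w] for w in words[-4:]], axis=0)

def decode_step(prefix, measured_activity):
    """Keep the candidate whose predicted brain response best matches the
    measured response (smallest squared error)."""
    candidates = language_model_propose(prefix)
    errors = [np.sum((encoding_model_predict(prefix + [w]) - measured_activity) ** 2)
              for w in candidates]
    return candidates[int(np.argmin(errors))]

# Decode a short word stream from a (simulated) sequence of brain-activity frames.
decoded = ["the"]
for frame in rng.normal(size=(5, 8)):
    decoded.append(decode_step(decoded, frame))
print(" ".join(decoded))
```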
Participants first listened to podcasts for about 16 hours in an fMRI scanner, which gave the AI the data it needed to learn each person's brain activity. Later, they went back into the scanner to put their brains to the test, telling the same story in their heads without actually saying anything out loud. Because the researchers already knew, approximately, which words the participants were going to say and when those words would occur, they could compare the decoder's output to the actual words and see how well the decoding worked.
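As a rough illustration of that comparison (not the metric the researchers actually report), one could align decoded and actual words by time window and measure how much they overlap:

```python
def window_overlap(actual_words, decoded_words, window=4):
    """Average, over time windows, of the fraction of actual words that also
    appear among the decoded words for the same window."""
    scores = []
    for start in range(0, len(actual_words), window):
        a = set(actual_words[start:start + window])
        d = set(decoded_words[start:start + window])
        if a:
            scores.append(len(a & d) / len(a))
    return sum(scores) / len(scores) if scores else 0.0

actual = "she said the dog ran home very slowly after dark".split()
decoded = "she whispered the dog went home quite slowly at night".split()
print(f"window overlap: {window_overlap(actual, decoded):.2f}")
```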
The system works, but it is hard to call it truly accurate, because it does not turn thoughts into text word for word.
It also cannot be used on just any person without first collecting all of this training data from them.
The decoder can't be used outside a laboratory setting because it relies on an fMRI scanner, but the researchers believe the work could eventually transfer to more portable brain-imaging systems such as functional near-infrared spectroscopy (fNIRS). fNIRS measures where there is more or less blood flow in the brain at different points in time, which turns out to be the same kind of signal that fMRI measures, although at lower resolution.
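As a toy illustration of that last point, assuming nothing about the actual devices: a coarser sensor like fNIRS can be thought of as recording the same blood-flow signal as fMRI, only averaged over larger chunks of space and time.

```python
import numpy as np

rng = np.random.default_rng(1)
fmri_like = rng.normal(size=(60, 32))  # 60 time points x 32 "voxels" (made-up sizes)

def downsample(signal, t_factor=3, s_factor=4):
    """Average over blocks of time points and channels to mimic a coarser sensor."""
    t, s = signal.shape
    trimmed = signal[: t - t % t_factor, : s - s % s_factor]
    return trimmed.reshape(t // t_factor, t_factor, s // s_factor, s_factor).mean(axis=(1, 3))

fnirs_like = downsample(fmri_like)
print(fmri_like.shape, "->", fnirs_like.shape)  # (60, 32) -> (20, 8)
```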