Japanese scientist develops AI system that turns brain activity into sentences

By Ryan General
A scientist in Japan has created a technique that uses fMRI brain scans and artificial intelligence to generate sentences describing what a person is seeing or recalling.
The system links patterns of visual brain activity with numerical language representations produced by a large language model. In testing, it produced sentences that closely matched the content of short video clips shown to participants.
The method, developed by computational neuroscientist Tomoyasu Horikawa at NTT Communication Science Laboratories in Kanagawa, was based on more than 2,000 silent video clips paired with their written captions. Researchers converted each caption into a numerical meaning signature, then recorded fMRI activity from six participants as they viewed and later remembered the clips. A decoder was trained to match the brain activity to these meaning signatures, so that for a new scan the system could generate the sentence whose signature best aligned with the decoded activity.
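As described, this amounts to a decode-then-match pipeline: learn a mapping from fMRI patterns to caption embeddings, then pick the sentence whose embedding best aligns with the decoded signature. The sketch below illustrates that general idea only, using synthetic stand-in data, ridge regression and cosine similarity; these are illustrative assumptions, not the specific models, embeddings or sentence-generation procedure used in the study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Synthetic stand-ins: voxel patterns for ~2,000 clips and the matching
# caption embeddings ("meaning signatures") a sentence encoder might produce.
n_clips, n_voxels, emb_dim = 2000, 500, 128
signatures = rng.standard_normal((n_clips, emb_dim))
scans = signatures @ rng.standard_normal((emb_dim, n_voxels))
scans += 0.5 * rng.standard_normal(scans.shape)   # add measurement noise

# Hold out the final clip and train a linear decoder that maps
# brain activity to meaning signatures on the rest.
decoder = Ridge(alpha=10.0).fit(scans[:-1], signatures[:-1])

# Decode the held-out scan, then retrieve the caption whose signature
# best aligns with the predicted one (here, from a pool of candidates).
predicted = decoder.predict(scans[-1:])
similarity = cosine_similarity(predicted, signatures)[0]
print("best match is the held-out clip:", similarity.argmax() == n_clips - 1)
```

In this toy setup the decoder recovers the held-out clip by retrieval; the actual system goes further and generates full descriptive sentences rather than selecting from a fixed pool.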
During trials the model reconstructed detailed visual information from brain activity, including actions, settings and objects. In one example a participant watched a video of a person jumping over a deep waterfall on a mountain ridge, and the system’s outputs evolved from simple fragments to a full descriptive sentence. The researchers noted that the approach reflects direct mappings between visual cortex activity and semantic features of the videos, demonstrating the level of detail that can be extracted from noninvasive brain imaging.