
By Andrew Myers
https://hai.stanford.edu/news/ai-reveals-how-brain-activity-unfolds-over...
Brain monitoring tools like functional MRI (fMRI) and EEG have long allowed neuroscientists to observe the brain at work — thinking, feeling, talking, doing. They can pinpoint where thoughts emerge in the brain. They can measure how strong the activity is. And they can watch as that activity evolves across the brain over time. What they haven’t been able to do is interpret what it all means.
Now, researchers at Stanford University say they have applied deep learning to decipher such complex brain activity — in two and, in some cases, three dimensions and over long time scales — to provide neuroscientific insights that were once beyond scientists’ reach. The approach could reshape fields from psychology to oncology.
Space and Time
The problem to date has been the data: brain signals are intertwined across spatial and temporal dimensions, and there is too much data — too complex to comprehend without a reliable analysis tool. Indeed, signals captured across multiple regions of the brain, changing all the while, are overwhelming and unmanageable, even for experienced scientists.
“It’s a four-dimensional problem in the case of fMRI,” says Lei Xing, professor of medical physics in the Department of Radiation Oncology and professor (by courtesy) of electrical engineering at Stanford University, who is the senior author of a study explaining the new model published in the journal Nature Computational Science. “The signal from one point in the brain at a specific moment in time correlates to another in a different place and time in a very complex manner that we have struggled to understand completely, leading to fragmented and confusing outputs.”
With the help of AI’s vast computational power, however, the new approach, known as Brain-dynamic Convolutional-Network-based Embedding, or BCNE for short, distills all this complex data into a simpler, interpretable form. BCNE represents brain activity as trajectories through the brain over time. The researchers feed the measured images or other types of data, such as EEG, through their model, filtering out meaningless noise while spotlighting valuable patterns in the data.
“BCNE uses this continuity of time and space to generate dynamic brain state trajectories. It’s like making movies of brain activity,” says Zixia Zhou, a postdoctoral researcher in Xing’s lab and first author of the study, which was partially sponsored by a seed grant from the Stanford Institute for Human-Centered Artificial Intelligence (HAI). “One can see not only the brain response but how it evolves and travels over time.”
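BCNE’s actual architecture is described in the Nature Computational Science paper; as a loose, hypothetical illustration of the underlying idea — compressing high-dimensional spatiotemporal brain data into a low-dimensional trajectory over time — here is a minimal sketch using synthetic data and PCA in place of the authors’ convolutional-network embedding:

```python
import numpy as np

# Synthetic stand-in for fMRI-like data: an 8x8x8 voxel grid over 100 time
# points (this is illustrative toy data, not the study's real inputs).
rng = np.random.default_rng(0)
T, X, Y, Z = 100, 8, 8, 8
t = np.linspace(0, 4 * np.pi, T)
# A slow oscillating spatial pattern plus noise stands in for brain signal.
pattern = rng.normal(size=(1, X * Y * Z))
data = (np.sin(t)[:, None] * pattern
        + 0.1 * rng.normal(size=(T, X * Y * Z))).reshape(T, X, Y, Z)

# Flatten space so each time point becomes one high-dimensional sample,
# then center each voxel's signal across time.
flat = data.reshape(T, -1)
flat = flat - flat.mean(axis=0)

# PCA via SVD: project every time point onto the top-2 spatial components,
# turning the 4-D recording into a 2-D path — a "trajectory" of brain state.
U, S, Vt = np.linalg.svd(flat, full_matrices=False)
trajectory = flat @ Vt[:2].T   # shape (T, 2)

print(trajectory.shape)  # (100, 2)
```

Plotting such a trajectory point-by-point is what makes the “movie of brain activity” metaphor concrete: each frame is one low-dimensional coordinate, and the path traces how the brain state moves over time.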
In fact, in one experiment the researchers recorded the brain activity of people watching movies to note how their brains transition from scene to scene and to evaluate changes in perception, emotion and comprehension as the narrative unfolds. In other experiments with lab monkeys and rats, BCNE captured detailed information about how physical movements are signaled from the brain to the muscles and yielded further insights into the animals’ brain activity.
Open Questions
Xing specializes in biomedical physics and radiation oncology, a field where he projects that BCNE has vast potential to study brain adaptation after treatments to remove brain tumors. In neuroscience, the researchers think BCNE could be used to study memory, learning, decision-making and other ideation processes. In clinics, they predict BCNE could help diagnose and monitor neurological conditions like Parkinson’s, depression and schizophrenia, or potentially evaluate the effectiveness of therapeutic and pharmaceutical treatments.
In its initial iteration, Xing notes, BCNE is a promising proof of concept of AI’s interpretive capabilities, but there is still much room to grow. Next up, Xing and team are intent on bringing BCNE to clinical applications and exploring real-time brain monitoring and prediction techniques. They would like to refine the method and apply it to more varied and complex datasets, especially those with irregular or limited sampling. They also hope to integrate additional modalities, such as MRI and CT scans, to provide ever more complete and insightful brain-state mappings.
“For now, our approach seems to open more questions than it answers,” Xing says. “But there is much opportunity ahead.”
Contributing Stanford authors include: Junyan Liu, Wei Emma Wu, Sheng Liu, Qingyue Wei, Rui Yan and Md Tauhidul Islam (co-corresponding author).
January 21, 2026