AI Reveals Hidden Secrets Behind Our Mind’s Silent Focus — and Finds New Kinds of Neurons
Ever wondered how you can focus on something in your surroundings without actually moving your eyes? Whether it’s scanning traffic while you drive or gauging someone’s reaction across a room, this subtle mental shift — known as covert attention — happens constantly. But here’s the catch: scientists have long struggled to explain how it works in the brain. Now, researchers at UC Santa Barbara have made a groundbreaking discovery. Using advanced neural networks, they’ve not only uncovered the underlying mechanisms of covert attention but also stumbled upon previously unknown types of neurons, which were later confirmed in real brain recordings from mice.
“This is one of those rare moments where artificial intelligence is directly advancing our understanding of neuroscience, psychology, and cognition,” said Sudhanshu Srivastava, lead author and postdoctoral researcher at UC San Diego.
Their findings, recently published in the Proceedings of the National Academy of Sciences, bridge artificial and biological intelligence in unexpected ways.
Understanding the Hidden Spotlight of the Mind
Think of attention as a mental spotlight that helps the brain sharpen what it sees. It’s not just poetic — in neuroscience, attention truly acts like turning the focus or volume up on certain parts of the visual field. When scientists study this in the lab, they often show subjects quick flashes or arrows — cues that help them detect a target faster and more accurately. Essentially, the brain’s attention system redirects its resources toward the cued area, fine-tuning perception there.
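The cueing paradigm described here is often called the Posner task, and its core logic fits in a few lines. The sketch below is a minimal illustration, not the study’s actual protocol: the 80% cue validity and the two-location layout are typical textbook values, chosen here as assumptions.

```python
import random

def posner_trial(validity=0.8, locations=("left", "right")):
    """Generate one trial of a Posner-style cueing task.

    A cue flashes at one location; with probability `validity` the
    target then appears at the cued location (a "valid" trial),
    otherwise at the other location (an "invalid" trial).
    """
    cue = random.choice(locations)
    if random.random() < validity:
        target = cue  # valid trial: the cue predicted the target
    else:
        target = next(loc for loc in locations if loc != cue)
    return {"cue": cue, "target": target, "valid": cue == target}

# Because valid trials dominate, an observer gains speed and accuracy
# by covertly shifting attention toward the cued location.
trials = [posner_trial() for _ in range(10_000)]
valid_rate = sum(t["valid"] for t in trials) / len(trials)
```

Because the cue is predictive most of the time, subjects (or networks) that reallocate processing toward the cued location detect targets faster and more accurately — exactly the behavioral signature the researchers looked for.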
But here’s where it gets controversial: most of us assume this behavior depends exclusively on brain regions linked to consciousness — like those in primates’ parietal lobes. Yet, in recent years, covert attention has also been spotted in animals far less complex than humans: archerfish, mice, even bees. If that’s true, could covert attention arise naturally — without needing specialized brain structures — just from the way networks of neurons organize themselves?
AI as a Window into the Brain
Mapping how our brains manage attention is no small task. The human brain has billions of neurons, all interacting dynamically. Current imaging tools can’t capture activity at an individual neuron’s level across such massive scales. Enter artificial intelligence. By training convolutional neural networks (CNNs) — simplified digital analogs of the brain — researchers can watch how “artificial neurons” self-organize to solve visual tasks like target detection.
In an earlier 2024 study, Srivastava, Miguel Eckstein, and William Wang showed that CNNs with between 200,000 and one million artificial neurons spontaneously displayed the hallmarks of covert attention — even though the models lacked any built-in system for focusing attention. That alone was startling: it suggested that attention might emerge naturally in both brains and machines simply as they learn to optimize performance.
But the question remained — what inside the CNN made this possible? The team’s new research drills into that mystery, examining the internal structure of the models in search of the neuronal blueprints behind this emergent behavior.
Inside the Digital Mind: Discovering New Neuron Types
“We decided to stop treating AI like a black box,” explained Professor Eckstein. “In neuroscience, you might record from thousands of neurons — never a million. But in AI, you can analyze every single artificial unit. That gives us a whole new level of insight.”
To test this, the scientists analyzed over 1.8 million artificial neurons (from 10 CNNs) using a visual cueing experiment known as the Posner task. The results astonished them: although these networks contained no explicit attention mechanisms, certain units behaved strikingly like neurons observed in real primates and mice. Even more surprising was the emergence of new neuron types never before described in biology.
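A simple way to picture this kind of unit-by-unit analysis is to compare each artificial neuron’s mean activation with and without the cue. The function below is a rough sketch under stated assumptions — the name, the threshold, and the three-way labeling are illustrative, not the paper’s actual classification procedure.

```python
import numpy as np

def classify_cue_response(resp_cue, resp_no_cue, threshold=0.2):
    """Label each unit by how an attention cue changes its activity.

    resp_cue, resp_no_cue: (n_units, n_trials) arrays of activations
    recorded with and without the cue. A unit whose mean activity rises
    by more than `threshold` is labeled 'cue-excitatory'; one whose
    activity falls by more than `threshold` is 'cue-inhibitory'; the
    rest are 'cue-neutral'. (Threshold is an illustrative choice.)
    """
    delta = resp_cue.mean(axis=1) - resp_no_cue.mean(axis=1)
    labels = np.full(delta.shape, "cue-neutral", dtype=object)
    labels[delta > threshold] = "cue-excitatory"
    labels[delta < -threshold] = "cue-inhibitory"
    return labels
```

Run over every unit in a network, a pass like this sorts millions of neurons into response types — the scale advantage Eckstein describes, since no biological recording can cover every neuron.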
Among these were ‘cue-inhibitory’ neurons, which reduce their activity when an attention cue appears — the opposite of what most studies focus on. And then there was the truly unexpected ‘location-opponent’ neuron: a type that amplifies activity at one location while suppressing it elsewhere, creating a kind of push-pull effect that strengthens signals exactly where the brain expects the target to be.
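The push-pull effect of a location-opponent unit can be illustrated with a toy computation. The gain and suppression factors below are made-up values for the sketch, not measurements from the study.

```python
import numpy as np

def location_opponent(activity, cued, gain=1.5, suppression=0.5):
    """Toy push-pull modulation by a location-opponent unit.

    Amplifies the response at the cued location while suppressing all
    other locations, sharpening the contrast between where the target
    is expected and everywhere else.

    activity: 1-D array of responses, one entry per spatial location.
    cued: index of the cued location.
    """
    out = activity * suppression        # quiet every location...
    out[cued] = activity[cued] * gain   # ...except the cued one
    return out

# Uniform input becomes peaked at the cued location: the signal there
# is boosted while competing locations are damped.
modulated = location_opponent(np.array([1.0, 1.0, 1.0]), cued=1)
```

The net effect is the "turning up one area while quieting another" that Eckstein describes: the cued location ends up three times stronger than its neighbors even though all three started equal.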
“This dual mechanism — turning up one area while quieting another — is something we hadn’t seen in attention studies before,” Eckstein said. “It resembles the visual system’s color-opponent cells, which are excited by red light but inhibited by green.”
From Machines to Mice — and Back Again
Could these AI-invented neuron types really exist in living brains? To find out, the researchers mined neural recordings from mice performing similar visual cueing tests. Amazingly, they found direct biological counterparts for several artificial neuron types — including the cue-inhibitory and location-opponent varieties — within the mouse superior colliculus, a key region for visual attention.
One type, however, seemed purely artificial: neurons that combined opponency for cues with summation for targets across multiple locations. Their absence in mice suggested that while AI may mirror the brain in many ways, it can also produce patterns unconstrained by biology. And that’s where things get truly thought-provoking — could machine intelligence help us imagine new forms of cognition, beyond those found in nature?
The Beginning of a New Research Era
While there’s still much to learn about how these findings extend to humans, this research marks a turning point. It shows that both attentional behaviors and neural mechanisms can emerge spontaneously, without any explicit programming. Covert attention, once thought exclusive to conscious primates, may in fact be a universal byproduct of efficient information processing — whether in organic neurons or silicon chips.
“It completely changed how we think about attention,” said Srivastava. “We’re now realizing that these mechanisms might not be designed — they might simply emerge.”
Eckstein and Wang now co-lead the Mind & Machine Intelligence Initiative at UCSB, a program funded by Duncan and Suzanne Mellichamp to explore the deep connections between human and artificial intelligence. As AI continues to illuminate cognitive mysteries, one question looms large:
If machines can independently discover how attention works — what else about the human mind might they soon decode?
Do you think this discovery blurs the line between natural and artificial intelligence — or are we still miles away from understanding true awareness? Let’s discuss in the comments.