Technology · April 3, 2026 · 6 min read

The Human Brain Has 86 Billion Neurons. AI Is Catching Up Faster Than Anyone Expected.

A look at what neuroscience is teaching AI researchers — and what AI is teaching neuroscientists — as the two fields converge on the same questions from opposite directions


In 2012, a Google research team trained a neural network, without any labels, on ten million YouTube thumbnails and discovered that it had developed an internal representation of cats. No one had told it cats existed. No one had labeled cat images. The network, exposed to enough visual data, spontaneously organized its internal representations in a way that corresponded to a concept humans find salient. Researchers were surprised. Neuroscientists were fascinated. The incident suggested something uncomfortable and interesting: that certain abstractions might emerge naturally from learning systems of sufficient scale, regardless of whether those systems are biological or artificial.

That incident was more than a decade ago, and the relationship between neuroscience and AI research has grown significantly more intertwined since. The two fields are converging on the same fundamental questions — how does cognition arise from physical substrate, what are the computational principles that underlie intelligent behavior, how do representations form and generalize — from opposite directions. Neuroscientists study the brain to understand these questions. AI researchers build systems to instantiate possible answers. Each is finding that the other has things to teach them.

What Neuroscience Gave AI

The intellectual lineage from neuroscience to modern AI is direct. The perceptron — the foundational unit of artificial neural networks — was explicitly modeled on the neuron. The architecture of convolutional neural networks that dominated computer vision in the 2010s was directly inspired by the hierarchical organization of the mammalian visual cortex, where early layers respond to simple features like edges and later layers respond to complex objects. Backpropagation — the training algorithm that underlies virtually all modern deep learning — has no direct biological counterpart, but the broader concept of learning by adjusting connection strengths based on error signals echoes theoretical frameworks for synaptic plasticity.
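To make the lineage concrete, here is a minimal perceptron in NumPy: a weighted sum of inputs passed through a threshold, trained with the classic perceptron learning rule on the linearly separable AND function. This is a toy illustration of the neuron-inspired unit the paragraph describes, not any particular historical implementation.

```python
import numpy as np

def step(x):
    # Threshold activation: "fire" (1.0) when input crosses zero.
    return (x >= 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)  # logical AND

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # connection strengths, adjusted by error
b = 0.0
lr = 0.1

for _ in range(20):  # perceptron learning rule: nudge weights toward the error
    for xi, yi in zip(X, y):
        pred = step(xi @ w + b)
        w += lr * (yi - pred) * xi
        b += lr * (yi - pred)

print(step(X @ w + b))  # learned AND: [0. 0. 0. 1.]
```

The learning rule, adjusting each connection in proportion to the error it contributed to, is the "echo" of synaptic plasticity the paragraph refers to; backpropagation generalizes this error-driven adjustment to deep stacks of such units.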

More recently, the transformer architecture that underlies GPT, Claude, and every major language model draws on attention mechanisms that have rough analogues in how the brain allocates cognitive resources. The brain does not process all sensory input equally — it selectively attends to what is relevant given current goals and context. Artificial attention mechanisms implement a mathematically precise version of this selective processing, and the fact that it works so well across so many domains suggests the biological original is doing something computationally important.
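The "mathematically precise version of selective processing" is scaled dot-product attention. A minimal sketch: each query scores every key, a softmax turns the scores into a weighting over the inputs, and the output is the weighted sum of values. The shapes and random inputs below are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how relevant each key is to each query
    weights = softmax(scores, axis=-1)   # each row is a distribution over inputs
    return weights @ V, weights          # output: relevance-weighted mixture

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 queries
K = rng.normal(size=(5, 4))  # 5 keys
V = rng.normal(size=(5, 4))  # 5 values
out, attn = attention(Q, K, V)
print(out.shape)  # (3, 4): one output per query
```

Because the softmax rows sum to one, each output is literally a budget of limited "attention" allocated across the inputs, the same resource-allocation problem the brain solves with selective attention.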

What AI Is Teaching Neuroscience

The influence runs in both directions, and increasingly the more surprising direction is from AI to neuroscience. Trained neural networks have become tools for generating and testing hypotheses about how the brain works.

One striking example comes from visual neuroscience. Researchers discovered that the internal representations developed by deep learning models trained on image recognition tasks are remarkably similar to the representations found in the primate visual cortex — similar enough that the activations of specific layers of an artificial network predict the neural responses of specific regions of the brain better than any model neuroscientists had previously developed. This was not by design. It emerged from training on a large dataset to perform a recognition task — exactly the kind of learning the biological visual system underwent over evolutionary time.

This convergence suggests something important: the visual representations that deep learning systems develop may not be arbitrary computational solutions but something closer to optimal solutions to the problem of visual recognition under natural image statistics — solutions that biological evolution and artificial optimization independently converge on because they are in some sense the right answer to the problem.

The Mysteries That Remain

Despite the genuine convergence, the gap between artificial neural networks and biological brains remains enormous along several dimensions that matter.

Biological brains learn from extraordinarily few examples. A child sees a dog a handful of times and reliably generalizes the concept to every dog they subsequently encounter, regardless of breed, pose, lighting, or context. AI systems require exposure to millions of labeled examples to achieve comparable generalization, and even then they fail when the test distribution shifts from the training data in ways a human child would handle effortlessly. This sample efficiency gap is one of the deepest unsolved problems in AI research.

Biological brains are also extraordinarily energy efficient. The human brain runs on approximately twenty watts — roughly the power of a dim light bulb — while performing cognitive tasks that require megawatts of server power to approximate in current AI systems. Understanding how biological neural circuits achieve this efficiency is an active area of research with significant implications both for neuroscience and for making AI more practically deployable.

Consciousness — the subjective experience of being a thinking thing — remains the deepest mystery. Whether current AI systems have anything resembling subjective experience is a question that most researchers consider unanswerable with current tools, and many consider it unanswerable in principle given the hard problem of consciousness. The functional capabilities of large language models — their ability to reason, to use language contextually, to exhibit something that resembles creativity — are separable from the question of whether there is anything it is like to be those systems. Most researchers believe there is not. But the honest answer is that nobody knows how to determine this definitively.

The Tools Each Field Is Building for the Other

The practical collaboration between neuroscience and AI has produced tools that neither field could have developed alone. AI-powered analysis of brain imaging data — using deep learning to decode cognitive states from fMRI scans, to identify neural correlates of specific thoughts or perceptions, to reconstruct images that subjects were viewing from their brain activity — has opened research directions that were technically impossible a decade ago.

In the other direction, computational neuroscience is increasingly using AI systems as models to generate precise predictions about neural behavior — predictions that can be tested experimentally. Rather than verbal descriptions of how a brain region might work, researchers can instantiate a specific computational hypothesis in a neural network, train it, and compare its internal representations and behavioral outputs to biological data with quantitative precision.

The field of neuromorphic computing — building AI hardware that mimics the physical architecture of biological neural circuits, including spike-based communication and local learning rules — is one of the most promising directions for closing the energy efficiency gap between artificial and biological intelligence. Intel's Loihi, IBM's TrueNorth, and a growing cohort of startups are building hardware that operates on principles closer to how biological neurons actually work, with early results suggesting significant efficiency improvements for certain classes of tasks.
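The basic unit most neuromorphic chips implement is the leaky integrate-and-fire (LIF) neuron: membrane voltage leaks toward rest, integrates input current, and emits a discrete spike when it crosses a threshold. A toy simulation (parameters chosen for illustration, not matched to any particular chip) shows why this is energy-friendly: the neuron communicates only at spike events, not with dense matrix multiplies on every tick.

```python
import numpy as np

def simulate_lif(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Simulate one leaky integrate-and-fire neuron over a current trace."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        v += dt / tau * (-(v - v_rest) + i_in)  # leak toward rest + integrate input
        if v >= v_thresh:
            spikes.append(t)  # discrete spike event -- the only "communication"
            v = v_rest        # reset after firing
    return spikes

# Constant drive above threshold produces a regular spike train;
# no drive produces silence (and, on real hardware, almost no power draw).
spike_times = simulate_lif(np.full(200, 1.5))
print(f"{len(spike_times)} spikes in 200 steps")
```

Sparse, event-driven activity like this is the core of the efficiency argument: silent neurons cost nearly nothing, whereas a conventional accelerator pays for every multiply whether or not the value matters.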

The two fields are asking the same questions from different directions. The more they converge, the more likely it is that answers will emerge that neither could have reached alone. What intelligence actually is, computationally — and whether the substrate it runs on matters as much as we intuitively feel it should — may turn out to be questions that AI and neuroscience answer together, with implications that neither field has fully worked out yet.


stayupdatedwith.ai Team

AI education researchers and engineers building the future of personalized learning.
