The AI Divide Isn't What You Think
The global conversation about AI has increasingly been framed as geopolitical competition, with the United States and China racing to control the frontier while other nations choose sides. That divide is real and worth paying attention to.
But while I keep an eye on that race, my attention rests on a divide closer to home: the one between AI that optimizes for efficiency and AI that accounts for the full range of how minds actually work. That distinction determines who gets served, who gets missed, and whose needs are silently omitted at scale.
AI, the Great Black Box. Or Is It?
AI has been treated like a great black box, one mysterious thing that either saves us or replaces us. But AI is not one thing. It is a collection of distinct systems, including machine learning, natural language processing, generative AI, computer vision, and neural networks, each with different capabilities, different risks, and different implications for who benefits and who does not. Once you understand that, the conversation changes.
A neural network, the architecture behind most of today's AI, was modeled loosely on the human brain: layers, connections, pattern recognition. It borrows the vocabulary of computational neuroscience. But where AI processes data through one-way, mathematical, algorithmic layers, real brains use bidirectional, energy-efficient, highly variable, nonlinear, and specialized circuits. The model is inspired by us, but it is not us. It is a simplification. And when a simplification is deployed at scale without accounting for human variability, what follows is not innovation. It is silent omission.
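To make that concrete, here is a minimal sketch in Python, with toy numbers I am inventing purely for illustration, of what a neural network "layer" actually is: fixed arithmetic applied in one direction, nothing more.

```python
# A minimal, illustrative sketch: one neural network "layer" is a weighted
# sum of its inputs pushed through a fixed squashing function. The numbers
# here are random toys, not a real model.
import numpy as np

def layer(inputs, weights, bias):
    # One-way and purely mathematical: multiply, add, squash.
    return np.tanh(inputs @ weights + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                      # three input features
h = layer(x, rng.normal(size=(3, 4)), rng.normal(size=4))   # hidden layer
y = layer(h, rng.normal(size=(4, 1)), rng.normal(size=1))   # output layer
print(y)  # the network's "answer": a number produced by static arithmetic
```

Nothing in that sketch rewires itself, prunes a connection, or routes around fatigue. That is the gap.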
What That Omission Actually Looks Like
A neural network learns by adjusting numerical weights across static layers. A human brain learns by rewiring itself, forming new connections, pruning old ones, routing information through parallel pathways that shift depending on context, emotion, fatigue, motivation, and lived experience. No two brains do this the same way. That is variability, and it is not a flaw in the system. It is the system.
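A sketch helps here, too. This is roughly what "adjusting numerical weights" means, reduced to a single weight and invented data:

```python
# Illustrative only: "learning" in a neural network is numerical adjustment.
# A weight is nudged toward whatever reduces error on the training data;
# the structure itself never changes.
learning_rate = 0.1
weight = 0.0

# Hypothetical training pairs: input -> expected output (the pattern is y = 2x)
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for epoch in range(50):
    for x, target in data:
        error = weight * x - target
        weight -= learning_rate * error * x   # gradient step on squared error

print(round(weight, 3))  # converges toward 2.0, the one pattern the data holds
```

The system ends up very good at the pattern it was fed, and has no concept of a mind that was never in the data.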
Most AI was built on a narrow slice of how cognition works, trained on data that reflects institutional norms rather than human range. When deployed at scale without accounting for that variability, it does not just miss edge cases. It encodes a default, treating the statistical average as the design target and everything else as deviation. That default misses the learner whose processing speed does not match the algorithm's expected pace, the family whose language patterns do not align with the training data, and the mind that works differently, not deficiently, yet never appears in the model at all.
This is not a theoretical concern. This is happening now, in classrooms, in platforms, in systems that touch millions of students. And most of the people building them do not have the language for what they are missing, because they were never trained to see variability as a design constraint rather than noise to be smoothed out.
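One toy calculation, with numbers I am making up purely to show the shape of the problem, illustrates what treating the statistical average as the design target does:

```python
# Invented numbers, illustrative only: fit one pacing target to everyone
# and the minority pattern gets absorbed as "deviation."
import numpy as np

rng = np.random.default_rng(42)
majority = rng.normal(10, 2, size=850)   # 85% of learners respond in ~10 seconds
minority = rng.normal(40, 5, size=150)   # 15% need ~40 seconds
everyone = np.concatenate([majority, minority])

pace = everyone.mean()                    # the single "personalized" design target
print(f"design target: {pace:.1f} seconds")
# ~14.5 seconds: slower than most learners need, and far faster than the
# 15 percent who need ~40. One number, nobody's actual mind.
```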
What Most People Don't Realize They Already Use
Consider this. Parents hand their children iPads loaded with adaptive learning apps every day: DreamBox, IXL, Khan Academy. All of them use machine learning to adjust content based on how a child responds. That is AI, and it has been in classrooms for years.
But say the words "AI in education" and the response shifts to caution, sometimes fear. The disconnect is not about the technology itself. It is about familiarity. One interface looks like a colorful game while the other sounds like a machine making decisions about their child. Same underlying systems, very different reactions.
This is not a failure of judgment. It is a gap in understanding. Recent survey data show that over half of the general population has never used generative AI, and a significant share of non-users cite lack of familiarity as the primary reason for staying away from it.
The reality is that AI extends well beyond the classroom. It selected what appeared on your phone this morning, routed your GPS around traffic, and manages the power grid keeping your lights on. It shapes your shopping recommendations, your news feed, and your voting district maps. None of us opted in. We are already in it.
So the question is not whether AI belongs in education. It is already there. The real opportunity is in shaping who designs it, for whom, and with what understanding of how minds actually work.
As a parent of a neurodivergent child, I have always been fascinated by how different minds are. Not different in the way that suggests something is missing, but different in the way that reveals how much more range exists than any single system was designed to hold. That fascination led me to research, the research led me to building, and the building showed me where the gaps are.
Where the Real Opportunity Lives
AI represents the single biggest equity opportunity neurodivergent learners have ever had. Technology that can meet a mind where it is. Personalized pacing. Translated language. Adaptive goals. For ND learners, this is not a convenience. It is access.
The investment landscape reflects that. Billions are flowing into education technology, Meta is developing AI-powered smart glasses that overlay information in real time, and adaptive platforms are being funded at record pace. The money is there and the interest is there. The question is whether the design thinking will match the scale of the investment, or whether we will fund the same narrow assumptions faster.
But that opportunity is being narrowed by the gap between the developers who build platforms and the people who use them. Too often, someone is telling families what they need without understanding how their minds engage, what motivates them, or what barriers the system itself has created. Edtech companies are racing to ship AI features without understanding cognitive variability, building for the average mind and calling it "personalized."
Here is the structural reality that most people in this space are unwilling to state plainly: AI trained on historical educational documentation will replicate structural inequities unless it is intentionally counter-designed. The data these systems learn from is saturated with decades of deficit-based language, exclusionary framing, and institutional bias. My own research has shown that the overwhelming majority of IEP documents are written at a reading level most families cannot access, and that those linguistic patterns vary systematically across demographic lines. When you train AI on that history without interrogating it, you do not get innovation. You get automation of the same inequities, deployed faster and at a scale no human system could match.
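To show what a claim like that rests on, here is the sort of check anyone can run on a document, sketched in Python. This is not my research pipeline, just a rough Flesch-Kincaid grade-level estimate, with a crude syllable heuristic and a hypothetical sentence standing in for real IEP text:

```python
# A rough sketch of a reading-level check (Flesch-Kincaid grade estimate).
# The syllable counter is a crude heuristic and the sample sentence is
# hypothetical; neither comes from real IEP data.
import re

def count_syllables(word):
    # Approximate: count runs of vowels. Good enough for a gut check.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("The multidisciplinary evaluation indicates the student demonstrates "
          "deficits in phonological processing requiring specialized instruction.")
print(f"Estimated grade level: {fk_grade(sample):.1f}")
```

Run that kind of check over decades of institutional writing and you have a picture of the data these systems inherit.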
There is a difference between AI for efficiency and AI for equity. Efficiency asks how we process more students faster. Equity asks how a mind actually works and what it needs. Most edtech is doing the first and marketing it as the second.
The Cost Nobody's Counting
There is another dimension to this divide that receives almost no attention in education circles: the environmental one.
When AI becomes infrastructure, embedded in every classroom platform, every adaptive tool, and every IEP system, its environmental cost becomes invisible. But it is real. The same systems promising personalized learning require energy-intensive compute clusters and water-cooled data centers, with a carbon footprint that grows with every query, every model update, and every scaled deployment.
Equity without sustainability is not equity. It is deferred cost, passed to the same communities least positioned to absorb it. If we are going to build AI systems and call them equitable, we have to account for what they consume, not just what they produce.
It's Not Good or Evil. It's Designed.
AI is not a necessary evil. It is complicated and misunderstood, and the people most affected by it, the families, learners, and communities at the center of these systems, deserve more than a binary choice between embracing it and banning it. They deserve to understand what it actually is, what it can do, what it cannot, and who is building it with their minds in mind.
I built one. Expert IEP started as a personal fight and became the nation's first AI-powered IEP support platform, co-designed through 77 iterations with families and informed by computational research on how institutional language creates barriers. It works not because the AI is smarter, but because the design started with the mind, not the system.
And that is the real divide. Not U.S. versus China. Not pro-AI versus anti-AI. The divide is between systems designed around institutions and systems designed around cognition, between AI that optimizes for efficiency and AI that is aligned to the full variability of the human mind.
The term for what I am arguing for is not inclusion, not accessibility, and not even equity, though it requires all three.
It is cognitive dignity.
Variability is not noise in a system. It is the signal. Every mind has its own architecture, and that architecture is not a deficiency to be corrected but a reality to be designed for. Any AI that treats difference as deviation will always reproduce inequity at scale. The real AI divide is not about who controls the most compute. It is about whether we design for the full range of the human mind, or continue optimizing for an average that never existed.
Antoinette Banks is a computational social scientist studying the variability of the human mind. She is the Founder and CEO of Expert IEP and a PhD candidate at UC Davis. Her research uses AI and computational methods to examine how institutional language shapes engagement across the full range of cognition.
Selected Reading
Winter-Levy & Leicht, "The AI Divide," Foreign Affairs (February 2025)
Stanford AI Index Report 2024
MIT Technology Review, "The Environmental Cost of AI"
Salesforce, Generative AI Snapshot Research (2024)
Banks, A. (forthcoming). Computational Analysis of IEP Language and Family Voice. University of California, Davis.