Curiosity Built AI. Speed Took Over.

Over two thousand years ago, Plato imagined people chained inside a cave, watching shadows on a wall and believing them to be real. They had never seen the light that cast the shadows or the shapes that made them. The structure of the cave determined what they could perceive, and they accepted it as the whole truth because it was all the system allowed them to see.

That image has not stopped being relevant.

What Plato understood, and what every serious thinker about consciousness has understood since, is that the structure we build determines what we are able to see. Change the structure and the shadows change. Leave the structure unexamined and the shadows become truth. This insight launched a philosophical tradition that would stretch across centuries, and it is the tradition that artificial intelligence was actually born from.

These Were Philosophies

In the nineteenth century, Franz Brentano gave us a word for what makes consciousness different from everything else: intentionality. Not intention in the everyday sense, but the idea that consciousness is always about something, always directed, always reaching toward meaning. A mind does not just process. It reaches. That directedness, that aboutness, is what makes a mind a mind. Brentano understood that any serious attempt to study consciousness had to begin with what consciousness does, which is orient itself toward the world with purpose.

Around the same time, Ada Lovelace was imagining machines that could compose music, manipulate symbols, and generate things no one had instructed them to generate. She also drew a line that still holds: a machine "has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform." Machines inherit our logic, our assumptions, our definitions of what counts. Give them curiosity and they explore. Give them categories and they sort. Build the structure as a cave and the machine will show us shadows and call them answers. Lovelace was making a statement about us as much as she was making one about machines.

Decades later, Lev Vygotsky deepened the question by showing that mind does not exist in isolation. In Mind in Society, he demonstrated that cognition is shaped by culture, language, social interaction, and mediation. You cannot understand a mind without understanding the world that mind is embedded in. The structure shapes the thought. The cave shapes what you see. Vygotsky made it clear that any model of intelligence that ignores context is not modeling intelligence at all.

Then in 1950, Alan Turing published "Computing Machinery and Intelligence." Most people remember it as the paper that launched artificial intelligence, but read it again and what you find is not a blueprint for technology. It is a philosophy paper. Turing was not trying to build a machine that thinks. He was asking what thinking is, whether consciousness has one form or many, and whether intelligence can only look the way we have always assumed it looks or whether it might emerge in ways we have not yet imagined. That is not engineering. That is curiosity, and it was curiosity informed by everything that came before him.

John Haugeland carried that tradition forward. His Mind Design, first published in 1981, became a classic in cognitive science and AI precisely because it forced the field to sit with the hard question: what is mind, and can we design it? He was not shipping products. He was curating the philosophical arguments that should have governed everything built after them, and he edited a second edition because the question was still open. It is still open now.

Connectionism emerged from this same tradition as a philosophical position about the nature of mind, proposing that cognition emerges from patterns of distributed connection rather than following a fixed set of rules. Intelligence, in this view, is not an instruction set. It is emergent, variable, and contextual. Neural networks, the architecture behind most of today's AI, were modeled directly on this philosophy.
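The connectionist claim can be made concrete with a toy example. The sketch below (illustrative only, not drawn from any historical source) trains a single perceptron on logical OR: its "knowledge" ends up entirely in the connection weights, and no line of the program states the rule it learns.

```python
import random

# A toy perceptron: after training, its grasp of logical OR lives
# entirely in the connection weights, not in any rule we wrote down.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

def predict(x):
    # Behavior emerges from the weighted sum of connections.
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

# Train on the OR truth table with the classic perceptron update rule.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print([predict(x) for x, _ in data])  # learned behavior: [0, 1, 1, 1]
```

Nothing in the code says "output 1 if either input is 1." The behavior is a pattern distributed across the weights, which is the connectionist point in miniature.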

Daniel Dennett then took Brentano's concept of intentionality and applied it to machines. He accepted that intentionality is real, and argued that in artificial systems it is derivative, borrowed from us, from the designers, from the logic we encode. A machine's intentionality is not its own. It is ours, reflected back. What the system does reveals what we built into it.

This is the part that matters most. The question is not whether AI is conscious. The question is what intentions it inherited from us, and what it does with them tells us the answer. That answer depends entirely on what we prioritize when we build, which is where this philosophical tradition met a very different kind of philosophy.

Speed Became the Philosophy

John Doerr wrote the playbook. Speed & Scale. Move fast, measure what matters, scale what works. It is the gospel of Silicon Valley and it is effective for what it was designed to do. It was not designed for minds.


When speed and scale become the dominant intention encoded into AI, that is exactly what the machine's derivative behavior reflects: not curiosity, not a philosophical commitment to understanding what a mind is, but velocity and volume. Ship it. Scale it. The philosophy became the thing that slowed you down, and so it was left behind.

The architecture itself tells this story. Neural networks were modeled on connectionism, on the philosophy that cognition is distributed, emergent, and pattern-based, a tradition that honored variability. But the culture building on that architecture moved faster than the philosophy could follow. It shipped the structure and left the questions at the door.

Every thinker in this lineage, from Plato to Brentano to Vygotsky to Turing to Lovelace to Haugeland to Dennett, understood that the questions were the work and that the work required patience. There is another way to build, and it has been here the whole time.

Social Impact Is the Return

There is an assumption in the technology world that social impact work is the softer path, the slower one, the one concerned with feelings instead of scale.

It is closer to the origin.

Social impact has to create its own path because the existing one was not built for the people it serves. And that path, the one forged out of necessity, runs closer to where AI originated than the one Silicon Valley is on. The people building AI for families navigating systems, for communities that have been categorized and sorted and overlooked, are the ones still asking the original question. They are closer to where this field actually came from than the people shipping products and calling it innovation.

My own research sits here. I use computational methods to analyze the language inside Individualized Education Programs, the documents designed around how a specific child's mind works. What I have found is that the language often reflects the system's assumptions about the child more than it reflects the child, revealing patterns in who gets described with deficit language, who gets framed with possibility, and whose cognitive difference gets treated as something to manage rather than a form of intelligence to understand.
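The shape of this kind of analysis can be sketched in a few lines. The word lists and the sample sentence below are hypothetical placeholders I invented for illustration; they are not the actual lexicons, documents, or methods used in the research described above.

```python
# A deliberately simplified sketch of framing analysis on document text.
# These word lists are hypothetical examples, not a validated lexicon.
DEFICIT_TERMS = {"unable", "struggles", "fails", "deficient", "noncompliant"}
POSSIBILITY_TERMS = {"can", "emerging", "strengths", "explores", "curious"}

def frame_counts(text):
    """Count deficit-framed vs. possibility-framed words in a passage."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    deficit = sum(w in DEFICIT_TERMS for w in words)
    possibility = sum(w in POSSIBILITY_TERMS for w in words)
    return {"deficit": deficit, "possibility": possibility}

sample = ("Student struggles with transitions but explores new material "
          "and shows emerging strengths.")
print(frame_counts(sample))  # {'deficit': 1, 'possibility': 3}
```

Real work in this area would use far richer linguistic features than keyword counts, but even this toy version makes the underlying question visible: whose file accumulates which kind of language.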

Dennett was right that AI's intentionality is derivative, that it reflects whatever we built into it. Build with curiosity, with philosophy, with an understanding of what consciousness actually is, and the systems behave differently. They ask different questions and surface different possibilities. A system built on curiosity asks: what does this child's mind reveal about how thinking works? A system built on speed asks: how do we process this child's paperwork faster?

The difference between those two questions is the difference between philosophy and product, and it is the reason this tradition matters now more than ever.

The Question

Every thinker who laid the foundation for this field stayed curious. None of them rushed to categorize. They understood that sitting with the questions long enough to build something worthy of them was not a delay. It was the point.

Social impact work is not a detour from the origins of AI. It is the road back.

We built artificial intelligence.

We are living through artificial influence.

The question now is when we will build toward intentional intelligence.
