My Data Manifesto

Image generated by Nano Banana

I did not come to artificial intelligence through a lab. I came through a waiting room.

When my daughter was diagnosed with severe autism, I was told what she would likely never become. That statement was framed as prognosis. I heard it as prediction, a forecast shaped by institutional assumptions, professional authority, and constrained imagination.

What I did not yet have language for in that waiting room was statistical conditioning. The prognosis presented as certainty was in fact a probability distribution derived from historical outcomes. It was not destiny. It was data. And data, I would later learn, always carries the assumptions of its collection.

Years later, as I began studying and interrogating predictive systems, I recognized the same architecture: models trained on historical distributions, optimized for statistical averages, deployed without examining who those averages exclude.

I write this as a mother, a computational social scientist who examines how predictive systems redistribute power, and a scholar of learning and mind sciences who studies cognitive variability as the norm, not the exception. These identities are not separate. They shape how I design, question, and deploy technology.

  1. AI Is Not Neutral. It Is Conditional. All large-scale models are trained on historical data. History encodes inequality, bias, and normativity. Outputs are probabilistic inferences conditioned on that past. To treat them as neutral is to misunderstand both statistics and society. Scholars like Ruha Benjamin and Safiya Umoja Noble have shown us this clearly: technology does not transcend the society that builds it. It encodes it. Benjamin's concept of the "New Jim Code" and Noble's research on algorithmic oppression demonstrate that the appearance of neutrality is itself a design choice, one that privileges some and erases others. Neutrality in an unequal system preserves the inequality.

  2. No Model Gets the Final Word. A model output is not truth; it is inference under uncertainty. Systems require epistemic humility. This means, as the sketch after this list illustrates:

  • Confidence levels that are surfaced, not buried.

  • Divergence across models that is examined, not ignored.

  • Human oversight preserved in high-stakes contexts.

  • Uncertainty made visible rather than invisible.
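Here is a minimal sketch of those commitments in code. The Inference shape, the present function, and the 0.8 floor are illustrative assumptions, not a prescribed API:

```python
from dataclasses import dataclass

@dataclass
class Inference:
    answer: str
    confidence: float  # assumed calibrated score in [0, 1]

def present(inferences: list[Inference], floor: float = 0.8) -> str:
    """Surface uncertainty instead of burying it."""
    answers = {i.answer for i in inferences}
    if len(answers) > 1:
        # Divergence across models is examined, not ignored:
        # disagreement is reported and escalated, not averaged away.
        summary = "; ".join(f"{i.answer} ({i.confidence:.0%})" for i in inferences)
        return f"Models disagree [{summary}] -- escalate to human review."
    best = max(inferences, key=lambda i: i.confidence)
    # Confidence levels are surfaced, not buried.
    label = "low confidence" if best.confidence < floor else "confidence"
    return f"{best.answer} ({label}: {best.confidence:.0%})"
```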

  3. Design for the Full Distribution, Not the Average User. The "average user" is a statistical artifact. Cognitive science demonstrates that intra-individual variability often exceeds inter-individual difference. Systems optimized for mean performance systematically fail those at the margins.

Designing for variability produces robustness. Designing for averages produces exclusion. When you design for the full distribution, you trigger the "Curb-Cut Effect": building for the margins ultimately makes the system more resilient and functional for everyone.
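One concrete way to honor the full distribution is in evaluation itself: report how the worst-served users experience the system, not just the mean. This is a minimal sketch; errors_by_user and the decile cut are illustrative assumptions:

```python
import statistics

def error_report(errors_by_user: dict[str, float]) -> dict[str, float]:
    """Report tail error alongside mean error. A system tuned only to
    the mean can look healthy while its worst-served decile fails."""
    errors = sorted(errors_by_user.values())
    return {
        "mean_error": statistics.fmean(errors),
        # 90th-percentile error: how the margins experience the system.
        "p90_error": statistics.quantiles(errors, n=10)[-1],
    }
```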

  4. If They Can't Read It, They Can't Fight It. Language structures participation. Complexity is often used as a hedge against accountability. If a system's logic cannot be translated for the person it impacts, that system is a black box by design, not by necessity. Complexity that serves the builder but obscures the impact is not a design constraint. It is a design choice. Accessibility is not aesthetic; it is structural.

  5. Their Data Is Not Yours. It's a Promise. Data is relational trust encoded in digital form. When a parent uploads their child's IEP (Individualized Education Program) to my platform, they are handing me something intimate, a document that holds a diagnosis, a set of goals, and a trajectory. That act is not a transaction. It is an extension of trust that took years of systemic disappointment to build.

Data minimization, access controls, and explicit usage constraints are architectural decisions that honor that trust. Trust cannot be retrofitted.
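A minimal sketch of what those architectural decisions can look like in code. IepGoalRecord, its fields, and the consent check are hypothetical names for illustration, not my platform's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IepGoalRecord:
    """Data minimization by construction: the record type itself
    cannot carry more than the feature needs."""
    child_alias: str  # pseudonym, never a legal name
    goal_text: str    # the one field the feature actually uses
    consented_uses: frozenset = frozenset({"progress_tracking"})

def access(record: IepGoalRecord, purpose: str) -> str:
    # Explicit usage constraint: consent is checked at every access,
    # not inferred once from the act of uploading.
    if purpose not in record.consented_uses:
        raise PermissionError(f"no recorded consent for: {purpose}")
    return record.goal_text
```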

  6. The Best Answer Is Sometimes "I Don't Know." Optimizing toward a response at the cost of truth is a systemic failure. In the era of generative AI, the refusal to admit ignorance leads to fabrication, the generation of false but plausible outputs to satisfy a prompt. Systems must be designed with explicit refusal thresholds. Capability without boundary scales harm.
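A minimal sketch of what an explicit refusal threshold can look like. The confidence signal and the 0.6 value are assumptions for illustration; in practice the threshold would be calibrated to the stakes of the deployment:

```python
REFUSAL_THRESHOLD = 0.6  # illustrative value; calibrate to deployment risk

def answer_or_refuse(draft_answer: str, confidence: float) -> str:
    """Refuse below the threshold instead of returning the most
    plausible-sounding output. The confidence score is assumed to come
    from the surrounding system, e.g. a calibrated verifier."""
    if confidence < REFUSAL_THRESHOLD:
        return "I don't know enough to answer this reliably."
    return draft_answer
```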

The industry calls these fabrications "hallucinations." I avoid the term, given my work with neurodivergent people and with people who have mental health needs. To be clear, a hallucination is a real neurological experience, one that affects people living with schizophrenia, psychosis, epilepsy, and dementia. In a document that centers cognitive variability, I refuse to borrow from disability to name a design limitation.

Calling a model's output a "hallucination" implies something went wrong, a glitch, a break from an otherwise sound mind. Nothing went wrong. The model did exactly what it was trained to do: predict the next most statistically probable token. When it lacked sufficient signal, it didn't stop. It generated the most plausible-sounding answer anyway. That's not a malfunction. That's the architecture.

"Hallucination" pathologizes a real human experience and mystifies a predictable engineering outcome in a single word.

  7. Scale Multiplies Consequence. Scaling a flawed system does not democratize it; it industrializes its errors. Before deployment, we must ask: Who bears risk if this fails? Impact is not measured in downloads. It is measured in trajectory shifts.

  8. AI Should Expand Possibility, Not Inherit Limits. Prediction without imagination becomes prophecy. Systems that rely solely on historical precedent risk encoding structural ceilings into future opportunity. Ethical AI must create room for deviation from the past, especially for those historically mispredicted by it.

People before prediction. Variability before averages. Accountability before scale.

The method behind these principles: “Before You Deploy: Eight Questions for AI Builders”

This is a living document. I came to this work through a waiting room, holding a diagnosis that was supposed to define my daughter's future. It didn't. Because someone refused to accept the prediction.
