Beyond Autism: The Alienation Machine
How AI is redefining normality and reshaping humanity
In the paper "Automating Autism," Keyes argues that AI systems are quietly filtering out autistic voices, constructing them as invalid, as noise. But the same logic applies to anyone who doesn't fit the patterns in the training data.
Think about something many of you have experienced or will experience: applying for a software engineering job. You submit your resume. It goes into an applicant tracking system. A machine learning model scores it. You never get a callback. You don't know why. There's no explanation, no appeals process, no human who looked at your name. You were filtered out by an algorithm trained on historical hiring data — data that encodes what "a good software engineer" looked like in the past.
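To make the mechanism concrete, here is a deliberately toy sketch of how such a screener can work. Everything in it is an illustrative assumption, not any real vendor's algorithm: the "model" just learns word frequencies from past hires and scores new resumes by how much of that historical template they cover. Even this crude version reproduces the dynamic described above: a resume that matches the past scores high, and an unconventional one scores zero.

```python
# Toy sketch of an applicant-tracking-system resume scorer.
# All data and the scoring rule are illustrative assumptions.

from collections import Counter

# "Historical hiring data": resumes of past hires the model learns from.
PAST_HIRES = [
    "java spring microservices agile scrum bachelor computer science",
    "python java aws agile bachelor computer science internship",
    "java aws microservices scrum bachelor computer science",
]

def build_profile(resumes):
    """Term frequencies across past hires: the statistical 'template'
    of what a good engineer looked like in the past."""
    counts = Counter()
    for text in resumes:
        counts.update(set(text.split()))  # count each term once per resume
    return counts

def score(resume, profile):
    """Fraction of the profile's total weight covered by the resume's
    terms (0.0 to 1.0). Terms outside the template contribute nothing."""
    total = sum(profile.values())
    hit = sum(profile[w] for w in set(resume.split()) if w in profile)
    return hit / total

profile = build_profile(PAST_HIRES)

conventional = "java aws microservices agile scrum bachelor computer science"
unconventional = "self taught rust game modding gap year open source maintainer"

print(round(score(conventional, profile), 2))    # → 0.87
print(round(score(unconventional, profile), 2))  # → 0.0
```

Note what the sketch makes visible: the unconventional candidate isn't judged bad, they are simply invisible. None of their terms exist in the template, so the model has no way to register them as signal at all.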
But here's where it gets worse. Once you know the machine is screening you, what do you do? You optimize. You rewrite your resume to match the patterns the algorithm rewards. You strip out anything unusual — the creative side project, the unconventional career path, the gap year. You start writing like the machine wants you to write. You start presenting yourself as the machine's idea of a good engineer.
And it's not just resumes. If AI systems increasingly determine who gets hired, who gets promoted, who gets published, who gets diagnosed — then everyone, not just autistic people, faces pressure to conform to whatever the algorithm defines as "normal." You learn to speak in ways the machine recognizes. You learn to think in ways the machine rewards. You reshape yourself into the template the training data created.
This is something more disturbing than filtering. The machine is no longer just deciding who counts as human. It is compelling everyone to become the human it counts. It is not just excluding the outliers — it is eliminating the possibility of being an outlier.
You may recognize this structure. This is alienation — but a new form of it. Workers were once alienated from the products of their labor. Now we are being alienated from ourselves — reshaped in the image of a statistical model that was trained on a past we may not want to reproduce.
And this circles back to Keyes's core argument. Autistic people are the canary in the coal mine. They are the first to be filtered, the first to be declared invalid, the first to lose their voice. But the logic doesn't stop with them. If we allow AI systems to define what counts as valid communication, valid knowledge, valid personhood — without perpetual critical scrutiny — then that definition will eventually close around all of us.
Finally, if AI decision systems become dominant in social institutions, we may reach a point where human society no longer needs real humans. It may only require machine-like humans who perfectly conform to algorithmic expectations, or simply machines themselves. In such a system, real humans may increasingly be counted as "invalid."