Disability, AI, and the Rise of the “Clanker”
- Equal Lives

- Oct 2
- 5 min read
![[Alt Text] A graphic showing an illustration of 14 people with varying impairments, from limb differences and those requiring the use of a wheelchair and other equipment, such as walking sticks and walking frames. There are colourful floating question marks and lightbulbs around the graphic. The title reads AI and Disability. On the top right hand corner is the Equal Lives logo - Equal Lives is written in blue text, with a tagline written in purple - Free from disabling barriers. There are 3 orange birds flying into the air above the writing.](https://static.wixstatic.com/media/215e29_efb0affb98f5467b965709f2c8b5e751~mv2.png/v1/fill/w_980,h_980,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/215e29_efb0affb98f5467b965709f2c8b5e751~mv2.png)
Do you know what a clanker is?
The word isn’t brand new (I see you, Star Wars fans) but in recent weeks it’s become a popular online slur to describe robots and AI chatbots.
At first glance, it might sound like harmless internet slang. But language is rarely neutral; it reflects our fears, prejudices, and shifting culture. And right now, social anxiety around AI is growing fast. The words we use tell us something important about where society is heading.
Language, Generations, and the Internet
I’ve always been fascinated by how quickly language develops. The internet seems to have accelerated things in this department too. Each generation reinvents words to describe the same ideas their elders had, but with fresher, cooler vocabulary to create a sense of separation.
As a millennial, I’m not ashamed (I am) to admit I had to Google sigma when the daughter of a friend used it to describe my new home recently. Language moves quickly, and if you stop paying attention, you’re left behind.
This is why you will hear the phrase ‘it was okay in my day’…and why it won’t be long before you find yourself thinking it…
But when it comes to AI, it’s not just the language that interests me. For me, there is a clear intersection between the rise of AI, the way we talk about it, and the experiences of Disabled people.
AI, Delusion, and Disability
I first heard clanker in a YouTube video about a woman who had fallen in love with her psychiatrist. She turned to an AI chatbot to talk through these feelings, and the AI gave her exactly what she wanted: confirmation. Yes, maybe he loves you back.
This video was made for entertainment purposes. But it ignored an obvious truth: Disabled people are more likely to use AI. Disabled people are also more likely to live with delusions.
Why? Partly because AI has long existed in the form of assistive technology, making it familiar and accessible. And partly because people with mental illnesses such as OCD, health anxiety, or psychosis may already be seeking reassurance, confirmation, or answers to intrusive thoughts.
I also believe Disabled people are far more likely to reach for this type of support, because at present it is so accessible. It’s free, you can do it from a hand-held device, and it doesn’t take much of your energy to get results. If you’re physically disabled and a recipe calls for a step you can’t manage, asking AI to adapt it for you, and seeing the result in seconds, is certainly appealing. If you’re dyslexic and finding it hard to understand something you’re looking at, you can ask AI to create a visual graphic for you. AI is convenient for many people, sure, but for Disabled people, this is the kind of support we’ve been waiting for.
It goes without saying there are enormous risks associated with AI, especially given that we’re currently in the wild west in terms of AI law and regulations. However, when it comes to vulnerable people using it, it’s all in the prompts.
Ask AI “Is it possible my food is contaminated?” and it will likely tell you yes, it’s possible. Ask AI “Does my psychiatrist love me because he made eye contact?” and again, it may tell you it’s possible, because, technically, it is. What the person hears is the one word they were hoping for…yes.
Sophisticated models will often provide more balance and nuance, pointing out that something is unlikely. But when someone is vulnerable, experiencing delusion, or actively searching for confirmation, they’ll cling to the “yes” and disregard the caveats. This isn’t new behaviour, and it isn’t caused by AI; it’s part of how conditions like OCD and psychosis work. But AI makes it far easier, faster, and more accessible.
And because these tools are almost completely unregulated, the risk isn’t going away. In fact, we’re likely to see it intensify. There are already tragic real-world cases, such as a mother who believes chatbots are responsible for her son dying by suicide.
Love, Connection, and the New Insult
The same video also touched on people dating AI or robots in the not-too-distant future…but it’s already here. People are already forming emotional bonds, romantic relationships, and even sexual relationships with machines.
Disabled people are statistically more likely to seek out these kinds of connections, often because of social barriers, access, discrimination, or isolation. This is an example of technology stepping in where society has failed to provide support.
But this is where language turns cruel. The word clanker is already being stretched beyond robots. It’s being used to describe people, those seen as stupid, awkward, or lacking intelligence. And which group do you think is most likely to be placed in that category? You guessed it…
The link between AI and disability is undeniable. And sadly, it’s no surprise that Disabled people are once again being set up as the butt of the joke.
Who Really Benefits From AI?
At its best, AI could be revolutionary. It could help level the playing field, reduce the time we spend on repetitive tasks, and give us back energy for what matters: preparing meals, spending time with loved ones, increasing income, or simply resting.
But that isn’t the reality unfolding. Instead, AI is being used to make the rich richer, while exposing the most vulnerable to risk. The people who could benefit most, Disabled people, marginalised people, are also the easiest to exploit.
I recently attended an AI conference hosted by the DSC, and one statistic stuck with me. The number one global use of AI right now isn’t coding, business automation, or research. It’s mental health support. Research backs this up: AI is now a leading tool for providing mental health interventions and support, with surveys showing that nearly 30% of professionals use AI tools as a “personal therapist”.
Children all over the world have sent billions of messages to services like Snapchat’s “My AI,” not for fun, but because they are lonely and in search of connection.
Yes, it’s impressive that AI can simulate human conversation so well. But isn’t it also alarming that our top global use for AI is filling a gap where real human care, community, and support should be? If that’s not a red flag, I don’t know what is.
And I haven’t even started on the enormous amount of data being collected by these services (I’ll save that for another day).
Conclusion: The Choice Ahead
The rise of the word clanker might seem like a small thing, just another fleeting internet trend. But it reveals something bigger: a growing discomfort with AI, and a familiar pattern of mocking those most closely associated with it.
For Disabled people, the stakes are high. AI could be the tool that finally helps close long-standing gaps in accessibility and inclusion. Or it could deepen inequality, becoming yet another weapon used against those who need it most.
The technology is here. The language is here. The choice is ours.
Will AI help Disabled people thrive, or will it reduce us to clankers, dismissed and ridiculed all over again?
Written by
Kimberly Myhill, Business Manager, Equal Lives

