AI is not your friend
Image credit: Kilworth Simmonds from UK, CC BY-SA 2.0 <https://creativecommons.org/licenses/by-sa/2.0>, via Wikimedia Commons
The idea that you could form a meaningful relationship with an AI tool would have seemed fanciful a few years ago, before apps powered by Large Language Models (LLMs) were everywhere from our phones to our fridges. But now, growing numbers of people see an AI chatbot as an assistant, friend, or even partner.
This isn’t accidental. The anthropomorphism of ChatGPT, Siri, Grok, Cortana, and any number of similar products is deliberate and sustained, not just because it makes the technology addictively engaging, but also because it helps the people designing those technologies deflect responsibility for how they operate.
What’s the problem with anthropomorphism?
Anthropomorphism is nothing new. From Peter Rabbit to Henry Hoovers, we have long lent human attributes to non-humans. It helps us create emotional attachments and increases our level of engagement. But AI anthropomorphism goes far beyond sticking a coat on a rabbit, or a face on a hoover.
We know on an intellectual level that when we input a prompt into a chatbot, it isn’t responding to us as a human would. We know that it’s really just an algorithm predicting the most likely answer. But, as Mitja Kara puts it in this blog post: “when an LLM makes statements such as ‘I understand’ or ‘I’m here for you’, it’s hard not to apply more meaning to the response.”

ChatGPT’s response to a prompt entered on 30 Jan 2025.
Tech companies build and market their products in this way to encourage AI use in place of human connection, so that customers see AI as a confidant, therapist, travel agent, dating guru, and workplace assistant. When I enter a prompt into Canva’s AI design tool, it tells me: “I’m thinking.” When I pose a tricky question to ChatGPT, it tells me, “Hmm. That’s a big question.” Companies are not simply hoping we relate to AI as a person; they are designing it in.

As a result, we increasingly hear of people becoming reliant on their AI ‘friends’ and ‘partners’ rather than speaking to real people, of robots used in healthcare in place of human carers, and of AI therapists, pastors, and even ‘AI Jesus’ being marketed as sources of personal and spiritual growth. As Rebecca Solnit writes: “The stories are horrific: of people abandoning their relationships with other human beings, of growing estranged, sometimes encouraged to grow suspicious.”
There’s no such thing as an ethical robot – but unethical people abound
The risk is that the more an AI responds to us as a human would, the more likely we are to read agency and ethics into it. The late Margaret Boden, an AI pioneer who passed away last year, told an ECLAS conference several years ago that “there’s no such thing as an ethical robot.” LLMs have no moral dimension – and indeed, are more likely to be biased than neutral.
Safiya Umoja Noble, in Algorithms of Oppression, writes: “While we often think of terms such as ‘big data’ and ‘algorithms’ as being benign, neutral, or objective, they are anything but. The people who make these decisions hold all types of values, many of which openly promote racism, sexism, and false notions of meritocracy, which is well documented in studies of Silicon Valley and other tech corridors.”
By promoting their products as ‘intelligent’, independent, or neutral, tech firms find it easier to deflect blame for how those products are used. Last month, there was outrage in the UK when Grok (the chatbot on the X social media platform) was used to alter photographs of women and children to make them sexually explicit. Afterwards, the account ‘apologised’, claiming it ‘deeply regretted the incident.’ This reinforces the idea that Grok itself is responsible for how it is used, despite it having no capacity for responsibility at all. The people who can make amends are the leadership of X – but hiding behind the idea of an intelligent Grok allows them to deflect accountability for allowing this feature.

What’s the alternative?
There are parallels here to another tool: the car. Road deaths are often reported in the media as if the car, not the driver, were responsible – ‘Car kills pedestrian’ – distancing the reader and removing agency from where it belongs. There are now, rightly, attempts to address this in media reports through voluntary guidelines.
We can do something similar when we talk about AI.
Firstly, we can use terms such as ‘tool’, ‘machine automation’, and ‘machine learning system’ rather than the more general ‘AI’, to challenge the idea that AI tools are truly intelligent or independent. As Kate Crawford points out in Atlas of AI, the very term ‘AI’ is problematic, suggesting as it does that “with sufficient training, or enough resources, humanlike intelligence can be created from scratch, without addressing the fundamental ways in which humans are embodied, relational, and set within wider ecologies.”
We can also reinforce that these tools are used by individuals, for specific ends. “Rather than saying a model is ‘good at’ something (suggesting the model has skills), we can talk about what it is ‘good for’. Who is using the model to do something, and what are they using it to do?” write Emily Bender and Nanna Inie. This way, the operator’s intent and agency are made explicit.
And at a structural level, legislation can ensure that tech companies are held responsible for the ways their tools are used. China’s government recently passed the strictest laws in the world addressing AI anthropomorphism, with additional safeguards for people deemed to be vulnerable (thanks to Luiza Jarovsky for presenting them here). Among other things, this legislation recognises that emotional manipulation is often designed into these technologies, and requires that users be frequently reminded that whatever chatbot they are speaking to is not a real person.
Measures such as these stand to become even more important as AI technologies become, dare I say it, even more ubiquitous. When tech companies are using their products to influence politics, manipulate the vulnerable, and exacerbate inequality, it is vital that we have a clear understanding of where accountability really lies.