All The Words We Cannot Use: Why Talking About AI Feels Impossible
How Borrowed Language Warps Our Understanding of AI—and Why We Need New Words
One of the biggest challenges in talking about AI is that we don’t have the right words for it. We keep borrowing language from human relationships—conversation, collaboration, even trust—trying to make sense of what happens when a person interacts with AI. But these words were never meant for machines. They stretch, they warp, and ultimately, they fail to capture what’s really going on.
I have written before that we have no choice but to use these imperfect analogies to help users, especially new ones, understand that they are not working with traditional technology. I have even argued that new users should anthropomorphize AI, to an extent. Approaching AI like a human means you won’t approach it like your toaster oven, and that’s a good thing. We don’t want people “pressing the button” and walking away.
On the flip side, when we try to describe AI in purely technological terms, the language falls just as flat. We are left grasping for words that don’t quite fit, struggling to name something we barely understand.
In human-to-human interactions, words like conversation, persuasion, and trust carry weight because they assume intent, agency, and mutual understanding. When we say someone is persuading us, we assume they have a goal, a reason, and an awareness of how we think. But what happens when an AI generates a response that shifts our thinking? It has no intent, no goals, no awareness—yet it can still nudge us in new directions. Further, it is “processing vast amounts of data and tailoring content to individual susceptibilities,” as philosopher Luciano Floridi wrote when he coined the term hypersuasion to describe how large language models (LLMs) influence human users. The personalization of AI’s suggestions creates an environment where the persuadable are increasingly disempowered. The term hypersuasion gives us the vocabulary to talk about the phenomenon, but in most other areas of human language, we lack the words to describe similar developments.
In the context of the debate around anthropomorphizing AI, a new vocabulary would allow us to develop a “middle ground” between treating AI like pure technology and treating it like a friend or a collaborative colleague. It would create an entirely new landscape on which philosophers, technologists, writers, and mathematicians could come together to align on the best approach to working with (and talking about) AI.
Easier said than done, you are likely thinking. And you are right. I tried crafting some new words to describe my interactions with AI last summer, only to fall woefully short.
Rob Nelson’s “On Confabulation” reminded me of this stalled project. His deep dive into the term helps us understand why “confabulation” is a better word than “hallucination” to describe AI’s fabrications.
Another example: when we talk about AI decision-making, we risk overestimating its autonomy. AI does not “decide” as humans do: it follows statistical patterns and optimization functions. Yet companies market AI as if it thinks, and policymakers debate its ethics as if it has moral reasoning. The language we use is not only inaccurate; it actively misleads us into attributing abilities, responsibilities, and risks to AI that don’t exist in the way we assume.
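To make that concrete, here is a minimal Python sketch (the candidate words and scores are invented for illustration) of what an LLM “decision” amounts to under the hood: sampling from a probability distribution, not weighing reasons or intentions.

```python
import math
import random

# Illustrative sketch only: an LLM's "decision" is not deliberation.
# It is sampling from a probability distribution over candidate tokens.

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next words and the scores a model might assign.
candidates = ["yes", "no", "maybe"]
logits = [2.1, 1.9, 0.3]

probs = softmax(logits)
choice = random.choices(candidates, weights=probs, k=1)[0]
print(dict(zip(candidates, [round(p, 2) for p in probs])), "->", choice)
```

Run it a few times and the “choice” changes, which is exactly the point: there is no deciding mind here, only weighted chance over patterns learned from data.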
If we want to develop a clearer, more honest understanding of AI, we need a new vocabulary—one that neither humanizes nor reduces AI to cold machinery, but accurately captures the strange, in-between nature of these interactions. The words we reach for shape the way we think, and until we find better ones, we will keep misunderstanding what AI is—and what it is not.
Which Words Don’t Apply?
Here is a list of 10 “Human-Human interaction” words that sometimes seem to apply to Human-AI interactions but miss the mark, either by a wide margin or in subtle ways. Using them is dangerous because it doesn’t accurately convey what is happening “under the hood” of AI and can mislead people into believing things about AI that are not true. Feel free to leave your feedback and ideas in the comments.
1. Empathy
Human-Human: Empathy involves understanding and sharing the feelings of others. It's an essential part of human relationships that helps build trust and emotional connection.
Human-AI: AI cannot truly feel or understand emotions, so it cannot experience or express empathy, even though it often seems as if it understands us. While AI can be designed to respond to emotions or simulate empathy through natural language, it lacks the genuine emotional depth that comes with human empathy. Therefore, the term doesn’t fully apply in the AI context.
2. Trust
Human-Human: Trust in a human context is built over time through shared experiences, emotional reliability, and moral integrity.
Human-AI: Trust in AI is more about functional reliability, transparency, and consistency of outcomes, not about emotional or ethical integrity. You don't trust an AI in the emotional sense, but you trust its performance or algorithmic accuracy. The meaning of trust shifts from emotional to technical.
3. Collaboration (in its traditional sense)
Human-Human: Collaboration implies a mutual, dynamic process where humans contribute ideas, emotions, and experiences, building on each other’s strengths and compensating for weaknesses.
Human-AI: AI doesn’t “contribute” in the same way as a human; it processes data and follows instructions. There’s no mutual give-and-take of ideas, creativity, or subjective experiences. Human-AI interactions may feel more like coordination or augmentation than genuine collaboration.
4. Sympathy
Human-Human: Sympathy involves acknowledging and caring about the suffering or emotions of others, often leading to emotional support.
Human-AI: Sympathy doesn’t apply because AI lacks emotions and cannot truly acknowledge or care about human suffering. Even though AI can simulate sympathetic responses in language, it is ultimately an algorithmic function, not genuine emotional understanding.
5. Betrayal
Human-Human: Betrayal involves breaking trust, loyalty, or expectations in a relationship, often involving emotions like deceit, dishonesty, or malice.
Human-AI: While AI can malfunction or produce unintended outcomes, it doesn’t “betray” in the emotional or moral sense. There is no intent or malice behind AI actions, so the concept of betrayal doesn’t apply as it would between humans.
6. Compassion
Human-Human: Compassion involves recognizing another person's suffering and taking action to alleviate it, often motivated by emotional connection.
Human-AI: AI can’t feel or act out of compassion. While AI may be programmed to assist in ways that reduce human suffering (e.g., healthcare bots), the lack of emotional awareness or intention behind these actions means the word doesn’t apply in the same way.
7. Deception
Human-Human: Deception between humans typically involves intentionally misleading someone to gain an advantage, rooted in moral and ethical choices.
Human-AI: While an AI may produce incorrect information, its actions are not intentional, nor does it have the capability for deceit in the moral sense. Mistakes or biases may arise from the data or algorithms, but they are not acts of deception.
8. Mentorship
Human-Human: Mentorship is a relationship where an experienced person guides and supports a less experienced individual through emotional, professional, or personal development.
Human-AI: While AI can offer guidance or suggestions (e.g., educational tools), it cannot engage in mentorship. Mentorship requires understanding individual aspirations, emotions, and nuances that go beyond data processing.
9. Collusion
Human-Human: Collusion refers to a secret agreement between people, usually for deceitful or illegal purposes.
Human-AI: AI cannot form conspiracies or secret agreements. Although AI can be misused by humans for unethical purposes, the AI itself is not capable of intentional collusion.
10. Sacrifice
Human-Human: Sacrifice involves giving up something valuable for the benefit of others or for a greater cause.
Human-AI: AI doesn’t make choices or sacrifices in the emotional or moral sense. Its operations are based on programming and algorithms, devoid of personal or emotional stakes.
Possible New Words to Describe Human-AI Interactions
I am not smart enough to craft new vocabulary out of thin air. However, for amusement, I used ChatGPT to brainstorm a list of words that might replace some of the ones above when describing Human-AI interactions. I present to you the confabulated term “cognidoubt” as a replacement for the concept of “trust.”
Cognidoubt: (noun) Combining cognition and doubt, this term embodies the idea that trust in AI is built through intellectual assessment, but doubt remains a rational companion to this trust. It suggests that no matter how intelligent AI may seem, some doubt must always be applied to its conclusions.
What do you think? Does it fit? Would you use it?
Share your review in the comments below.
Submit a Word
So, let’s fix this. If the words we have don’t work, let’s invent better ones. If hypersuasion can capture AI’s ability to persuade through personalized, data-driven strategies, what other words do we need to describe the weird, slippery nature of AI interaction? What do we call that eerie moment when AI seems to "get" us but obviously doesn’t? Or the way it makes a suggestion that feels original but is just a remix of a remix? If you’ve got a word for one of these phenomena—or if you want to take a stab at naming something we haven’t even articulated yet—drop it in this Google Form. Best submissions get featured in a follow-up post, and who knows? Maybe we’ll actually create the vocabulary that we humans need and deserve when it comes to working with AI.