Tech Explained: Does AI Really “Know” Anything? Why Our Words Matter More Than We Think
A new study explores how human-like language shapes the way we talk about artificial intelligence.
Think, know, understand, remember.
These are the kinds of mental verbs people use every day to describe what goes on in someone’s mind. But when the same words are applied to artificial intelligence, they can unintentionally make a computer system seem more human than it is.
“We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines – it helps us relate to them,” said Jo Mackiewicz, professor of English at Iowa State. “But at the same time, when we apply mental verbs to machines, there’s also a risk of blurring the line between what humans and AI can do.”
Mackiewicz and Jeanine Aune, teaching professor of English and director of the advanced communication program at Iowa State, are part of a research team that investigated how writers use anthropomorphizing language – or words that give human traits to non-human things – when describing AI systems. Their study was recently published in the journal Technical Communication Quarterly.
The team also included Matthew J. Baker, an associate professor of linguistics at Brigham Young University, and Jordan Smith, an assistant professor of English at the University of Northern Colorado. Both Baker and Smith are Iowa State University graduates.
How mental verbs can be misleading
Mackiewicz and Aune said mental verbs can be misleading in AI coverage because they imply a human-like inner experience. Terms such as “think,” “know,” “understand” and “want” can signal beliefs, desires or consciousness. AI systems, however, do not have these qualities. They produce responses by drawing on patterns in data rather than feelings or intentions.
They also warned that this language can overstate what AI can do. Phrases like “AI decided” or “ChatGPT knows” can make a tool sound more independent or intelligent than it really is, which may skew expectations about how safely or reliably it performs. Describing AI as if it has intentions can also shift attention away from the actual decision-makers: the people who build, train, deploy and supervise these systems.
“Certain anthropomorphic phrases may even stick in readers’ minds and can potentially shape public perception of AI in unhelpful ways,” Aune said.
Words on words
To measure how common this kind of wording is, the researchers turned to the News on the Web (NOW) corpus, a 20-billion-word-plus dataset that is continually updated with English-language news stories from 20 countries. They used it to track how often news writers connect anthropomorphizing mental verbs – like learns, means and knows – with the terms AI and ChatGPT.
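The team worked with the NOW corpus itself, but the basic idea of counting how often a subject term is paired with a mental verb can be illustrated with a minimal Python sketch. The sentence list, verb list and regular-expression matching below are assumptions made purely for demonstration, not the researchers' actual method or data.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for NOW-style news sentences.
sentences = [
    "AI needs large amounts of data to perform well.",
    "ChatGPT knows the answer, according to the headline.",
    "The company says its AI understands customer intent.",
    "AI needs to be trained on diverse examples.",
]

# Mental verbs of interest (third-person singular forms, as in "AI knows").
mental_verbs = ["thinks", "knows", "understands", "wants", "learns", "means", "needs"]

# Count how often each subject term is immediately followed by a mental verb.
subjects = ["AI", "ChatGPT"]
counts = Counter()
for sentence in sentences:
    for subject in subjects:
        for verb in mental_verbs:
            pattern = rf"\b{subject}\s+{verb}\b"
            counts[(subject, verb)] += len(re.findall(pattern, sentence))

for (subject, verb), n in counts.items():
    if n:
        print(f"{subject} + {verb}: {n}")
```

A real corpus study would also handle sentence boundaries, verb morphology and far larger volumes of text, and, as the findings below show, raw counts alone don't reveal whether a given pairing is actually anthropomorphic.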
The results, Mackiewicz and Aune said, surprised the research team.
In their analysis, the team identified three key findings:
1. The terms AI and ChatGPT are infrequently paired with mental verbs in news articles.
Mackiewicz noted that there is no single definitive comparison of anthropomorphism across spoken and written language, but existing research offers useful context. “Anthropomorphism has been shown to be common in everyday speech, but we found there’s far less usage in news writing,” she said.
Within the dataset, “needs” appeared most often alongside AI, with 661 instances. For ChatGPT, the most frequent pairing was “knows,” which appeared 32 times.
The researchers also pointed to Associated Press guidance that discourages linking human emotions to the capabilities of AI models, noting that these recommendations may have influenced how often news coverage used mental verbs with AI and ChatGPT in recent years.
2. When the terms AI and ChatGPT were paired with mental verbs, they weren’t necessarily anthropomorphized.
The research team’s analysis found that writers used the mental verb “needs,” for example, in two main ways when discussing AI. In many instances, “needs” simply described what AI requires to function, such as “AI needs large amounts of data” or “AI needs some human assistance.” These uses weren’t anthropomorphic because they treated AI the same way we talk about other non‑human systems – “the car needs gas” or “the soup needs salt.”
Second, writers sometimes used “needs” in a way that suggested an obligation to do or be something – “AI needs to be trained” or “AI needs to be implemented.” Aune said many of these instances were written in passive voice, which shifted responsibility back to humans, not AI.
3. Anthropomorphization with mental verbs exists on a spectrum.
Mackiewicz and Aune said the research team also discovered there were times the usage of “needs” edged into more human‑like territory. Some sentences – “AI needs to understand the real world,” for example – implied expectations or qualities associated with people, such as fairness, ethics or a personal understanding of the world we live in.
“These instances showed that anthropomorphizing isn’t all‑or‑nothing and instead exists on a spectrum,” Aune said.
Writing the future
“Overall, our analysis shows that anthropomorphization of AI in news writing is far less common – and far more nuanced – than we might think,” Mackiewicz said. “Even the instances that did anthropomorphize AI varied widely in strength.”
The study’s findings, Mackiewicz and Aune said, underscore the importance of looking beyond surface-level verb counts and considering how meaning comes from context.
“For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them,” Mackiewicz said.
“Our findings can help technical and professional communication practitioners reflect on how they think about AI technologies as tools in their writing process and how they write about AI,” the research team wrote in the published study.
And as AI technologies continue to evolve, writers will continually need to consider how word choices may frame those technologies, Mackiewicz and Aune said.
Future research, the team concluded, “could examine the anthropomorphizing impact of different words and their senses” and “look at whether or not infrequent usage has an outsized effect on how people, including news writers and other professional communicators, think about AI.”
Reference: “Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT” by Jeanine Elise Aune, Matthew J. Baker, Jo Mackiewicz and Jordan Smith, 29 November 2025, Technical Communication Quarterly.
DOI: 10.1080/10572252.2025.2593840
