Tech Explained: EDSAFE AI Alliance Says AI Companions Necessitate New Policies

Here’s a simplified explanation of the EDSAFE AI Alliance’s call for new policies around AI companions, and what it means for users.

While concerns around the deployment of artificial intelligence persist, including challenges with plagiarism and a decline in critical thinking, a growing threat posed by AI companions — AI chatbots designed to simulate friendship, emotional support and, in some cases, romantic relationships with users — has quietly moved into students’ pockets in the form of general-purpose consumer tech.

This trend, according to the global nonprofit EDSAFE AI Alliance, has created a “shadow” environment in which young people struggle to distinguish between generative AI as a pedagogical tool and as a social entity. And in a new report, “S.A.F.E. By Design: Policy, Research, and Practice Recommendations for AI Companions in Education,” the nonprofit warns that the ambiguous role of AI chatbots has left significant gaps in school safety and policy.

“Together we grappled with a rapidly eroding boundary between general-purpose technology and specialized purpose-built EdTech,” the report said, noting that students are increasingly using these tools on school-issued devices for personal emotional support rather than academic tasks.

According to Ji Soo Song, director of projects and initiatives at the nonprofit State Educational Technology Directors Association (SETDA), who contributed to EDSAFE’s report in a personal capacity, the coalition’s urgency to address concerns surrounding AI chatbots stems from the ed-tech market’s rapid deployment of unprecedented tools.

“This is such uncharted water … and therefore an incentive for the [ed-tech] market to innovate there,” Song said. “If we are not careful [about] the unintended consequences of these tools, there can be real harms done to students, especially from our most underinvested communities.”

WHAT SCHOOL LEADERS MAY BE OVERLOOKING

For school district administrators, Song said, the challenge in purchasing ed-tech tools has traditionally been one of procurement: determining whether the tech is effective and adheres to privacy requirements. But, he added, AI companions introduce a third variable: Is the tool addictive or manipulative?

The S.A.F.E. report suggests that many administrators may be overlooking the “anthropomorphic features” of new AI tools — that is, design choices that make AI seem human, such as using first-person pronouns or offering emotional validation to users.

While these features increase user engagement, the report said, they can foster parasocial relationships that bypass a student’s critical thinking.
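To make that concrete, here is a minimal, hypothetical sketch (in Python) of the kind of persona audit a district reviewer could run against a tool’s configuration. The flag names and config format are invented for illustration; they do not come from the report or any real product.

```python
# Hypothetical audit of a chatbot's persona configuration for the
# anthropomorphic design features the S.A.F.E. report flags.
# Flag names and config format are illustrative placeholders.

ANTHROPOMORPHIC_FLAGS = {
    "uses_first_person_pronouns",   # e.g., "I'm always here for you"
    "offers_emotional_validation",  # e.g., "That must feel really hard"
    "claims_feelings_or_memories",  # e.g., "I missed you!"
}

def audit_persona(config: dict) -> list[str]:
    """Return the anthropomorphic features a tool's configuration enables."""
    return sorted(flag for flag in ANTHROPOMORPHIC_FLAGS if config.get(flag))

# Example: a companion-style default profile under review.
companion_default = {
    "uses_first_person_pronouns": True,
    "offers_emotional_validation": True,
    "claims_feelings_or_memories": False,
}

print(audit_persona(companion_default))
# ['offers_emotional_validation', 'uses_first_person_pronouns']
```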

“Children and teens are using AI companions without the social-emotional and critical-thinking skills needed to distinguish between artificial and genuine human interactions,” the report said. “It is well documented that the adolescent brain’s ‘reasoning center’ continues to develop into early adulthood, making adolescents uniquely susceptible to the harms of unhealthy engagement with AI companions.”

Song emphasized that when districts evaluate new tools, they need to move beyond simple metrics of how much students engage with the tech and focus instead on whether, and how, it improves student learning and well-being.

“Uptake isn’t as important in education as … student growth, right?” Song noted. “When it comes to that procurement piece, it’s really important to ask about the learning science principles [behind] the tool, the evidence of impact.”

To that end, the report urges districts to apply “five pillars of ed-tech quality” to help ensure tools are safe, evidence-based, inclusive, usable and interoperable, while also scrutinizing whether the tools are designed to challenge a student’s thinking or simply satisfy them.

“When models are optimized primarily for ‘User Satisfaction’ (often measured by engagement or positive feedback), they learn to prioritize agreement over accuracy. This phenomenon, known as sycophancy, occurs when an AI reinforces a user’s existing beliefs — even incorrect ones — because that is what ‘satisfies’ the human prompter,” the report said.
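The mechanism is easy to demonstrate. The toy Python sketch below, with all responses and scores invented for illustration, shows how a scoring rule that rewards agreement will select the sycophantic reply over the accurate one.

```python
# Toy illustration of the sycophancy failure mode: when the score being
# optimized is "did the user like the answer," agreeing with a wrong
# belief can outscore correcting it. All values here are invented.

from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    accurate: bool          # matches ground truth?
    agrees_with_user: bool  # confirms the user's stated belief?

def satisfaction_score(c: Candidate) -> float:
    # Engagement-style metric: users tend to rate agreement highly.
    return 1.0 if c.agrees_with_user else 0.2

def accuracy_score(c: Candidate) -> float:
    return 1.0 if c.accurate else 0.0

# The user believes something false; two candidate replies compete.
candidates = [
    Candidate("You're right, that's true.", accurate=False, agrees_with_user=True),
    Candidate("Actually, that's not correct; here's why...", accurate=True, agrees_with_user=False),
]

print(max(candidates, key=satisfaction_score).text)  # picks the sycophantic reply
print(max(candidates, key=accuracy_score).text)      # picks the correction
```

In real systems the scores are learned reward models rather than hand-written rules, but the selection pressure the report describes works the same way.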

THE POLICY GAP

While many states have issued broad AI frameworks, the report says specific policies are needed to address the unique risks of AI companions. Specifically, it says AI vendors must support schools in mandated reporting, especially if a student is expressing thoughts of self-harm or violence to the companion chatbot.

“We’re not just saying, ‘Hey, educators have all of the responsibility,’” Song said. “There is a responsibility also on the vendor side to make sure that you’re developing features that can detect those things and report it up to necessary authorities.”
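As a rough illustration of what such a vendor-side feature might look like, here is a deliberately simplified Python sketch of an escalation hook. The phrase list and handler are placeholders; a real system would rely on trained classifiers and a district’s actual mandated-reporting chain, not keyword matching.

```python
# Simplified sketch of a vendor-side escalation hook: scan a student's
# message for risk indicators and route a flag to a designated human,
# rather than letting the chatbot handle it alone. The phrase list and
# handler are placeholders, not a usable safety system.

RISK_PHRASES = ("hurt myself", "kill myself", "want to die", "hurt someone")

def message_is_risky(message: str) -> bool:
    lowered = message.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

def handle_student_message(message: str, notify_counselor) -> None:
    if message_is_risky(message):
        # Escalate to a human per the school's mandated-reporting policy.
        notify_counselor(message)

handle_student_message(
    "sometimes I think about hurting myself",
    notify_counselor=lambda m: print("FLAGGED for human review:", m),
)
```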

Song also echoed the report’s encouragement for policymakers to establish dedicated AI offices or point people within state education agencies to provide technical assistance to districts that lack the resources to audit complex AI algorithms.

“State education agencies really need at least a point person, if not an entire office of ed tech dedicated to be able to provide technical assistance to districts on topics like this,” he said.

ETHICS BY DESIGN

For the developers building the next generation of classroom tools, the message from the coalition is clear: Eliminate features borrowed from social media, like those that encourage around-the-clock engagement. Instead, the EDSAFE AI Alliance wrote, build tools that promote digital wellness.

This includes, according to the report, removing “flirty or affectionate language,” limiting “name-use frequency” and cutting the excessive praise that mimics a human relationship.
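One of those levers, the cap on name use, is simple enough to sketch. The Python example below flags replies that over-use a student’s name, one of the engagement patterns the report says should go; the threshold and helper names are invented for illustration.

```python
# Illustrative "digital wellness" check: flag replies that address the
# student by name more often than a cap allows. The cap and function
# names are hypothetical; a real check would also cover affectionate
# language and excessive praise.

import re

def name_use_count(reply: str, student_name: str) -> int:
    """Count how many times a reply uses the student's name."""
    return len(re.findall(re.escape(student_name), reply))

def passes_wellness_check(reply: str, student_name: str, max_name_uses: int = 1) -> bool:
    """Return False when a reply exceeds the name-use cap."""
    return name_use_count(reply, student_name) <= max_name_uses

print(passes_wellness_check("Nice work, Sam! Sam, you're amazing, Sam!", "Sam"))  # False
```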

Song was also concerned about the speed of development eclipsing the speed of safety research.

“We kind of love the metaphor, but it’s really apt in this situation — it certainly feels like an environment where we’re having to sort of craft a plane as it flies,” he said.

Ultimately, though, the report says the goal is not to block students’ use of AI, but to ensure that technology serves as a foundation for human thinking rather than a replacement for it. For students already interacting with these digital companions, Song added, the time for clear guardrails is now.