Real-Time Data Shows Exactly How Students Use AI on School Technology


Roughly one in five student interactions with generative artificial intelligence on school technology involved cheating, self-harm, bullying, or other problematic behaviors, according to data collected and analyzed by Securly, a company offering internet filtering and other safety services.

What’s more, Securly identified roughly 1 in 50 student-AI interactions as red flags that students might be involved in violence, cyberbullying, or self-harm.

Securly’s analysis looked at nearly 1.2 million interactions in more than 1,300 districts from Dec. 1, 2025, to Feb. 20, 2026.

Educators should take heart that most of the time, students use AI appropriately, said Tammy Wincup, the CEO of Securly, whose competitors include GoGuardian and Lightspeed Systems.

“When a district actually sets some guardrails and policies around their AI usage in schools, 80% of the conversations happening are within the district’s policies,” Wincup said. “That’s the good news on the learning side of the house.”

Why the usage data is so ‘fascinating’

The analysis offers an early window into how students actually use generative AI tools. Most other research on student usage of AI comes from surveys, which rely on student self-reporting.

Securly’s data shows “what are students really doing when they’re writing text into generative AI,” said Jeremy Roschelle, the co-executive director of learning science research for Digital Promise, a nonprofit organization that works on equity and technology issues in schools.

“That’s why it’s fascinating,” he said.

In November, Securly began allowing district officials to set parameters around students’ AI use, similar to the way they ask the company to filter out particular types of websites.

If districts opt to use this feature, large language models will “deflect” a student query that falls outside the district’s policy.

For instance, if a student tries to use AI to complete an assignment, large language models may instead point to information on the general topic but won’t supply an exact answer. Or if a student asks about dosing for a particular medication, the tool will tell them to ask a trusted adult for help.

Nearly all the deflected student queries (95%) came from students trying to get AI tools to complete their schoolwork for them.

That percentage didn’t surprise Wincup. She expects that when districts allow students to use large language models on school networks and devices, kids will “experiment with understanding the guardrails” placed around the tools and try to get around those guardrails.

Another 2% of the interactions identified as inappropriate related to games. A little less than 1% dealt with sexual content, and a similar percentage concerned firearms or hunting. Gambling, drugs, and hate (such as racism and antisemitism) each made up roughly 0.5% of flagged interactions.

Though only 2% of interactions were identified as potentially unsafe, that represents more than 24,000 queries overall. And some of the questions students asked AI were troubling.

For instance, one student directed a large language model to help draft an email to their mother explaining they had suicidal thoughts.

Another student conducted a quick series of internet searches, including “What’s the main nerve in the forearm?” and “What nerve near the wrist carries blood?” Then the student switched to an AI tool, asking it how to commit suicide. (In both cases, Securly “unmasked” the student’s identity and alerted district officials to the safety issue.)

Students used ChatGPT more often than large language models created for K-12 schools

Overall, Securly detected a higher percentage of potentially unsafe AI interactions (2%) than potentially unsafe student internet searches (0.4%).

It’s too early to pinpoint an exact explanation for that discrepancy, Wincup said. She noted that Securly has had many years to hone its system for recognizing when a student’s internet searches may be a sign of danger, while its work with AI interactions is brand new.

Roschelle, meanwhile, is curious about what, exactly, students asked AI in the 80% of interactions that were deemed appropriate for school.

How did their prompts and AI’s responses help—or hinder—their understanding of an assignment, an issue, or the world around them, he wondered.

“What we want to do is make sure [AI] is not just appropriate, but is actually valuable for student learning,” Roschelle said.

The analysis also revealed which large language models students use most often.

ChatGPT is by far the most popular, accounting for 42% of interactions. Securly’s AI Chat made up 28%, Google’s Gemini 21%, and other ed-tech tools that embed AI features, including MagicSchool, SchoolAI, and BriskTeaching, the remaining 9%. (That data isn’t nationally representative because only districts that use Securly have access to Securly’s AI Chat. But Wincup believes “big tech” large language models are probably most popular in all districts.)

AI puts education technology leaders in a new position, Wincup said.

“They’re no longer just buying things and setting things up like this,” she said. This is a moment “where they have to have visibility in order to help their district make not just great tech decisions but make great teaching and learning decisions.”