Building safe, age-appropriate generative AI experiences for younger users


The following post is adapted from a keynote address delivered by Christy Abizaid, VP, Trust & Safety, Global Policy & Standards, at the “Growing Up in the Digital Age” Summit at Google Dublin on March 11.

Generative AI is unlocking new opportunities for learning, creativity and connection. As we develop this technology, we have a deep responsibility to do so in a way that is safe and beneficial for everyone, especially for younger users who are beginning to explore its potential.

At Google, our work is built on three essential pillars: protecting youth online, respecting families’ unique relationships with technology, and empowering youth to safely learn and explore online. As we build safer generative AI tools, we are committed to creating high-quality, privacy-protective and age-appropriate AI experiences that empower youth and safeguard their unique developmental needs.

Building a foundation of proactive protection

For over two decades, AI has powered core Google products, and our approach to safety has evolved alongside it. Our work is grounded in comprehensive policies that prohibit certain uses of our generative AI and restrict harmful content for minors. This includes clear prohibitions against content related to child sexual abuse, violent extremism, self-harm and non-consensual intimate imagery. We also maintain specific policies that restrict age-inappropriate content for minors, such as content that depicts or promotes disordered eating or dangerous exercise.

These policies are not just a reactive backstop; they are embedded throughout the entire development lifecycle. Safeguards are strategically implemented at every stage, from a user’s initial input to the model’s final output. We use specific classifiers to detect child safety-related queries and prevent harmful outputs. For example, some checks are designed to identify known CSAM, while others assess whether a query might violate our policies (including those designed specifically for teens), triggering either a block or a safer response. Our evaluations have shown, for instance, how Gemini 3 achieved specific gains in reducing sycophancy, resisting prompt injections, and improving protection against cyber misuse.
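The layered safeguards described above — classifiers on the user's input, and again on the model's output, triggering either a block or a safer response — can be sketched as a simple moderation pipeline. This is purely an illustrative sketch of the general pattern, not Google's implementation; every name here (`check_input`-style helpers, the marker strings, `moderated_generate`) is hypothetical.

```python
# Illustrative sketch of a layered moderation pipeline: classifiers run on
# the user's input before the model is called, and again on the model's
# output before it is returned. All names and checks are hypothetical
# stand-ins for real classifiers.

def matches_known_abuse(text: str) -> bool:
    # Placeholder for a hash-matching / classifier check for known
    # abusive material (input stage).
    return "known_abuse_marker" in text

def violates_teen_policy(text: str) -> bool:
    # Placeholder for policies restricting age-inappropriate content,
    # applied to both queries and model responses.
    return "age_inappropriate_marker" in text

INPUT_CHECKS = [matches_known_abuse, violates_teen_policy]
OUTPUT_CHECKS = [violates_teen_policy]

def moderated_generate(prompt: str, model=lambda p: f"response to: {p}") -> str:
    # Input stage: block clearly violating queries before the model runs.
    if any(check(prompt) for check in INPUT_CHECKS):
        return "[blocked: request violates policy]"
    # Output stage: re-check the model's response and fall back to a
    # safer response if it violates policy.
    response = model(prompt)
    if any(check(response) for check in OUTPUT_CHECKS):
        return "[safer response substituted]"
    return response
```

The point of the two stages is that a benign-looking query can still produce a violating response, so checks on the input alone are not enough; the output is screened independently.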

Conducting rigorous testing and responsible design

To ensure these protections are effective, we conduct rigorous testing and expert consultation. This includes adversarial testing and specialized youth safety evaluations designed to uncover new risks and vulnerabilities. (In 2025 alone, our Content Adversarial Red Team, or CART, completed more than 350 exercises spanning all major modalities, including text, audio, images, video and complex capabilities such as agentic AI.) Our comprehensive safeguards are developed by Google’s dedicated in-house specialists in continuous consultation with third-party child development experts. This multi-faceted approach ensures our safeguards are grounded in both technical expertise and a deep understanding of child psychology.

We recognize that younger users are especially vulnerable to forming strong emotional connections with generative AI systems. That’s why we’ve designed specific persona protections to prevent our models from engaging in harmful behaviors. This includes prohibiting explicit claims of sentience, simulating romantic relationships or flirtatious innuendos, or role-playing as harmful real-world or fictional characters. We complement this work by partnering with external experts; last year, we joined other technology companies in committing to Thorn’s Safety by Design principles, which focus on embedding protections against AI-facilitated child sexual abuse and exploitation.

Advancing safety and opportunity

Beyond preventing harm, our mission is to promote the good. We believe in advancing safety and access, empowering younger users to benefit from all that this new technology has to offer. This means supporting the development of AI literacy, critical thinking and self-discovery. We have published AI literacy resources for families, like our “Five Must-Knows for Getting Started with AI” video and a Family AI Conversation Guide, to encourage dialogue between parents and kids about using this technology responsibly.

To help both inside and outside the classroom, we’ve launched tools like Guided Learning in Gemini, which helps students build a deeper understanding of topics by breaking down problems and adapting explanations to their needs. Tools like this are designed to be conversational learning aids, helping younger users find the best resources on the web while using proven learning techniques.

As generative AI continues to evolve, we remain committed to this responsible approach. We will continue to build and refine our policies, safeguards and tools to deliver safer product experiences that empower younger users to explore, learn and benefit from the incredible potential of this technology.