How Ukraine regulates AI in education during the Russian invasion

In the fast-moving landscape of higher education policy, the debate often oscillates between two extremes: the “wait and see” approach of cautious legislators and the “move fast and break things” ethos of technological evangelists.

However, for a nation where the “fog of war” is a daily reality, the luxury of waiting has evaporated. By the time a formal statute is debated and passed, the technology has already pivoted from text to multimodal video and autonomous agents. Ukraine does not yet have a formal law on AI, but it has something perhaps more effective for the moment: a resilient soft law ecosystem.

Ukraine’s experience in governing AI in education offers a compelling blueprint for the global sector – not as a dry manifesto, but as a practical, tiered playbook designed for agility. Central to this experience are two landmark documents: the Recommendations for the responsible implementation and use of AI technologies in higher education institutions and the Instructional and methodological recommendations on the introduction and use of AI technologies in secondary education institutions. Together, they represent a shift from viewing AI as a disruption to viewing it as critical national infrastructure that must be governed by design.

A tiered governance ecosystem

The central challenge of AI governance is the widening gap between formal law and daily practice. While national legislatures struggle to draft comprehensive statutes, students and staff are already treating AI as basic infrastructure. Ukraine’s strategic choice has been to lean into soft law. This approach is not “soft” in its impact. Instead, it provides a steering mechanism that can iterate at the speed of the technology itself.

This ecosystem operates across four distinct layers to ensure that high-level ethical principles manifest as specific instructions in a student’s syllabus:

  • National guidance: High-level principles and shared language produced by the Ministry of Digital Transformation and the Ministry of Education and Science.
  • Sector-level codes: Quality assurance guidance that translates principles into shared templates for the wider academic community.
  • Institutional policies: Specific rules on roles, procurement, and data handling.
  • Course-level rules: Where learning outcomes meet reality, through syllabus AI clauses and assignment-specific instructions.
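As a concrete illustration, the four layers can be modelled as structured data so that every policy artifact traces back to the layer and owner that produced it. The Python sketch below is illustrative only; the field names and owner labels are assumptions, not part of the Ukrainian documents.

```python
# Illustrative sketch (not from the recommendations themselves) of the
# four governance layers as structured data, so each policy artifact
# is traceable to the layer and owner that produced it.
from dataclasses import dataclass

@dataclass
class GovernanceLayer:
    name: str      # e.g. "National guidance"
    owner: str     # body responsible for this layer (illustrative)
    artifact: str  # what the layer actually produces

TIERS = [
    GovernanceLayer("National guidance",
                    "Ministry of Digital Transformation / Ministry of Education and Science",
                    "High-level principles and shared language"),
    GovernanceLayer("Sector-level codes", "Quality assurance bodies",
                    "Shared templates for the academic community"),
    GovernanceLayer("Institutional policies", "Individual institutions",
                    "Rules on roles, procurement, and data handling"),
    GovernanceLayer("Course-level rules", "Course instructors",
                    "Syllabus AI clauses and assignment-specific instructions"),
]

for tier in TIERS:
    print(f"{tier.name} ({tier.owner}): {tier.artifact}")
```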

For higher education institutions, this governance is built on five operational pillars: Rules, Roles, Workflows, Training, and Review. This ensures that AI adoption is an institutional strategy rather than a random set of personal experiments.

HE and school playbooks

The recommendations for higher education are explicitly framed not as a manifesto, but as an “implementation kit.” The goal is to move beyond the binary of “ban vs. allow” and toward a sophisticated model of institutional governance.

In line with the EU AI Act, the recommendations classify AI applications into risk categories. Any tool used for high-stakes decision-making, such as student admissions, grading, or behavioural monitoring, is flagged as high risk. Rather than imposing a blanket ban, which the recommendations explicitly label “short-sighted and harmful,” the focus is on a robust risk-screening algorithm. Universities are encouraged to evaluate tools through a “red flag” system: tools that present risks to human rights or data privacy are flagged as Unacceptable/High Risk and rejected or strictly monitored, while tools flagged as Medium/Low Risk are approved for use with specific safeguards.
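To show how such a red-flag triage might be operationalised, here is a minimal Python sketch. The tool names, parameter names, and exact wording are hypothetical assumptions; only the triage logic mirrors the recommendations described above.

```python
# Hypothetical sketch of the "red flag" risk screen described above.
# Tool names and exact wording are illustrative assumptions; only the
# triage logic mirrors the recommendations.
HIGH_RISK_USES = {"admissions", "grading", "behavioural monitoring"}

def screen_tool(name: str, uses: set[str],
                risks_human_rights: bool, risks_data_privacy: bool) -> str:
    # Any threat to human rights or data privacy is an immediate red flag.
    if risks_human_rights or risks_data_privacy:
        return f"{name}: Unacceptable/High Risk - reject or strictly monitor"
    # High-stakes decision-making is flagged high risk even without red flags.
    if uses & HIGH_RISK_USES:
        return f"{name}: High Risk - allow only under strict monitoring"
    return f"{name}: Medium/Low Risk - approve with specific safeguards"

print(screen_tool("EssayGraderX", {"grading"}, False, False))
print(screen_tool("ChatTutor", {"tutoring"}, False, False))
```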

This vetting includes a “suitability checklist” that assesses functional capabilities and compliance with learning goals. It also includes vital local details, such as checking for Ukrainian-language interfaces and the ability to pay in local currency, a critical requirement under wartime financial constraints.
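A suitability checklist of this kind lends itself to a simple vetting function. In the sketch below, the language and currency items come from the guidance as described; the function shape and the remaining item name are assumptions for illustration.

```python
# Sketch of a "suitability checklist": a tool passes only if every item
# is satisfied. The language and currency items come from the guidance
# as described above; the rest of the shape is an assumption.
SUITABILITY_CHECKLIST = [
    "supports_learning_goals",       # functional fit with the course
    "ukrainian_language_interface",  # local-language support
    "accepts_local_currency",        # payable in UAH under wartime constraints
]

def is_suitable(tool_profile: dict[str, bool]) -> bool:
    missing = [item for item in SUITABILITY_CHECKLIST
               if not tool_profile.get(item, False)]
    if missing:
        print("Fails checklist on:", ", ".join(missing))
    return not missing

print(is_suitable({
    "supports_learning_goals": True,
    "ukrainian_language_interface": True,
    "accepts_local_currency": False,
}))  # prints the failing item, then False
```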

The document encourages a “problem-pilot-training-review” sequence. Universities are advised to identify a specific pedagogical or administrative problem, pilot a tool in a controlled environment, train staff on its nuances, and set a regular review cycle. This acknowledges that any policy drafted today will likely be obsolete in six months as autonomous agents become more prevalent.
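One way to read this sequence is as a lifecycle with a built-in expiry date. The sketch below assumes a six-month review horizon, in line with the obsolescence estimate above; everything else is illustrative.

```python
# Sketch of the "problem-pilot-training-review" lifecycle with a fixed
# review interval, so no policy silently outlives its shelf life. Stage
# names come from the text; the six-month interval reflects the
# obsolescence estimate above and is otherwise an assumption.
from datetime import date, timedelta

STAGES = ["problem", "pilot", "training", "review"]
REVIEW_INTERVAL = timedelta(days=182)  # roughly six months

def schedule_review(adopted_on: date) -> date:
    """Return the date by which the policy must be re-reviewed."""
    return adopted_on + REVIEW_INTERVAL

for stage in STAGES:
    print("stage:", stage)
print("next review due:", schedule_review(date(2025, 1, 15)))  # 2025-07-16
```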

While the HE guidelines focus on professional autonomy, the recommendations for general secondary education are more focused on protecting minors and developing foundational cognitive skills. Aligning with global standards and the UNESCO Guidance for Generative AI, the recommendations mandate that AI tools be used only by students aged 13 and above, with explicit parental consent required for those aged 13-18. This is a vital guardrail in an environment where AI tutors are often marketed directly to children without adequate privacy oversight.

The school playbook identifies specific “pedagogical substrates” – the underlying support layer that AI can provide to a teacher. AI is explicitly defined as a supporting tool, never the sole source of information. High-value substrates include:

  • Inclusion: Using speech-to-text and visual generation to support students with special educational needs or those displaced by conflict.
  • Teacher productivity: Automating the generation of lesson plans and “mission-based quests” to reduce the administrative burden on teachers.
  • Gamification: Creating personalised learning paths that keep students engaged in the hybrid or remote learning environments necessitated by the war.

Resilience through design

The value of the Ukrainian model lies in its emphasis on operationalising governance immediately. Paralysed by the search for a perfect law or organisational policy, many sectors fail to act. The Ukrainian experience can be used as an action plan for immediate implementation:

  • Create a one-page data rule: Clearly define which sensitive data (grades, health information, confidential research) must never be pasted into public AI tools (see the sketch after this list).
  • Move from literacy to verification fluency: Staff and students need “verification habits” – the technical and critical skills to verify the outputs of LLMs and recognise hallucinations.
  • Adopt lightweight risk checklists: Every department should have a simple “red flag” document to vet new software before it becomes part of the curriculum.
  • Set a review cycle: Acknowledge that the transition to autonomous agents and multimodal video requires a policy that is as dynamic as the technology itself.
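As promised above, here is a hypothetical sketch of how the one-page data rule could be enforced as a pre-flight check. The category names follow the list; the regex patterns are toy examples, not a real data-loss-prevention system.

```python
# Hypothetical sketch of the one-page data rule as a pre-flight check
# before text is pasted into a public AI tool. Category names follow
# the list above; the regex patterns are toy examples, not a substitute
# for a real data-loss-prevention system.
import re

NEVER_PASTE = {
    "grades": re.compile(r"\b(grade|GPA|transcript)\b", re.I),
    "health information": re.compile(r"\b(diagnosis|medical record)\b", re.I),
    "confidential research": re.compile(r"\b(unpublished|confidential)\b", re.I),
}

def check_before_paste(text: str) -> list[str]:
    """Return the sensitive-data categories detected in the text."""
    return [category for category, pattern in NEVER_PASTE.items()
            if pattern.search(text)]

hits = check_before_paste("Summarise the student's GPA and medical record.")
print("Blocked categories:" if hits else "OK to paste.", hits)
```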

Regulation is often seen as a brake on innovation. In the Ukrainian context, it has become a necessary shield. By adopting a tiered, soft law approach, the country has provided a framework that supports university autonomy while ensuring that AI implementation is not just a collection of experiments, but an organisational ecosystem built on rules, roles, and review.

The lesson for the global sector is clear: Don’t wait for the formal statute. Don’t outsource your pedagogical decisions to tech vendors. Instead, design and govern your own learning ecosystem. Ukraine’s playbook demonstrates that even in the most challenging circumstances, it is possible to lead with both innovation and institutional responsibility.