Europe’s artificial intelligence strategy should be built on European strengths
The European Union’s main artificial intelligence goal is to support European companies in developing cutting-edge AI models by facilitating their access to data and encouraging investment in data centres, semiconductors and network connectivity. The plan is to rival the United States, home to most foundational AI models, and is driven by worries about becoming dependent on the countries capable of developing advanced AI models, which will thus shape the next general-purpose technology driving the future economy.
The strategy, however, is risky. Relaxing constraints on the use of sensitive data to train AI – as proposed by the European Commission – could weaken European tech-user rights. Meanwhile, in striving to preserve Europe’s future autonomy and international bargaining power, any ‘buy European’ rule would subvert the European commitment to market competition.
The strategy is also incoherent in that it de facto recognises the irrelevance of the EU’s own model. It risks abandoning core EU principles to embrace a model championed by its international competitors, showing that the US in particular has succeeded in shaping Europe’s strategy by drawing Europe into becoming more like its rivals.
In fact, protecting Europe’s autonomy does not require European champions at the forefront of AI development, for two reasons. First, the geographical origin of an AI developer matters less than the regulations it complies with. Second, Europe can gain more from AI through adoption than through domestic development.
On the first point, supporting European champions through protectionist measures, such as public-procurement requirements and relaxed merger control, offers neither prosperity nor security. Just like a foreign competitor, a European company could develop foundational AI models that encode biases incompatible with European values. Being European is no guarantee a company will not act in ways that go against European standards (as France’s Capgemini arguably did in a contract with US Immigration and Customs Enforcement). Companies are driven by profits and will follow the rules if breaking them becomes too costly.
The best way to ensure that what companies do in relation to AI aligns with European goals and reflects European standards is not to favour domestic companies, but to uphold European law and apply it evenly. The EU AI Act (Regulation (EU) 2024/1689), for example, already provides tools to mitigate the risk of bias encoded during development. Instead of retreating from regulation, the EU should enforce its rules more vigorously and more effectively: that is the best shield against the whims of trade politics or domestic populism.
This might include new institutional arrangements, such as separating digital enforcement powers from the Commission, to ensure enforcement that is independent and more accurate. Importantly, the EU must not abandon its philosophy of regulation as the basis of a just digital society. Protectionism should continue to be opposed with openness, fairness and competition, ensuring that success in technology benefits the entire downstream economy. Regulated access to critical upstream inputs enabled industries to thrive on the back of general-purpose technologies such as electricity and telecommunications. AI should not be treated differently.
On whether to prioritise AI development or adoption, being at the AI development frontier likely provides only a temporary advantage. Exponential increases in computational efficiency and the inability of foundational AI model developers to prevent free riding (the success of Chinese firm DeepSeek in early 2025 was reportedly built on distilling the outputs of OpenAI’s models) imply rapid convergence at the frontier. It thus makes little sense to obsess over a race nobody can dominate for long.
Instead, Europe could divert more of its efforts towards downstream applications, optimising the integration of AI within European industrial production processes and ultimately addressing a puzzle that the US, despite leading in development, has not yet been able to solve: how to translate AI adoption into actual productivity gains. Europe can fine-tune models that rely on small, specific datasets and are tailored to the needs of the businesses in which they are deployed. The EU can build a system of incentives, based on regulation and support policies, to encourage the use of AI to complement rather than replace the workforce, thereby fostering widespread acceptance and shared prosperity.
Focusing on competition, regulation and efficient AI deployment can go hand-in-hand with ensuring self-determination. Investment in computational and cloud capabilities, for example, should be limited to what is necessary to ensure the viability of the European economy in the event of disruption. Advocating open-source AI as a systemic approach can deliver greater security, accessibility and quality, as promising experiences in France, the Netherlands and Germany have shown.
Instead of yielding to foreign influence or retreating into protectionism, Europe can build an open and competitive AI system that, by design, will be resilient to international or domestic capture.
