Tech Explained: Wikipedia bans AI-generated content, with only two exceptions. Here’s a simplified explanation of the update and what it means for users.

For 25 years, Wikipedia has been an open-source online encyclopedia where anyone can contribute knowledge, so long as it’s grounded in reliable, verifiable sources. But as artificial intelligence tools rapidly reshape how content is created, the platform is drawing a firm line: You cannot use AI tools to create or rewrite content for Wikipedia. 

“Text generated by large language models (LLMs) often violates several of Wikipedia’s core content policies,” Wikipedia’s editing policy reads. “For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below.”

Wikipedia cites ChatGPT and Google Gemini as examples in a footnote. 

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) 

It’s unclear when the policy went into effect. A representative for Wikipedia did not immediately respond to a request for comment.

Wikipedia’s exceptions to using AI

Wikipedia lists two exceptions: certain edits made by editors and the translation of articles. 


Wikipedia says editors can use AI to make basic edits, such as fixing typos and formatting, to articles they wrote, once a Wikipedia volunteer reviewer or administrator has reviewed the article. 

However, even when using AI for editing, Wikipedia urges caution, because AI can change the meaning of content in ways that may not be accurate or aligned with the source’s intent.

Wikipedia lets you use AI to translate articles from other-language editions of Wikipedia into English. However, the translation must still follow Wikipedia’s policies, and the translator must be fluent in both English and the language of the original article to ensure accuracy. 

Enforcement is unclear

It’s no surprise that Wikipedia added this language to its policy, considering that it’s an open-source project and AI is prone to errors and plagiarism. 

Last year, the Wikimedia Foundation asked AI companies to stop scraping data from Wikipedia and instead use its Enterprise API, which allows them to “use Wikipedia content at scale and sustainably without severely taxing Wikipedia’s servers, while also enabling them to support our nonprofit mission.”

The policy does not say how the rules will be enforced or how users who violate them will be disciplined. 

Wikipedia’s policy comes at a time when AI is becoming a part of our day-to-day lives. Apple Intelligence and Galaxy AI are now available on smartphones, and there are built-in AI features in the apps, websites and services we use regularly. Yet there are mounting concerns about AI’s accuracy and the risk of hallucinations. 

Wikipedia’s decision would seem to reflect a broader tension across the internet: balancing the speed and convenience of AI-generated content with the need for human judgment and verifiable, accurate knowledge.