Anthropic donates $20 million to AI education and policy organization Public First Action — EdTech Innovation Hub


Anthropic says that despite its “enormous benefits” for science, technology and medicine, AI is already being used to automate cyberattacks, and could one day help produce “dangerous weapons”.

The donation arrives amid ongoing uncertainty over whether federal legislation, state action, or executive authority should be the primary mechanism for setting frontier safety standards for AI in the US.

“AI models are increasing in their capabilities at a dizzying, increasing pace, from simple chatbots in 2023 to today’s ‘agents’ that complete complex tasks,” the company warned in a statement. “At Anthropic, we’ve had to redesign a notoriously difficult technical test for hiring software engineers multiple times as successive AI models defeated each version. This rate of progress will not be confined to software engineering; indeed, many other professions are already seeing an impact.”

The company shared it is donating $20 million to Public First Action, a non-profit that describes itself as “dedicated to educating Americans on key AI issues and advancing an AI policy agenda in Washington D.C. and across the country that prioritizes the public interest”.

Public First Action says it will prioritize safeguards for children, workers, and the public. It will support state and local legislation addressing AI issues, and will oppose federal attempts to halt such progress without adequate safeguards in place.

US “not doing enough to regulate” AI

Citing a Quinnipiac University poll that found 69 percent of Americans believe the government is “not doing enough to regulate the use of AI”, Anthropic said: “We agree.”

“The companies building AI have a responsibility to help ensure the technology serves the public good, not just their own interests. Our contribution to Public First Action is part of our commitment to governance that enables AI’s transformative potential and helps proportionately manage its risks,” Anthropic added.

Last year, OpenAI’s Chief Global Affairs Officer Chris Lehane took to LinkedIn to weigh in on the national debate over how frontier AI models should be regulated. He suggested that national safety standards for frontier AI models should be set at federal level rather than through emerging state laws.

Anthropic’s CEO Dario Amodei has previously outlined the company’s position on US AI leadership, calling for a unified national framework, addressing claims about policy bias, and reaffirming restrictions on services to China.