South Africa pulled its AI policy after fake citations broke the draft
Original: South Africa withdraws AI policy due to fake AI-generated sources
A national AI strategy is supposed to set the rules for trust. South Africa’s first draft instead failed the most basic test: whether its references were real. Reuters reported on April 27 that the government withdrew the document after fictitious sources in its bibliography appeared to be AI-generated, turning what should have been a policy milestone into a credibility crisis.
The draft had much bigger ambitions than a routine consultation paper. It was meant to position South Africa as a continental leader in AI while setting up the institutions that would shape how the technology is governed. According to the Reuters report, the proposal included a National AI Commission, an AI Ethics Board and an AI Regulatory Authority, alongside incentives such as tax breaks, grants and subsidies to spur private-sector collaboration.
That is why the failure matters. Communications and Digital Technologies Minister Solly Malatsi said the most plausible explanation was that AI-generated citations had been included without proper verification. He also said the lapse was not merely technical and had damaged the integrity and credibility of the draft. The document was pulled not because of a disputed policy detail, but because the process behind it no longer looked reliable.
There is also a sharp symbolic lesson here. Governments around the world are trying to write rules for AI deployment, safety and accountability. If a national AI framework cannot clear basic source verification, every claim inside it becomes harder to defend. That is especially costly when the same draft is asking the public to trust new regulatory bodies and public incentives.
Malatsi said there would be consequences for those responsible but gave no date for a replacement draft. For policymakers elsewhere, the message is blunt: generative tools can speed up drafting, but they cannot replace human verification, least of all in a document meant to govern everyone else’s use of AI.