Washington is no longer treating model distillation as a lab-level abuse problem. The White House says foreign actors, chiefly China, are using tens of thousands of proxies and jailbreaking techniques to copy US frontier AI systems and ship cheaper models that can look comparable on select benchmarks.
#ai-policy
The r/artificial thread did not just react to a scary headline. It centered on whether HB1455/SB1493’s “knowingly training artificial intelligence” language could reach ordinary chatbot UX, not only companion apps. Public bill trackers show the proposal is still moving, with House Judiciary recommending passage on April 14, 2026.
Anthropic said on March 11, 2026 that it is launching The Anthropic Institute, a new public-interest effort to study the biggest economic, security, legal, and societal questions raised by frontier AI. The company says the group will combine technical, economic, and social-science expertise to turn observations from inside a model builder into public research and external dialogue.
Anthropic says a March 4 Department of War letter designates it a supply chain risk; the company argues the designation's scope is narrow and says it will challenge the action in court.
OpenAI said on February 28, 2026 that it reached an agreement with the Department of War for classified AI deployments, and posted a March 2 update adding explicit domestic-surveillance limitation language. The company highlights cloud-only deployment, retained safety-stack control, and cleared personnel-in-the-loop safeguards.
Anthropic announced it will legally challenge the Pentagon's supply chain risk designation, issued after the company refused to assist with military surveillance programs. It's an unprecedented move for an AI company against the federal government.
The US Treasury Department announced it will terminate all use of Anthropic AI products following Trump's executive order designating Anthropic as a supply chain risk after the company refused military surveillance assistance.
A King's College London study tested ChatGPT, Claude, and Gemini in Cold War-style nuclear crisis simulations. AI models chose nuclear escalation in 95% of scenarios and left all eight de-escalation options entirely unused across 21 games.
After the Trump administration ordered federal agencies to immediately stop using Anthropic AI and the Pentagon designated Anthropic as a 'supply chain risk', Anthropic announced it will fight the designation in court. Meanwhile, OpenAI struck a deal with the Pentagon.
Following President Trump's order barring federal agencies from using Anthropic products, Claude surged to the top of the US App Store's free apps chart, with daily signups hitting all-time records and free users growing over 60% since January.
President Trump ordered all federal agencies to stop using Anthropic products after the company refused Pentagon demands. Within hours, OpenAI signed a deal facing similar demands but with guardrails it accepted.