Anthropic announced the Anthropic Institute on March 11, 2026 as a new effort focused on the societal, economic, legal, and governance challenges created by more powerful AI systems. The institute, led by Jack Clark, will combine the Frontier Red Team, Societal Impacts, and Economic Research teams, and will launch alongside an expanded Public Policy organization and a planned Washington, D.C. office.
#policy
Google said it signed the Industry Accord Against Online Scams and Fraud at the UN Global Fraud Summit in Vienna alongside companies including Adobe, Amazon, Meta, Microsoft, and OpenAI. The move pairs shared threat intelligence and coordinated defenses with Google's own AI-driven scam detection and policy work planned for 2026.
AUTOMATON reports Japan's METI has opened IP360 startup support to individuals and unincorporated indie teams, offering up to 10 million yen at a 50% subsidy rate for new IP development, localization, and promotion aimed at overseas rollout.
Anthropic said on March 5, 2026 that it had received a supply-chain risk designation letter from the Department of War. The company says the scope is narrow, plans to challenge the action in court, and will continue transition support for national-security users.
Anthropic published a Frontier Safety Roadmap that outlines dated goals across security, safeguards, alignment, and policy. The document pairs current ASL-3 protections with milestone targets through 2027, including policy proposals and expanded internal oversight.
Anthropic published Responsible Scaling Policy Version 3.0 on February 24, 2026. The update keeps the ASL framework but retools how commitments are managed when capability thresholds are hard to measure unambiguously.
Anthropic says distillation attacks against Claude are increasing and calls for coordinated industry and policy action. In an accompanying post, the company reports campaign-level abuse patterns and outlines technical and operational countermeasures.
OpenAI’s February 2026 safety report says it banned accounts linked to seven operations originating in China. The company says abuse covered cyber activity, covert influence, and scams, while overall malicious use remained low versus legitimate use.
Sam Altman announced OpenAI reached an agreement with the U.S. Department of War to deploy AI models on classified networks, with core safety principles including bans on domestic mass surveillance and autonomous weapon systems.
Anthropic has officially rejected the Pentagon's latest proposal, stating, "We cannot in good conscience accede to their request." The refusal underscores the tension between Anthropic's AI safety commitments and the military applications of powerful AI systems.
OpenAI CEO Sam Altman announced a Pentagon deal to deploy AI models in classified networks just hours after Anthropic was blacklisted by the Trump administration. The agreement explicitly includes prohibitions on mass domestic surveillance and autonomous weapons.