Banks are ahead of watchdogs on AI, and the oversight gap is widening
Original: Global regulators trail banks in AI as Mythos raises oversight concerns, report finds
The most unsettling number in AI finance this week is not a valuation or a capex target. It is the gap between the firms deploying AI and the authorities meant to supervise them. According to a Reuters report published April 28, research from the Cambridge Centre for Alternative Finance found that financial institutions are adopting AI at more than twice the rate of their supervisors: only two in 10 regulators reported advanced AI adoption.
The details make the gap harder to wave away. Reuters says the study covered 350 traditional financial institutions and fintechs, more than 140 AI vendors, and 130 central banks and financial authorities across 151 countries. Yet only 24% of authorities said they collect data on industry AI adoption, while 43% said they have no plans to start within the next two years. That means a large share of the people writing rules or assessing systemic risk may be operating with little direct visibility into how quickly frontier tools are entering live financial systems.
The report uses Anthropic's Mythos as a concrete example of why that matters. Frontier models that can uncover or exploit software weaknesses at scale pose a different kind of supervisory challenge from ordinary automation tools. The study also flags concentration risk. Reuters says 69% of all respondents rely on OpenAI, rising to 76% among industry participants, while just over half use Google's models and a little more than a third use Anthropic. If a small set of vendors becomes deeply embedded in banking workflows, resilience and pricing risk stop being abstract competition-policy topics and start looking like financial stability issues.
The implication is uncomfortable but clear. Regulators may need agentic AI tools of their own, not as a branding exercise, but as a basic capability for oversight. If banks are already learning faster than watchdogs can measure, the next phase of AI risk in finance will not come only from the models. It will come from a supervision gap that keeps widening while the systems underneath it become more autonomous.