Hacker News Spotlights AI-Specific SQL Injection That Exposed McKinsey's Lilli Platform
Original: How we hacked McKinsey's AI platform View original →
What the disclosure claims
A March 9, 2026 post from CodeWall described a severe compromise of McKinsey's internal AI platform, Lilli, and the story quickly drew interest on Hacker News, where it reached 498 points and 195 comments at crawl time. The report says an autonomous offensive agent, operating without credentials or a human operator, mapped Lilli's public attack surface and found more than 200 documented API endpoints, including 22 that did not require authentication.
The critical bug was not described as an exotic AI exploit. Instead, CodeWall says one unauthenticated endpoint safely parameterized values but concatenated JSON field names directly into SQL. By probing how those keys appeared in database error messages, the agent inferred query structure, iterated through blind SQL injection steps, and escalated to read and write access in under two hours. That detail is why the post resonated on HN: it ties a very old web vulnerability class to a modern AI application stack that many organizations now treat as strategic infrastructure.
Why the impact looks different in an AI system
CodeWall's reported exposure goes well beyond user profiles. The post says the reachable data included 46.5 million chat messages, 728,000 files, 57,000 user accounts, 384,000 AI assistants, 94,000 workspaces, and 3.68 million RAG document chunks. It also says the same path exposed 95 model and prompt configurations across 12 model types, plus storage metadata tied to the document-ingestion pipeline.
That is the most technically important part of the story. In a classic SaaS breach, database access is already bad. In an internal AI platform, database write access can become prompt-layer control. CodeWall argues that the same injection path could have modified system prompts, guardrails, and retrieval behavior without a code deploy. For an enterprise assistant used in strategy, finance, and client work, that creates a different risk profile: not only data theft, but the possibility of silently poisoning the advice employees trust.
Why the HN thread mattered
The Hacker News discussion treated the disclosure as evidence that AI security is still inheriting the weakest habits of traditional web software. The post's technical lesson is not that AI created SQL injection, but that AI products multiply the blast radius of ordinary application flaws. When prompts, routing rules, RAG metadata, and model configs sit beside operational data, an old injection bug can become a control-plane compromise.
CodeWall says it disclosed the issue to McKinsey on March 1, 2026, received acknowledgement on March 2, and verified that unauthenticated endpoints were patched the same day before public disclosure on March 9. Whether every reported detail is independently reproducible or not, the writeup captures a real design problem for enterprise AI teams: prompts, retrieval state, and model policy are now crown-jewel assets and need the same integrity controls that companies already apply to code and secrets.
Related Articles
Cloudflare said on March 11, 2026 that AI Security for Apps is now generally available. The company also made AI endpoint discovery free across Free, Pro, and Business plans while adding custom topic detection and expanded policy controls.
Google said on March 11, 2026 that it has closed its acquisition of Wiz. Wiz will join Google Cloud, but Google says the platform will continue working across major cloud providers, including AWS, Azure, and Oracle Cloud.
Anthropic says distillation attacks against Claude are increasing and calls for coordinated industry and policy action. In an accompanying post, the company reports campaign-level abuse patterns and outlines technical and operational countermeasures.
Comments (0)
No comments yet. Be the first to comment!