Hacker News Spotlights AI-Specific SQL Injection That Exposed McKinsey's Lilli Platform
Original: How we hacked McKinsey's AI platform View original →
What the disclosure claims
A March 9, 2026 post from CodeWall described a severe compromise of McKinsey's internal AI platform, Lilli, and the story quickly drew interest on Hacker News, where it reached 498 points and 195 comments at crawl time. The report says an autonomous offensive agent, operating without credentials or a human operator, mapped Lilli's public attack surface and found more than 200 documented API endpoints, including 22 that did not require authentication.
The critical bug was not described as an exotic AI exploit. Instead, CodeWall says one unauthenticated endpoint safely parameterized values but concatenated JSON field names directly into SQL. By probing how those keys appeared in database error messages, the agent inferred query structure, iterated through blind SQL injection steps, and escalated to read and write access in under two hours. That detail is why the post resonated on HN: it ties a very old web vulnerability class to a modern AI application stack that many organizations now treat as strategic infrastructure.
Why the impact looks different in an AI system
CodeWall's reported exposure goes well beyond user profiles. The post says the reachable data included 46.5 million chat messages, 728,000 files, 57,000 user accounts, 384,000 AI assistants, 94,000 workspaces, and 3.68 million RAG document chunks. It also says the same path exposed 95 model and prompt configurations across 12 model types, plus storage metadata tied to the document-ingestion pipeline.
That is the most technically important part of the story. In a classic SaaS breach, database access is already bad. In an internal AI platform, database write access can become prompt-layer control. CodeWall argues that the same injection path could have modified system prompts, guardrails, and retrieval behavior without a code deploy. For an enterprise assistant used in strategy, finance, and client work, that creates a different risk profile: not only data theft, but the possibility of silently poisoning the advice employees trust.
Why the HN thread mattered
The Hacker News discussion treated the disclosure as evidence that AI security is still inheriting the weakest habits of traditional web software. The post's technical lesson is not that AI created SQL injection, but that AI products multiply the blast radius of ordinary application flaws. When prompts, routing rules, RAG metadata, and model configs sit beside operational data, an old injection bug can become a control-plane compromise.
CodeWall says it disclosed the issue to McKinsey on March 1, 2026, received acknowledgement on March 2, and verified that unauthenticated endpoints were patched the same day before public disclosure on March 9. Whether every reported detail is independently reproducible or not, the writeup captures a real design problem for enterprise AI teams: prompts, retrieval state, and model policy are now crown-jewel assets and need the same integrity controls that companies already apply to code and secrets.
Related Articles
Hacker News liked the promise of model-agnostic memory, but the real energy in the thread came from one immediate question: how does this avoid context pollution? Skepticism arrived faster than praise.
Cloudflare made AI Security for Apps generally available on March 11, 2026 and opened AI endpoint discovery to all customers, including Free, Pro, and Business plans. The launch adds custom topic detection and folds AI-specific controls into the company’s existing reverse-proxy and WAF stack.
HN cared less about a clean open-versus-closed slogan than about what happens when AI makes vulnerability discovery cheaper for everyone. The Strix post argued that closing source does not remove the attack surface, while the thread split over noisy AI reports, SaaS economics, and whether obscurity can still raise attacker costs.
Comments (0)
No comments yet. Be the first to comment!