The important detail is not just that Vercel had an incident, but that a third-party AI tool's Google Workspace OAuth app opened the door. Vercel says the investigation widened to additional compromised accounts and that the broader app compromise may have affected hundreds of users across many organizations.
#security
Hacker News treated the Bitwarden CLI compromise as the sort of GitHub Actions failure that becomes far more serious when the package sits near secrets, tokens, and password-manager workflows. By crawl time on April 25, 2026, the thread had 855 points and 416 comments.
HN did not read this as a simple cleanup patch. The thread blew up because maintainers are removing old networking code to escape AI-generated security-report overload, and commenters split over whether the real scandal is spam or years of pretending dead code was maintained.
Privacy tooling usually breaks at scale or forces raw text onto a server. OpenAI’s 1.5B open-weight Privacy Filter runs locally, handles 128,000-token inputs, and posts 97.43% F1 on a corrected PII-Masking-300k benchmark.
Hacker News treated this as the kind of privacy bug users fear most: no cookies, no login, just a browser implementation detail that could keep sessions linkable. The post says Mozilla fixed it in Firefox 150 and ESR 140.10.0, but the Tor angle is what drove the discussion.
Why it matters: the same model Anthropic framed as too dangerous for public release was reportedly exposed twice in quick succession. The Verge says Mythos was first revealed through an unsecured data trove, then reached by unauthorized users from day one through guessed infrastructure and contractor access.
The important shift is architectural: teams can mask sensitive text before it ever leaves the machine. OpenAI’s 1.5B-parameter Privacy Filter supports 128,000 tokens and scored 97.43% F1 on a corrected version of the PII-Masking-300k benchmark.
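The local-masking architecture can be illustrated with a minimal sketch. This is not the Privacy Filter model itself; it is a hypothetical regex-based stand-in showing the shape of the pipeline: detect PII spans on the client and replace them with typed placeholders before any text is sent off-machine. All names and patterns below are illustrative.

```python
import re

# Hypothetical stand-in for a local masking model: two illustrative
# PII patterns mapped to placeholder labels. A real model would detect
# far more categories with higher recall.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders so only the
    masked text ever leaves the machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Contact jane.doe@example.com or 555-123-4567."))
# → Contact [EMAIL] or [PHONE].
```

The key property is that the raw text never crosses the network boundary; only the placeholder-substituted version does.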
Vercel's breach no longer looks like a one-off employee compromise. TechCrunch reports that some customer data was stolen before the company's April incident disclosure, widening the timeline and pushing teams to treat this as a credential exposure problem.
Axios reports the NSA is using Anthropic's Mythos Preview even as Pentagon officials call the company a supply-chain risk. The clash puts AI safety limits, federal cyber demand, and procurement politics in the same room.
HN’s argument was not that every CVE deserves equal attention; it was that teams now need to decide whose severity and product metadata they trust when NVD enrichment becomes selective.
HN reacted less to the “limited subset” language and more to the OAuth shape of the incident: one third-party AI tool’s Google Workspace app may have reached users across many organizations.
Vercel says a third-party AI tool's Google Workspace OAuth app led to unauthorized access to internal systems, with a limited subset of customers affected. The detail matters because AI-era SaaS permissions are now part of production security.
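If third-party OAuth grants are part of the production attack surface, they need the same inventory-and-audit treatment as credentials. A minimal sketch of that idea, assuming a grant inventory has already been exported: the scope URLs below are real Google OAuth scopes, but the app names and the `risky_grants` helper are hypothetical.

```python
# Scopes that grant broad access to mail, files, or directory data.
# These scope strings are real Google OAuth scopes; the grant list
# and helper below are illustrative only.
BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin.directory.user",
}

# Hypothetical exported inventory of third-party app grants.
grants = [
    {"app": "ai-notetaker", "scopes": ["https://mail.google.com/"]},
    {"app": "calendar-sync",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

def risky_grants(grants):
    """Return apps holding at least one broad Workspace scope."""
    return [g["app"] for g in grants if BROAD_SCOPES & set(g["scopes"])]

print(risky_grants(grants))  # → ['ai-notetaker']
```

Flagged apps are the ones whose compromise looks like the incident described above: a single OAuth grant reaching mail or files across an entire organization.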