On March 24, 2026, Databricks announced Lakewatch, a new open, agentic SIEM built to ingest multimodal telemetry, unify it with business data, and automate threat detection and response with AI agents. In its launch post, the company said Lakewatch enters private preview with customers including Adobe and Dropbox, and argued that defenders now need machine-speed systems to counter AI-driven attacks.
#security
A March 2026 Hacker News thread pushed Stanford SCS's `jai` to 604 points and 313 comments. The tool aims to contain AI agents on Linux by keeping the current working directory writable while placing the rest of the home directory behind an overlay or hiding it entirely.
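The general pattern described (writable working directory, hidden home) can be approximated with standard Linux sandboxing tools. This is a hedged sketch of the idea, not `jai`'s actual implementation: it assumes bubblewrap (`bwrap`) is installed, that `$PWD` sits under `$HOME`, and `agent-command` is a placeholder for whatever agent you run.

```shell
# Sketch only: mount the filesystem read-only, replace $HOME with an
# empty tmpfs, then bind the current working directory back in writable.
bwrap \
  --ro-bind / / \
  --tmpfs "$HOME" \
  --bind "$PWD" "$PWD" \
  --chdir "$PWD" \
  agent-command
```

Because bwrap applies mounts in order, the `--bind` of `$PWD` layers a writable view on top of the tmpfs that hides the rest of the home directory.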
OpenAIDevs pointed developers to Codex Security on March 29, 2026, positioning it as a way to find, validate, and remediate likely vulnerabilities in connected GitHub repositories. OpenAI's docs say the system scans commit by commit, uses repo-specific threat models, validates high-signal findings in an isolated environment, and can move reviewed findings toward GitHub pull requests.
Anthropic said Claude Opus 4.6 found 22 Firefox vulnerabilities during a two-week collaboration with Mozilla, including 14 rated high severity. The companies framed the project as an example of AI-assisted security research moving into real product workflows.
NIST said on February 17, 2026 that its Center for AI Standards and Innovation is launching the AI Agent Standards Initiative. The effort focuses on technical standards, open protocols, and research on agent security and identity to support broader adoption of autonomous AI systems.
George Larson's post stood out on Hacker News less as a demo than as a deliberate agent architecture: a tiny runtime, public/private separation, tiered inference, and explicit blast-radius control.
A FutureSearch incident transcript moved quickly through Hacker News because it showed, minute by minute, how a poisoned LiteLLM package reached a workstation and was isolated within 72 minutes.
OpenAI says threat actors usually combine AI with traditional web and social infrastructure rather than operating inside one model. The company framed the new report as guidance for detecting and disrupting cross-platform abuse.
Google said on March 25, 2026 that it is now targeting 2029 for post-quantum cryptography migration. The company argues recent progress in quantum hardware, error correction, and factoring estimates makes authentication and signature upgrades more urgent.
Meta on March 11, 2026 rolled out new anti-scam protections across Facebook, Messenger, and WhatsApp and later added a March 16 update on broader industry coordination. The program pairs AI-based detection with user alerts, advertiser verification, and law-enforcement partnerships after Meta reported removing 159 million scam ads in 2025.
NVIDIA introduced OpenShell on March 23, 2026. The company says the open source runtime isolates each autonomous agent in its own sandbox and keeps policy enforcement at the infrastructure layer instead of relying only on model or application safeguards.
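NVIDIA has not published OpenShell's API in this summary, so the following is only a toy Python sketch of the general pattern the announcement describes: one sandbox per agent, with every tool call checked against a policy by the runtime rather than by the model or application. All names here (`Sandbox`, `greet`, the policy shape) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Sandbox:
    """One sandbox per agent; the runtime, not the model, enforces policy."""
    agent_id: str
    allowed_tools: set = field(default_factory=set)

    def call(self, tool, *args, **kwargs):
        # The policy check lives at the infrastructure layer, so a
        # prompt-injected model cannot talk its way around it.
        if tool.__name__ not in self.allowed_tools:
            raise PermissionError(
                f"{self.agent_id}: '{tool.__name__}' denied by policy"
            )
        return tool(*args, **kwargs)

def greet(name: str) -> str:
    return f"hi {name}"

# A narrowly scoped agent: only 'greet' is on its allow-list, so any
# other tool call fails regardless of what the model asks for.
agent = Sandbox(agent_id="researcher-1", allowed_tools={"greet"})
```

The point of the sketch is the separation of concerns: the model never sees or evaluates the policy, it only sees the result of a call that the runtime either permitted or refused.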
OpenAI introduced Lockdown Mode and Elevated Risk labels for ChatGPT on February 13, 2026. The changes are designed to give high-risk users stronger controls and make security tradeoffs more explicit as AI products connect to the web and external apps.