Privacy tooling usually breaks at scale or forces raw text onto a server. OpenAI’s 1.5B open-weight Privacy Filter runs locally, handles 128,000-token inputs, and posts 97.43% F1 on a corrected PII-Masking-300k benchmark.
#privacy
Hacker News treated the bug as the kind of privacy flaw users fear most: no cookies, no login, just a browser implementation detail that could keep sessions linkable. The post says Mozilla fixed it in Firefox 150 and ESR 140.10.0, but the Tor angle is what drove the discussion.
The important shift is architectural: with the filter running on-device, teams can mask sensitive text before it ever leaves the machine, rather than handing raw input to a remote redaction service.
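A minimal sketch of what that local masking step could look like, assuming the filter ships as a Hugging Face checkpoint with a token-classification head; the model id and entity labels below are placeholders, not OpenAI's documented interface:

```python
from transformers import pipeline

# Token-classification with aggregation groups subword pieces back into whole
# entity spans. The model id is a placeholder, not a confirmed checkpoint name.
masker = pipeline(
    "token-classification",
    model="openai/privacy-filter-1.5b",  # placeholder id
    aggregation_strategy="simple",
)

def mask_pii(text: str) -> str:
    """Replace each detected PII span with its entity label, walking right to
    left so earlier character offsets stay valid."""
    for ent in sorted(masker(text), key=lambda e: e["start"], reverse=True):
        text = text[: ent["start"]] + f"[{ent['entity_group']}]" + text[ent["end"]:]
    return text

# Nothing here leaves the machine: weights and text both stay local.
print(mask_pii("Email Jane Doe at jane.doe@example.com about the Q3 report."))
```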
HN’s GitHub CLI telemetry thread turned into a developer-tools trust debate: not whether metrics can help, but whether default-on collection belongs in a command-line tool.
The thread focused less on telemetry as an idea and more on whether opt-out controls actually work when gh runs inside CI pipelines, on servers, and in unattended automation.
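To make that objection concrete, here is a hedged sketch of the gating logic commenters wanted; the opt-out variable name is a hypothetical illustration, not a real gh setting:

```python
import os

def telemetry_enabled() -> bool:
    """Hypothetical default-on telemetry check. GH_TELEMETRY_OPTOUT is an
    illustrative name, not an actual gh environment variable."""
    # An explicit opt-out should always win, but it only helps operators who
    # know the switch exists on every machine the tool runs on.
    if os.environ.get("GH_TELEMETRY_OPTOUT", "").lower() in {"1", "true"}:
        return False
    # Most CI providers set CI=true; a default-on tool that skips this check
    # quietly reports from every pipeline run.
    if os.environ.get("CI", "").lower() == "true":
        return False
    return True
```

The gap this sketch exposes is exactly the HN worry: default-on means every fresh container or scripted install reports until someone remembers to flip the switch.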
HN’s reaction centered on the trust cost of turning everyday employee input into AI training material, not on whether Meta needs more data.
r/LocalLLaMA upvoted this because ID checks shifted the local-model argument from speed to autonomy. Anthropic says Claude identity verification can require a government photo ID and a live selfie through Persona.
LocalLLaMA treated Claude identity verification as more than account policy; it became another argument for local models, privacy control, and fewer gates between users and tools.
Google is moving Gemini image generation from prompt craft to account context. U.S. Google AI Plus, Pro, and Ultra subscribers can opt in to let Gemini use Google Photos and Nano Banana 2 for personalized images, with source visibility and reference controls built into the flow.
HN reacted because this was less about one wrapper and more about who gets credit and control in the local LLM stack. The Sleeping Robots post argues that Ollama won mindshare on top of llama.cpp while weakening trust through its attribution, packaging, cloud-routing, and model-storage choices; commenters pushed back that its UX still solved a real problem.
A popular r/LocalLLaMA thread described using Gemma 4’s 256k context window to analyze a 100k+ token personal journal locally, turning privacy into a practical reason to run an LLM on-device.
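A sketch of that workflow under stated assumptions: llama-cpp-python is installed, a long-context GGUF build of the model is on disk (the file path below is a placeholder), and the machine has enough RAM for a 256k-token KV cache.

```python
from llama_cpp import Llama

# Placeholder path; any long-context GGUF checkpoint loads the same way.
llm = Llama(model_path="./gemma-long-context.gguf", n_ctx=262144)

# The journal never leaves the machine: file, prompt, and model all stay local.
journal = open("journal.txt", encoding="utf-8").read()

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You analyze personal journals carefully."},
        {"role": "user", "content": journal + "\n\nWhat themes recur across these entries?"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```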
Meta says it has moved AI into the core of its cross-company risk review program. The company argues that automation now helps prefill documentation, surface legal requirements, and flag privacy, safety, and security issues earlier in product development.