HN likes the idea of Chrome's Prompt API, then hits the 22 GB reality check


AI · Apr 27, 2026 · By Insights AI (HN) · 2 min read

The Prompt API landed on Hacker News as one of those ideas that sounds obvious the moment you read it. Let a web app send natural-language requests straight to Gemini Nano inside Chrome, keep the work on the device, skip external API keys, and make privacy less of an afterthought. HN got the appeal immediately. It spotted the catch just as fast.

Chrome's documentation lays out a fairly practical feature set. The API is meant for things like page-aware search, article classification, custom content filtering, contact extraction, and event parsing. This is not positioned as a giant general-purpose cloud model inside the browser. It is a built-in local model path for lightweight tasks, with desktop-only support and a long list of requirements. Today that means Windows, macOS, Linux, or Chromebook Plus devices on supported Chrome versions, plus at least 22 GB of free storage, more than 4 GB of VRAM for GPU use or a CPU with 16 GB RAM and 4 cores, and an unmetered network for the model download.
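Because so many machines fail those hardware checks, any page using this API has to treat availability as a first-class question. A minimal sketch of that defensive pattern, following the `LanguageModel` global and `availability()`/`create()` calls from Chrome's documentation (the guard lets it degrade gracefully in browsers or runtimes where the API does not exist):

```javascript
// Sketch: feature-detect Chrome's built-in Prompt API before using it.
// Assumes the documented `LanguageModel` global on supported desktop
// Chrome; everywhere else this falls through to a fallback value.
async function classifyLocally(text) {
  if (typeof LanguageModel === "undefined") {
    return "prompt-api-unavailable"; // not Chrome, or API not exposed
  }
  const availability = await LanguageModel.availability();
  if (availability === "unavailable") {
    return "prompt-api-unavailable"; // device fails the hardware checks
  }
  // "downloadable" / "downloading" means the multi-GB model is not
  // ready yet; create() triggers or waits on that download.
  const session = await LanguageModel.create();
  return session.prompt(`Classify this article in one word: ${text}`);
}

classifyLocally("A post about on-device AI in the browser.")
  .then((label) => console.log(label));
```

The point of the guard is exactly the UX problem HN raised: a site cannot assume the model is present, downloaded, or even downloadable, so a cloud or non-AI fallback path stays mandatory.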

That requirements table became the real HN hook. One of the standout reactions boiled the entire product story down to a single brutal line: what website is going to tell users they need 22 GB of free disk space just to continue? Another thread worried about abuse in the other direction, asking whether rogue JavaScript could quietly borrow visitors' machines for token generation. At the same time, several builders argued the tradeoff is worth it for the right class of feature. Community discussion pointed out that a local model path can feel like a poor person's Ollama inside the browser: free, private by default, and transparent enough for small search or classification tasks.

That tension is what makes this more interesting than a routine API launch. Browser AI has been talked about for long enough that many people assumed it would stay demo-ware. The Prompt API shows it is becoming concrete, but concrete does not mean frictionless. Storage budgets, device support, model quality, and the limits of small local models all still shape what developers can build. Even supportive HN comments noted that the current model is better suited to compact tasks than deep multi-turn reasoning.
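For the compact tasks commenters considered viable, the API does keep conversation state inside a session. A hedged sketch of a short multi-turn exchange, using the `initialPrompts` option and `destroy()` from Chrome's documentation (again guarded so it degrades outside Chrome):

```javascript
// Sketch: a session carries conversation state, which suits short
// follow-up questions; small local models degrade on long exchanges.
// Assumes the documented `LanguageModel` global on supported Chrome.
async function demoMultiTurn() {
  if (typeof LanguageModel === "undefined") {
    return ["no-local-model"]; // fallback outside supported Chrome
  }
  const session = await LanguageModel.create({
    initialPrompts: [
      { role: "system", content: "Answer in one short sentence." },
    ],
  });
  const replies = [];
  replies.push(await session.prompt("What does the Prompt API do?"));
  replies.push(await session.prompt("And what hardware does it need?"));
  session.destroy(); // free on-device resources when done
  return replies;
}

demoMultiTurn().then((replies) => console.log(replies.join("\n")));
```

Keeping sessions short and destroying them promptly matches the community's read: this is a tool for classification, extraction, and quick Q&A, not sustained reasoning.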

The HN reaction was neither dismissal nor hype. It was more useful than that. Readers basically said: yes, privacy-preserving local inference in the browser is real and attractive, but the UX bill is still large enough that developers cannot pretend it disappears into the background. If browser AI is going to matter, it will have to win on honest product fit, not on the novelty of being on-device.




© 2026 Insights. All rights reserved.