Google DeepMind introduced Gemma 4 on X as a family of open models designed to run on developers’ own hardware. Its April 2, 2026 developer post ties that launch to on-device agentic workflows, support for more than 140 languages, and deployment paths through AICore, AI Edge Gallery, and LiteRT-LM.
#on-device-ai
A recent Show HN thread pointed to Parlor, a local multimodal assistant that combines Gemma 4 E2B, Kokoro, browser voice activity detection, and streaming audio playback. The project reports around 2.5 to 3.0 seconds of end-to-end latency on an Apple M3 Pro.
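Parlor's code isn't quoted in the thread, but an end-to-end figure like 2.5 to 3.0 seconds implies a per-stage latency budget across the STT, LLM, and TTS hops. A minimal sketch of timing such a pipeline, where every stage function is a hypothetical stand-in rather than Parlor's actual API:

```python
import time

# Hypothetical stand-ins for the real stages; each sleeps briefly to
# simulate work. Real stages would run model inference (STT, a local
# Gemma-class LLM, a Kokoro-class TTS).
def transcribe(audio: bytes) -> str:
    time.sleep(0.01)
    return "hello"

def generate(prompt: str) -> str:
    time.sleep(0.01)
    return "hi there"

def synthesize(text: str) -> bytes:
    time.sleep(0.01)
    return b"\x00" * 16000

def run_pipeline(audio: bytes):
    """Run the stages sequentially, recording per-stage wall-clock latency."""
    timings = {}
    t0 = time.perf_counter()
    text = transcribe(audio)
    timings["stt"] = time.perf_counter() - t0

    t1 = time.perf_counter()
    reply = generate(text)
    timings["llm"] = time.perf_counter() - t1

    t2 = time.perf_counter()
    audio_out = synthesize(reply)
    timings["tts"] = time.perf_counter() - t2

    timings["total"] = time.perf_counter() - t0
    return audio_out, timings

audio_out, timings = run_pipeline(b"...")
print({k: round(v, 3) for k, v in timings.items()})
```

Instrumentation like this makes it obvious which stage dominates the budget, which is usually the first question when trying to pull a voice loop under a few seconds.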
A LocalLLaMA post drew attention to PokeClaw, an open-source Android prototype that runs Gemma 4 locally through LiteRT-LM and lets the model tap, swipe, type, open apps, send messages, and manage auto-replies without cloud inference.
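The post doesn't include PokeClaw's source, but a common pattern for this kind of on-device agent is to have the model emit a structured action that a dispatcher routes to device calls (on Android, typically an accessibility service). A hypothetical sketch of that dispatch layer, with an assumed action schema and handler names that are not PokeClaw's actual interface:

```python
import json

# Hypothetical handlers standing in for real device calls.
def do_tap(x, y):
    return f"tap at ({x}, {y})"

def do_swipe(x1, y1, x2, y2):
    return f"swipe ({x1},{y1})->({x2},{y2})"

def do_type(text):
    return f"type {text!r}"

def do_open_app(name):
    return f"open {name}"

HANDLERS = {
    "tap": do_tap,
    "swipe": do_swipe,
    "type": do_type,
    "open_app": do_open_app,
}

def dispatch(model_output: str):
    """Parse a JSON action emitted by the model and route it to a handler."""
    action = json.loads(model_output)
    name = action.pop("action")
    if name not in HANDLERS:
        raise ValueError(f"unknown action: {name}")
    return HANDLERS[name](**action)

print(dispatch('{"action": "tap", "x": 120, "y": 480}'))
```

Keeping the schema this narrow is a deliberate safety choice: the model can only invoke the whitelisted verbs, and anything outside the registry is rejected rather than executed.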
A Launch HN thread highlighted RunAnywhere’s MetalRT and RCLI, a low-latency STT-LLM-TTS stack that runs on Apple Silicon without cloud APIs.
A well-received r/LocalLLaMA post described an Android app that turns EPUB books into spoken audio entirely on-device using Kokoro TTS. The project highlights how mobile inference speed, APK size, and thermal behavior now shape practical offline AI products.
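An EPUB is essentially a ZIP archive of XHTML chapters, so the text-extraction step that feeds the TTS model can be done with the standard library alone. A minimal sketch (not the app's actual code) that builds a tiny EPUB-like archive in memory and pulls the plain text back out:

```python
import io
import zipfile
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content, skipping script/style elements."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def epub_chapter_text(epub_bytes: bytes) -> list:
    """Extract text from every .xhtml/.html entry in an EPUB (ZIP) archive."""
    chapters = []
    with zipfile.ZipFile(io.BytesIO(epub_bytes)) as zf:
        for name in sorted(zf.namelist()):
            if name.endswith((".xhtml", ".html")):
                parser = TextExtractor()
                parser.feed(zf.read(name).decode("utf-8"))
                chapters.append(" ".join(parser.parts))
    return chapters

# Build a tiny EPUB-like archive in memory to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("ch1.xhtml", "<html><body><p>Call me Ishmael.</p></body></html>")
print(epub_chapter_text(buf.getvalue()))  # ['Call me Ishmael.']
```

Chapter text extracted this way would then be chunked and streamed into the TTS model; as the post notes, the hard part on mobile is less the parsing than keeping inference speed, APK size, and thermals acceptable.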
A Show HN post introduced Off Grid, an open-source Android/iOS app that runs chat, image generation, vision, and speech transcription entirely on-device without cloud data transfer.