Hacker News spots CanIRun.ai, a browser-side local AI compatibility checker


LLM · Mar 13, 2026 · By Insights AI (HN) · 2 min read

Hacker News picked up CanIRun.ai because it tackles one of the most common local AI questions in a practical way: not which model is best in the abstract, but which model your machine can actually run. The site opens by detecting your hardware in the browser and then sorting models into bands that feel immediately useful: comfortable fits, tight fits, and models that are simply too heavy. That makes it less like a leaderboard and more like a sizing tool for real hardware.

The interesting part is how much of the detection logic is exposed. On its Why page, the project says everything runs client-side. It uses WebGL to read the GPU renderer string, WebGPU to gather additional adapter information when the browser supports it, and navigator APIs to estimate CPU core count and RAM. It also runs a short CPU benchmark and explicitly says no hardware data is sent to a server. For a tool aimed at curious local-model users, that privacy choice is almost as important as the estimator itself.
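The detection approach described above can be sketched in plain browser JavaScript. The helper below is illustrative only, not CanIRun.ai's actual code: the one testable piece is a pure parser for the WebGL renderer string, which on Windows usually arrives wrapped in an ANGLE prefix; comments show how the raw inputs would be obtained in a browser.

```javascript
// Extract a readable GPU name from a raw WebGL renderer string.
// ANGLE (Chrome/Edge on Windows) wraps it as "ANGLE (Vendor, Device ..., Backend)";
// other browsers may return the device name directly.
function parseGpuName(renderer) {
  const angle = renderer.match(
    /^ANGLE \(([^,]+), ([^,]+?)(?: Direct3D\d+[^,]*)?, [^)]+\)$/
  );
  if (angle) return angle[2].trim();
  return renderer.trim();
}

// In a browser, the raw inputs come from standard APIs, roughly:
//   const gl = document.createElement('canvas').getContext('webgl');
//   const ext = gl.getExtension('WEBGL_debug_renderer_info');
//   const renderer = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
//   const cores = navigator.hardwareConcurrency; // logical CPU cores
//   const ramGiB = navigator.deviceMemory;       // coarse RAM bucket (Chromium only)
// WebGPU adds richer adapter info where supported:
//   const adapter = await navigator.gpu?.requestAdapter();
```

None of these APIs require a network round trip, which is what makes the "no hardware data leaves the browser" claim feasible.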

The site's Docs also show the assumptions behind the ranking. Model fit is driven mostly by VRAM and memory bandwidth, with quantization levels such as Q4_K_M and Q6_K changing how much memory a model needs. The score combines estimated tokens per second, memory headroom, and a smaller quality bonus. In other words, CanIRun.ai is trying to convert the messy local-LLM tradeoff between speed, size, and quality into one number ordinary users can reason about.
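The fit logic the Docs describe reduces to simple memory arithmetic: quantization sets the effective bits per weight, and the footprint is compared against available VRAM with some headroom reserved for the KV cache. The sketch below uses illustrative constants and thresholds, not CanIRun.ai's actual numbers.

```javascript
// Approximate effective bits per weight for common llama.cpp quantizations
// (illustrative figures; real averages vary by model architecture).
const BITS_PER_WEIGHT = { Q4_K_M: 4.8, Q6_K: 6.6, F16: 16 };

// Weight footprint in GiB for a model with `params` parameters.
function modelGiB(params, quant) {
  return (params * BITS_PER_WEIGHT[quant]) / 8 / 2 ** 30;
}

// Band a model against available VRAM, keeping headroom for KV cache
// and runtime overhead (thresholds here are assumptions).
function fitBand(vramGiB, params, quant) {
  const need = modelGiB(params, quant);
  if (need <= vramGiB * 0.75) return "comfortable";
  if (need <= vramGiB * 0.95) return "tight";
  return "too heavy";
}
```

For example, an 8B model at Q4_K_M needs roughly 4.5 GiB of weights, a comfortable fit on a 12 GiB card, while a 70B model at the same quantization needs around 39 GiB and is too heavy for 24 GiB. This is why the quantization choice, not just the parameter count, decides which band a model lands in.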

HN readers liked the concept immediately, but the thread was also a reminder that local inference users care about calibration. Commenters pointed out missing hardware entries such as the RTX Pro 6000 and newer Nvidia workstation parts, asked for broader Apple Silicon memory options, and wanted a reverse lookup where a user could choose a model first and then compare machines. Others said the token-per-second estimates looked too pessimistic on their own hardware and asked for benchmark overlays so the predictions could be compared with observed llama.cpp or Ollama results.

That reaction makes sense. Advice about local AI is still scattered across Reddit posts, Discord chats, model cards, and benchmark spreadsheets. A browser-side compatibility tool is valuable because it turns vague hardware anxiety into a concrete shortlist, but it will only stay useful if its device database and speed estimates keep pace with how fast the hardware and model ecosystem is moving.

Original source: CanIRun.ai · Why · Docs. Community discussion: Hacker News.


© 2026 Insights. All rights reserved.