Tinybox – A powerful computer for deep learning

AI · Mar 22, 2026 · By Insights AI (HN) · 2 min read

The Hacker News discussion had 556 points and 323 comments at crawl time, which is unusually strong traction for a hardware product page. The linked target was tinybox, a line of deep-learning workstations from the tinygrad team that tries to package GPU-heavy AI compute as something developers can order directly instead of assembling from server parts.

According to the product table, the red v2 configuration ships with 4x 9070XT GPUs, 64 GB of GPU RAM, 778 TFLOPS of FP16 performance with FP32 accumulation, and 2 TB of fast NVMe storage. The green v2 Blackwell system steps up to 4x RTX PRO 6000 Blackwell GPUs, 384 GB of GPU RAM, 3086 TFLOPS, and a 4 TB RAID plus 1 TB boot drive. The listed prices are $12,000 for red v2 and $65,000 for green v2 Blackwell, and both entries are marked "IN STOCK."

The operating details are almost as important as the raw specs. tinygrad says the factory is already running, orders ship within one week of payment, pickup is available in San Diego, and worldwide shipping is supported. The company also says it does not offer customization, explicitly trading flexibility for a simpler ordering process, and that wire transfer is the only accepted payment method.

In the FAQ, tinygrad describes tinybox as a very powerful computer for deep learning and claims strong performance per dollar. The same page says the system was benchmarked in MLPerf Training 4.0 against machines costing 10x more, while also making the straightforward point that hardware capable of training can obviously be used for inference. Those are vendor claims, so buyers still need to validate framework support, power draw, thermals, acoustics, and real workload fit before treating tinybox as a drop-in alternative to a conventional server build.

  • red v2: 4x 9070XT, 64 GB GPU RAM, 778 TFLOPS, $12,000
  • green v2 Blackwell: 4x RTX PRO 6000 Blackwell, 384 GB GPU RAM, 3086 TFLOPS, $65,000
  • Availability: both systems are shown as in stock with one-week shipping after payment
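The perf-per-dollar claim is easy to sanity-check from the spec sheet alone. A minimal sketch, using only the vendor-listed FP16 TFLOPS figures, GPU RAM sizes, and prices quoted above (real-world throughput per dollar will of course depend on the workload):

```python
# Rough performance-per-dollar comparison from tinybox's listed specs.
# Numbers are vendor-quoted FP16 TFLOPS and USD prices, not measured results.
systems = {
    "red v2":             {"tflops": 778,  "gpu_ram_gb": 64,  "price_usd": 12_000},
    "green v2 Blackwell": {"tflops": 3086, "gpu_ram_gb": 384, "price_usd": 65_000},
}

for name, s in systems.items():
    gflops_per_dollar = s["tflops"] * 1000 / s["price_usd"]
    gb_per_kusd = s["gpu_ram_gb"] / (s["price_usd"] / 1000)
    print(f"{name}: {gflops_per_dollar:.1f} GFLOPS/$, "
          f"{gb_per_kusd:.1f} GB GPU RAM per $1k")
```

By this crude metric the red v2 comes out around 65 GFLOPS per dollar versus roughly 47 for the green v2 Blackwell, while GPU memory per dollar is similar across the two, which is consistent with the cheaper box being pitched on raw value and the expensive one on absolute capacity.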

What makes tinybox interesting is not only that it is fast, but that it treats AI hardware as an orderable appliance with a public spec sheet, explicit logistics, and a concrete price ladder. That is a different pitch from hyperscale GPU clusters and even from many boutique workstation vendors. The Hacker News response suggests there is still a lot of developer curiosity around smaller-scale, immediately deployable AI hardware that sits between hobbyist rigs and datacenter infrastructure.




© 2026 Insights. All rights reserved.