Hacker News Debates Reco's AI Rewrite of JSONata After the Team Claims a $500K Infrastructure Win

Original: We rewrote JSONata with AI in a day, saved $500k/year

AI · Mar 28, 2026 · By Insights AI (HN) · 2 min read

The problem Reco was paying to avoid

In a March 25, 2026 engineering post, Reco described how its SaaS security pipeline had been evaluating JSONata expressions through a Node.js RPC fleet because the reference implementation is JavaScript while the company's main pipeline is in Go. According to the post, billions of events and thousands of distinct expressions were crossing that language boundary, with each RPC trip costing around 150 microseconds before the real evaluation even started. Reco says that overhead alone had grown into roughly $300,000 per year in compute and forced clusters past 200 replicas in some environments.

The company says prior optimizations, including expression tuning, caching, and even embedding V8 into Go, produced only incremental gains. The turning point came after reading Cloudflare's account of rebuilding the Next.js API surface with AI. Reco copied the same basic pattern: port the official test suite, then iterate with AI until the new implementation passes the spec.

What the team says AI actually produced

Reco's result is gnata, a pure-Go implementation of JSONata 2.x that the company says was bootstrapped in about seven hours for roughly $400 in token cost. The post claims the project landed at about 13,000 lines of Go and 1,778 passing official test cases, then was wrapped with another 2,107 integration tests. The design uses a two-tier model: a fast path that evaluates simple lookups and certain built-in functions directly on raw JSON bytes with zero heap allocations, and a full path that parses only the needed subtrees for complex expressions.

Reco says the bigger gain came from integrating the evaluator into its existing Go services instead of paying repeated serialization and RPC costs. In the company's telling, simple lookups became about 1,000x faster, complex expressions still improved by roughly 25x to 90x, and the dedicated JSONata RPC fleet dropped to zero.

Why the number became $500K, not $300K

The post describes a second-order effect after gnata became viable. Because JSONata no longer forced a one-expression-at-a-time RPC path, Reco says it could simplify the surrounding rule engine, reduce goroutine explosion, and introduce better micro-batching and grouped enrichment queries. That, according to the company, cut another roughly $18,000 per month, bringing the total claim to around $500,000 per year saved in under two weeks of work. The rollout sequence is also notable: shadow mode in preproduction, mismatch logging, and promotion only after three consecutive days of zero mismatches on real workloads.

The Hacker News thread reached 256 points and 237 comments at crawl time. The reaction split along a useful boundary. Some readers focused on the impressive economics and spec-driven methodology. Others stressed that the real story is not magical code generation, but using AI as a force multiplier against an existing spec, test suite, and review process. Reco itself makes that reading hard to dismiss: the company also notes that AI agents reviewing AI-generated code created their own noise problem, forcing the team to tune what counted as a meaningful review finding.

Primary source: Reco engineering post. Community discussion: Hacker News.


Related Articles

AI Hacker News Mar 2, 2026 1 min read

The open-source project Memento sparked a heated debate on Hacker News: as AI writes more code, should the AI session itself become part of the commit history? It raises fundamental questions about code provenance in the age of AI-assisted development.


AI Hacker News Mar 8, 2026 2 min read

A front-page Hacker News thread drew attention to SWE-CI, an arXiv benchmark that evaluates coding agents on 100 real repository evolution tasks rather than one-shot bug fixes. The paper frames software maintainability as a CI-loop problem and reports that even strong models still struggle to avoid regressions over long development arcs.


© 2026 Insights. All rights reserved.