Hacker News Highlights a Case Where TypeScript Beat Rust WASM
Original: We rewrote our Rust WASM parser in TypeScript and it got faster
The surprising result was not about language speed, but data movement
OpenUI's March 13, 2026 engineering note explains why its openui-lang parser ended up faster after a full rewrite from WASM-compiled Rust to TypeScript. The parser turns a custom DSL emitted by an LLM into a React component tree and runs on every streaming chunk, so the real performance question was never raw parsing throughput in isolation: it was end-to-end latency inside the browser.
The team mapped the pipeline as autocloser -> lexer -> splitter -> parser -> resolver -> mapper -> ParseResult, then measured where time was actually going. Their conclusion was that Rust parsing speed was never the bottleneck. The expensive part was crossing the JavaScript and WASM boundary on every call: copying the input string into WASM memory, serializing the result to JSON inside Rust, copying the JSON string back out, and then deserializing it again in V8. In other words, the system was paying a fixed interop tax even when the parser logic itself was already fast.
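The interop tax described above can be sketched from the JavaScript side. This is a minimal, hypothetical illustration, not OpenUI's actual API: the WASM export is stubbed so the sketch runs standalone, and `parse_to_json` is an assumed name standing in for a wasm-bindgen export.

```typescript
// Hypothetical shape of the old call path: one string in, one JSON
// string out, parsed once in V8. Names are illustrative, not OpenUI's API.
interface WasmParser {
  // Stand-in for a wasm-bindgen export; a real module would copy the
  // input into WASM linear memory and serialize the result with serde.
  parse_to_json(source: string): string;
}

// Stub module so the sketch is runnable without a .wasm file.
const wasmModule: WasmParser = {
  parse_to_json: (source) =>
    JSON.stringify({ ok: true, length: source.length, statements: [] }),
};

function parseChunk(source: string) {
  // Boundary cost 1: `source` is copied into WASM memory on the call.
  const json = wasmModule.parse_to_json(source);
  // Boundary cost 2: the JSON string is copied back out of WASM memory.
  // Boundary cost 3: V8 deserializes that string all over again here.
  return JSON.parse(json);
}

const result = parseChunk("button { label: 'Send' }");
```

The key point is that all three costs are paid on every streaming chunk, regardless of how fast the Rust parser itself is.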
Why the obvious optimization failed
OpenUI also tried skipping the JSON round-trip by returning a JavaScript object directly with serde-wasm-bindgen. That sounded cleaner, but it measured 9% to 29% slower depending on the fixture. The post's explanation is useful for browser engineers: turning Rust structures into live JavaScript objects still requires many fine-grained conversions across runtime boundaries, while a single JSON string transfer lets Rust serialize in one environment and lets V8 parse in one optimized pass.
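The contrast can be made concrete with a toy model. This sketch is not a benchmark of serde-wasm-bindgen; it only simulates the shape of the two paths, with a counter standing in for per-value boundary conversions (all names here are invented for illustration).

```typescript
// Illustrative contrast: returning a live object across the boundary
// means one small conversion per field, while a JSON string is a single
// transfer plus one optimized JSON.parse pass in V8.
type Node = { tag: string; props: Record<string, string>; children: Node[] };

// Fine-grained path: each value crosses the "boundary" as its own call.
let conversions = 0;
function convertField<T>(value: T): T {
  conversions++; // stands in for one JsValue conversion
  return value;
}
function buildFineGrained(): Node {
  return {
    tag: convertField("form"),
    props: { action: convertField("/contact") },
    children: [
      { tag: convertField("input"), props: {}, children: [] },
      { tag: convertField("button"), props: {}, children: [] },
    ],
  };
}

// Coarse path: one string crossing, one parse, same resulting value.
const json = JSON.stringify(buildFineGrained());
const coarse: Node = JSON.parse(json);

const fine = buildFineGrained();
```

Both paths produce the same object; the difference is that the fine-grained path scales its boundary crossings with the size of the result tree, which matches the post's explanation of the slowdown.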
The bigger win came from the algorithm
Once the team moved the parser fully into TypeScript, one-shot parsing became 2.2x to 4.6x faster across its sample documents. But the more meaningful production improvement came from fixing the streaming algorithm. The naive approach re-parsed the full accumulated string on every chunk, which turns a 1000-character response delivered in 20-character chunks into roughly 25,000 characters of total parsing work. OpenUI replaced that O(N^2) behavior with statement-level incremental caching so only the trailing incomplete statement gets re-parsed while completed statements stay cached.
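The caching idea can be sketched in a few lines. This is a minimal version assuming a toy DSL where a statement is complete once it ends in `;`; the real openui-lang grammar and ParseResult types are more involved, and `parseStatement` here is just a placeholder tokenizer.

```typescript
// Minimal sketch of statement-level incremental caching: completed
// statements are parsed once and cached; only the trailing incomplete
// statement is re-parsed on each new streaming chunk.
type Statement = { text: string; parsed: string[] };

const cache: Statement[] = [];
let consumed = 0;   // characters already covered by cached statements
let parseCalls = 0; // counts actual parsing work

function parseStatement(text: string): string[] {
  parseCalls++; // a real parser would build an AST node here
  return text.trim().split(/\s+/);
}

function parseIncremental(accumulated: string): string[][] {
  const pending = accumulated.slice(consumed);
  const parts = pending.split(";");
  // Every part except the last ends with ";" and is complete: cache it.
  for (const done of parts.slice(0, -1)) {
    cache.push({ text: done, parsed: parseStatement(done) });
    consumed += done.length + 1; // +1 for the ";"
  }
  const tail = parts[parts.length - 1];
  const tailParsed = tail.trim() ? [parseStatement(tail)] : [];
  return [...cache.map((s) => s.parsed), ...tailParsed];
}
```

With the naive approach, total work is the sum of all accumulated lengths (20 + 40 + ... + 1000 = 25,500 characters for the 1000-character, 20-chunk example); with the cache, each completed statement is parsed exactly once.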
The published full-stream numbers show why that matters. On the contact-form fixture, total parse cost dropped from 316 microseconds to 122; on the dashboard fixture, from 840 microseconds to 255. The article's broader lesson is that WASM still makes sense for compute-heavy, low-interop tasks like media processing or cryptography, but it can be the wrong tool when the real workload is frequent parsing of structured text into JavaScript objects inside a streaming AI UI.
Source: OpenUI engineering note. Hacker News discussion: item 47461094.
Related Articles
A March 13, 2026 Show HN post presented GitAgent as a git-native agent specification built around files like `agent.yaml`, `SOUL.md`, and `SKILL.md`, with portability, versioning, and auditability as the core pitch.
Vercel used X on March 12, 2026 to show how Notion Workers runs agent-capable code on Vercel Sandbox. Vercel's write-up says Workers handle third-party syncs, automations, and AI agent tool calls, while Sandbox provides isolation, credential management, network controls, snapshots, and active-CPU billing.
Cloudflare said on March 11, 2026 that it now returns RFC 9457-compliant Markdown and JSON error payloads to AI agents instead of heavyweight HTML pages. In a same-day blog post, the company said the change cuts token usage by more than 98% on a live 1015 rate-limit response and turns error handling into machine-readable control flow.