Show HN: Rudel brings analytics to Claude Code sessions and exposes early failure patterns
Original: Show HN: Rudel – Claude Code Session Analytics
Rudel is an attempt to treat Claude Code as an observable workflow rather than a black box that only emits outputs. In the Show HN thread, the team says they built the product after realizing they had almost no visibility into their own coding sessions. By collecting and analyzing transcripts, they say they assembled a dataset of 1,573 real sessions, more than 15M tokens, and more than 270K interactions.
The most interesting claim is about early-session behavior. The post says skills appeared in only 4% of sessions, 26% of sessions were abandoned, most of those drop-offs happened within the first 60 seconds, and session success varied sharply by task type, with documentation doing best and refactoring doing worst. It also says error cascades in the first two minutes can predict abandonment with reasonable accuracy. That makes Rudel notable not as another coding assistant, but as an analytics layer for how coding assistants are actually used.
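The abandonment-prediction claim is easy to picture as a simple feature computed over a session timeline. The sketch below is illustrative only; the event schema, window, and threshold are assumptions, not Rudel's actual model.

```python
from dataclasses import dataclass

# Hypothetical event record; Rudel's real schema is not published in the post.
@dataclass
class Event:
    t: float        # seconds since session start
    is_error: bool  # e.g. a failed tool call or rejected edit

def early_error_cascade(events: list[Event],
                        window_s: float = 120.0,
                        threshold: int = 3) -> bool:
    """Flag a session if `threshold` or more errors land in the first
    `window_s` seconds.

    An illustrative heuristic for the "error cascades in the first two
    minutes predict abandonment" claim, not Rudel's method.
    """
    early_errors = sum(1 for e in events if e.is_error and e.t <= window_s)
    return early_errors >= threshold

session = [Event(5, False), Event(20, True), Event(41, True), Event(77, True)]
print(early_error_cascade(session))  # three errors inside 120s -> True
```

In a real pipeline this boolean would be one feature among many; the post only claims that such early signals correlate with drop-off.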
The README fills in the mechanics. Users install the CLI, run rudel enable, and register a Claude Code hook that uploads transcripts when a session ends. The system stores session IDs, timestamps, project and package context, git metadata, full prompt-and-response transcripts, and sub-agent usage, then processes the data in ClickHouse. In other words, Rudel is trying to become observability infrastructure for coding agents rather than just a prettier history viewer.
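The hook step maps onto Claude Code's settings-based hooks. A sketch of what `rudel enable` plausibly writes into `.claude/settings.json`, assuming Claude Code's SessionEnd hook event; the command value is a placeholder, since the README excerpt doesn't show the actual command Rudel registers:

```json
{
  "hooks": {
    "SessionEnd": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "<transcript-upload command registered by rudel enable>"
          }
        ]
      }
    ]
  }
}
```

Because the hook fires after the session ends, the upload happens out of band and cannot slow down the interactive session itself.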
HN reaction was mixed in a useful way. Some readers immediately asked whether it works for tools beyond Claude Code, including Codex. Others wanted a locally hosted alternative, better example runs, or more evidence behind the quoted dataset. Several commenters were simply uncomfortable with automatic transcript uploads. That tension is probably the real market test: coding-agent analytics is becoming necessary, but products in this category will only land if they can make privacy boundaries and data handling feel trustworthy.
Original source: GitHub · rudel.ai. Community discussion: Hacker News.
Related Articles
A high-engagement Reddit post surfaced TechCrunch reporting that Spotify engineers are using Claude Code and an internal system called Honk to accelerate coding and deployment.
Boris Tane, engineering lead at Cloudflare, shares a research-plan-implement workflow for Claude Code where the AI never writes a single line of code until a written plan has been approved.
Anthropic's Claude Code Cowork (multi-agent collaboration) feature was found to create a ~10GB VM bundle on macOS using Apple's Virtualization Framework without warning users. The GitHub issue garnered 200+ points on Hacker News.