Show HN: Rudel brings analytics to Claude Code sessions and exposes early failure patterns
Rudel is an attempt to treat Claude Code as an observable workflow rather than a black box that only emits outputs. In the Show HN thread, the team says it built the product after realizing it had almost no visibility into its own coding sessions. By collecting and analyzing transcripts, the team says it assembled a dataset of 1,573 real sessions, more than 15M tokens, and more than 270K interactions.
The most interesting claim is about early-session behavior. The post says skills appeared in only 4% of sessions, 26% of sessions were abandoned, most of those drop-offs happened within the first 60 seconds, and session success varied sharply by task type, with documentation doing best and refactoring doing worst. It also says error cascades in the first two minutes can predict abandonment with reasonable accuracy. That makes Rudel notable not as another coding assistant, but as an analytics layer for how coding assistants are actually used.
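The abandonment-prediction claim can be pictured with a toy heuristic. This is a minimal sketch, not Rudel's actual (unpublished) model: the event shape, the two-minute window, and the error threshold are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # seconds since session start
    is_error: bool  # e.g. a failed tool call or rejected edit

def predicts_abandonment(events, window=120.0, threshold=3):
    """Toy heuristic: flag a session as likely to be abandoned if it
    accumulates `threshold` or more errors within the first `window`
    seconds. Rudel's real predictor is not documented in the post."""
    early_errors = sum(1 for e in events if e.t <= window and e.is_error)
    return early_errors >= threshold

# Three errors inside the first two minutes -> flagged as an error cascade.
cascade = [Event(5, True), Event(20, True), Event(45, True), Event(300, False)]
print(predicts_abandonment(cascade))  # True
```

Even a crude rule like this hints at why the first two minutes matter: by the time a cascade is visible, the user has usually already decided whether to keep going.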
The README fills in the mechanics. Users install the CLI, run `rudel enable`, and register a Claude Code hook that uploads transcripts when a session ends. The system stores session IDs, timestamps, project and package context, git metadata, full prompt-and-response transcripts, and sub-agent usage, then processes the data in ClickHouse. In other words, Rudel is trying to become observability infrastructure for coding agents rather than just a prettier history viewer.
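As a rough illustration of that data flow — not Rudel's actual code; the field names and payload shape here are assumptions — a session-end hook could gather git metadata and bundle the transcript into one JSON payload before upload:

```python
import json
import subprocess

def git_meta():
    """Collect basic git metadata; returns None fields outside a repo."""
    def run(*args):
        try:
            return subprocess.check_output(
                ["git", *args], text=True, stderr=subprocess.DEVNULL
            ).strip()
        except (subprocess.CalledProcessError, FileNotFoundError):
            return None
    return {"branch": run("rev-parse", "--abbrev-ref", "HEAD"),
            "commit": run("rev-parse", "HEAD")}

def build_payload(session_id, started_at, transcript):
    """Assemble the payload a session-end hook might POST to an
    analytics backend. The schema is hypothetical, chosen to mirror
    the fields the README says Rudel stores."""
    return json.dumps({
        "session_id": session_id,
        "started_at": started_at,
        "git": git_meta(),
        "transcript": transcript,  # full prompt/response turns
    })

payload = build_payload("abc123", "2025-03-29T12:00:00Z",
                        [{"role": "user", "content": "fix the bug"}])
print(json.loads(payload)["session_id"])  # abc123
```

The privacy concerns raised on HN follow directly from this shape: the transcript field carries full prompts and responses, so whatever is in the session leaves the machine.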
HN reaction was mixed in a useful way. Some readers immediately asked whether it works for tools beyond Claude Code, including Codex. Others wanted a locally hosted alternative, better example runs, or more evidence behind the quoted dataset. Several commenters were simply uncomfortable with automatic transcript uploads. That tension is probably the real market test: coding-agent analytics is becoming necessary, but products in this category will only land if they can make privacy boundaries and data handling feel trustworthy. Original source: GitHub · rudel.ai. Community discussion: Hacker News.
Related Articles
A March 29 Hacker News post pushed a GitHub issue alleging that Claude Code was running `git fetch origin` plus `git reset --hard origin/main` every 600 seconds against a user repo. The root cause is still unresolved, but the report sharply reopens the repo-safety question for agentic coding tools.
An essay that resonated on Hacker News looks back from ChatGPT's November 2022 launch to Claude Code, vibe coding, and local LLMs, arguing that AI delivers real value that is nonetheless harder to measure than the hype suggests.
HN focused less on telemetry as an idea and more on whether opt-out controls work when `gh` runs inside CI, servers, and automation.