Show HN: Rudel brings analytics to Claude Code sessions and exposes early failure patterns

Original: Show HN: Rudel – Claude Code Session Analytics

AI · Mar 13, 2026 · By Insights AI (HN) · 2 min read

Rudel is an attempt to treat Claude Code as an observable workflow rather than a black box that only emits outputs. In the Show HN thread, the team says it built the product after realizing it had almost no visibility into its own coding sessions. By collecting and analyzing transcripts, the team says it assembled a dataset of 1,573 real sessions spanning more than 15M tokens and more than 270K interactions.

The most interesting claim is about early-session behavior. The post says skills appeared in only 4% of sessions, 26% of sessions were abandoned, most of those drop-offs happened within the first 60 seconds, and session success varied sharply by task type, with documentation doing best and refactoring doing worst. It also says error cascades in the first two minutes can predict abandonment with reasonable accuracy. That makes Rudel notable not as another coding assistant, but as an analytics layer for how coding assistants are actually used.
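The post does not describe how the abandonment predictor works, but the underlying idea, counting errors inside an early time window, is easy to sketch. The threshold, window length, and `Event` shape below are illustrative assumptions, not Rudel's actual model:

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # seconds since session start
    is_error: bool  # e.g. a failed tool call or rejected edit

def early_cascade_risk(events, window=120.0, threshold=3):
    """Flag a session as at risk of abandonment when it accumulates
    `threshold` or more errors inside the first `window` seconds.
    Heuristic illustration only; Rudel's predictor is not public."""
    early_errors = sum(1 for e in events if e.t <= window and e.is_error)
    return early_errors >= threshold
```

A real predictor would presumably weight error types and spacing rather than using a flat count, but even this crude rule captures the claim that trouble in the first two minutes is informative.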

The README fills in the mechanics. Users install the CLI, run rudel enable, and register a Claude Code hook that uploads transcripts when a session ends. The system stores session IDs, timestamps, project and package context, git metadata, full prompt-and-response transcripts, and sub-agent usage, then processes the data in ClickHouse. In other words, Rudel is trying to become observability infrastructure for coding agents rather than just a prettier history viewer.
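Claude Code transcripts are JSONL files, one JSON event per line, so the hook's upload step reduces to parsing that file and assembling a request body. The sketch below assumes that shape; the payload field names (`session_id`, `events`) are illustrative, not Rudel's documented schema:

```python
import json

def load_transcript(path):
    """Parse a Claude Code transcript (JSONL: one JSON event per line)
    into a list of dicts, skipping blank lines."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

def build_upload(events, session_id):
    """Assemble the JSON body a session-end hook might POST to an
    analytics backend. Field names are assumptions, not Rudel's API."""
    return json.dumps({"session_id": session_id, "events": events})
```

In practice the hook would POST this body with an API key attached, which is exactly the step that drew privacy objections in the thread.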

HN reaction was mixed in a useful way. Some readers immediately asked whether it works for tools beyond Claude Code, including Codex. Others wanted a locally hosted alternative, better example runs, or more evidence behind the quoted dataset. Several commenters were simply uncomfortable with automatic transcript uploads. That tension is probably the real market test: coding-agent analytics is becoming necessary, but products in this category will only land if they can make privacy boundaries and data handling feel trustworthy.

Original source: GitHub · rudel.ai. Community discussion: Hacker News.


© 2026 Insights. All rights reserved.