DeepClaude: Run Claude Code's Agent Loop with DeepSeek V4 Pro at 17x Less Cost
Original: DeepClaude – Claude Code agent loop with DeepSeek V4 Pro
The Concept
DeepClaude is an open-source tool that surgically replaces the AI brain inside Claude Code's agent loop while leaving the body intact. Everything developers rely on — file reading, editing, bash execution, subagent spawning, autonomous multi-step loops — continues to work as before. Only the model powering those decisions changes.
The project hit nearly 600 points on Hacker News and generated significant discussion around the Claude Code cost barrier.
Cost Breakdown
| Backend | Input/M | Output/M |
|---|---|---|
| DeepSeek (default) | $0.44 | $0.87 |
| OpenRouter | $0.44 | $0.87 |
| Fireworks AI | $1.74 | $3.48 |
| Anthropic (original) | $3.00 | $15.00 |
DeepSeek V4 Pro scores 96.4% on LiveCodeBench and includes automatic context caching that makes repeated turns up to 120x cheaper. For heavy coding workloads, the savings compound quickly.
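To make the table concrete, here is a small sketch comparing per-backend costs for a hypothetical workload (the token volumes are illustrative assumptions, not figures from the article; only the $/M rates come from the table above). It also shows where the headline "17x" roughly comes from: the output-token rate ratio between Anthropic and DeepSeek.

```python
# Rough monthly workload, purely illustrative (not from the article):
input_m, output_m = 2.0, 0.5  # millions of tokens in / out

# $/M-token rates taken from the pricing table above
rates = {
    "DeepSeek (default)": (0.44, 0.87),
    "Anthropic (original)": (3.00, 15.00),
}

costs = {
    name: input_m * r_in + output_m * r_out
    for name, (r_in, r_out) in rates.items()
}

for name, cost in costs.items():
    print(f"{name}: ${cost:.2f}/month")

# The headline ~17x figure matches the output-rate ratio:
print(f"output rate ratio: {15.00 / 0.87:.1f}x")  # ≈ 17.2x
```

The overall savings for a given team depend on its input/output mix, and the 120x caching discount on repeated turns (claimed above) would tilt the math further toward DeepSeek for long agentic sessions.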
How It Works
DeepClaude sets environment variables (ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN, and model-name overrides) per session rather than permanently altering your configuration; on exit, the original settings are restored. A --switch flag lets you change backends mid-session without restarting.
Caveats
DeepSeek's servers are in China — for enterprise or sensitive environments, OpenRouter or Fireworks AI (US-based) are recommended alternatives. Model behavior will differ from native Claude, and some Anthropic-specific capabilities may not translate.
Related Articles
HN did not read EvanFlow as another shiny agent wrapper so much as a set of brakes for agentic coding. Checkpoints, integration contracts, and explicit no-auto-commit rules drew more attention than the TDD label itself.
HN did not treat this as abstract legal trivia. Once the Claude Code leak became the hook, the thread turned into a practical question for every team shipping AI-assisted software: if the model wrote the bulk of it, what is actually yours?
This was not just another “local models are bad” rant. The thread blew up because it mixed a blunt reality check with a serious counterargument: some of the pain comes from small models, but a lot of it may come from the harness wrapped around them.