Agents Need Control Flow, Not More Prompts


AI · May 8, 2026 · By Insights AI (HN) · 1 min read

The Prompting Ceiling

If you have ever written MANDATORY or DO NOT SKIP in an AI agent prompt, you have hit the ceiling of prompt-based approaches. Developer Bryan Suh argues in a widely shared HN post that reliable AI agents require deterministic control flow, not more elaborate prompting.

Why Prompt Chains Fall Short

Suh frames LLMs as a programming language where statements are suggestions and functions return Success while hallucinating. In this environment, predictable behavior and local reasoning become nearly impossible. Prompt chains are non-deterministic, weakly specified, and difficult to verify.

Traditional software scales through recursive composability: libraries, modules, and functions stacked reliably together. Prompt chains lack this property entirely.
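The composability point can be made concrete. In a minimal sketch (all function names hypothetical), typed functions compose because each step's contract is checked at its boundary, so errors surface exactly where they occur, which is the property prompt chains lack:

```python
# Hypothetical sketch: traditional composition gives local guarantees.
# Each function checks its own contract, so failures are localized.

def parse_amount(text: str) -> int:
    """Parse a currency string like '$1,200' into cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return round(float(cleaned) * 100)

def apply_tax(cents: int, rate: float = 0.08) -> int:
    """Apply a tax rate; rejects invalid input at the boundary."""
    if cents < 0:
        raise ValueError("amount must be non-negative")
    return round(cents * (1 + rate))

# Composition: each step's output type is the next step's input type,
# and a bad value fails loudly at the step that received it.
total = apply_tax(parse_amount("$1,200"))
print(total)  # 129600
```

A prompt chain offers no equivalent: the "output type" of one prompt is free-form text, and a malformed value propagates silently into the next step.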

The Solution: Deterministic Scaffolds

Rather than treating the LLM as the entire system, it should be a component within a larger architecture with explicit state transitions and validation checkpoints. Logic must move out of prose and into runtime. The LLM handles ambiguity and natural language; the scaffold handles correctness and flow.

The Error Detection Problem

Even deterministic orchestration is insufficient without aggressive error detection; without it, an agent is just a fast way to reach the wrong conclusion. None of the fallbacks scale: constant human oversight, exhaustive post-run verification, or accepting outputs unverified.
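One cheap form of such detection, sketched here with hypothetical field names, is grounding checks: instead of trusting model output, verify it against the source document before anything downstream consumes it.

```python
# Hypothetical sketch: validate extracted fields against the source
# document rather than accepting the model's output on faith.

def detect_errors(source: str, extracted: dict) -> list[str]:
    """Return a list of problems; empty means the output passed."""
    errors = []
    # Grounding check: the claimed ID must actually appear in the source.
    if extracted.get("invoice_id", "") not in source:
        errors.append("invoice_id not found in source document")
    # Sanity check: the total must be a positive number.
    total = extracted.get("total")
    if not isinstance(total, (int, float)) or total <= 0:
        errors.append("total is missing or non-positive")
    return errors

doc = "Invoice INV-42, total due: 1200"
print(detect_errors(doc, {"invoice_id": "INV-42", "total": 1200}))  # []
print(detect_errors(doc, {"invoice_id": "INV-99", "total": -5}))
```

Checks like these are deterministic and fast, so the scaffold can run them on every step and retry or escalate long before a human would have noticed the error.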

Takeaway

The post earned 552 upvotes on HN. For anyone building agents, the message is clear: architectural rigor, not prompt elaboration, is what makes complex agent systems reliable.


