From Prompt Tricks to Process: HN Spotlights Agentic Engineering Patterns
Original: Agentic Engineering Patterns
Why this HN thread mattered
The Hacker News discussion for "Agentic Engineering Patterns" drew strong engagement, signaling that the conversation around coding agents is maturing from "which model is best" to "which engineering practices survive in production." The linked guide by Simon Willison is a living index of techniques for getting reliable outcomes from tools such as Claude Code and OpenAI Codex.
Instead of presenting a single magic prompt, the guide groups practices into a reusable structure:
- Principles: including "Writing code is cheap now," "Hoard things you know how to do," and explicit anti-patterns.
- Testing and QA: chapters like "Red/green TDD," "First run the tests," and "Agentic manual testing."
- Understanding code: walkthrough and explanation patterns that keep humans in control of evolving systems.
- Annotated prompts and an appendix of practical prompt assets.
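The "Red/green TDD" pattern above is the easiest of these to encode directly. As a minimal sketch (the `slugify` function and its test are illustrative, not taken from the guide): the human or agent writes a failing test first, then generates only enough code to make it pass.

```python
import re

# Red step: the test exists before the implementation and initially fails.
def test_slugify():
    assert slugify("Agentic Engineering!") == "agentic-engineering"
    assert slugify("  Hello  World ") == "hello-world"

# Green step: the minimal implementation written to satisfy the test.
def slugify(text: str) -> str:
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics
    return text.strip("-")

test_slugify()  # passes only once the implementation matches the spec
```

The point is the ordering, not the function: the test is the contract the agent's output is evaluated against, which is what makes the loop checkable rather than vibes-based.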
This taxonomy is useful because teams can standardize behavior around it. You can ask an agent to operate inside specific constraints, then evaluate output against known QA checkpoints. That moves the workflow away from ad-hoc generation and toward repeatable delivery.
Agentic engineering as a professional practice
In his introduction post, Willison defines agentic engineering as software development with agents that can generate and execute code, test their own changes, and iterate with less turn-by-turn supervision. He contrasts this with "vibe coding," in which code inspection can be minimal. The practical implication is clear: production teams still need deliberate review, test hygiene, and change traceability, even when generation speed increases dramatically.
For engineering leaders, the guide’s strongest idea is portability. Model providers and tools will keep changing, but habits like small scoped tasks, tests before merge, and explicit anti-pattern checklists can stay stable. That gives organizations a way to absorb rapid model change without letting quality or security drift.
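Those portable habits can even be made machine-checkable. A hypothetical sketch (the function name, thresholds, and scope convention are assumptions, not from the guide) of a merge gate that encodes "small scoped tasks" and "tests before merge" as explicit checks rather than reviewer memory:

```python
def merge_gate(changed_files: list[str], allowed_scope: str,
               tests_passed: bool, max_files: int = 5) -> bool:
    """Accept an agent-produced change only if it is tested, small, and in scope."""
    if not tests_passed:
        return False  # tests before merge, no exceptions
    if len(changed_files) > max_files:
        return False  # keep tasks small and scoped
    # every touched file must stay inside the agreed scope for this task
    return all(f.startswith(allowed_scope) for f in changed_files)

# Example: a two-file change inside src/ with green tests is allowed.
print(merge_gate(["src/app.py", "src/util.py"], "src/", tests_passed=True))
```

Because the policy lives in code rather than in a model provider's behavior, it survives swapping Claude Code for Codex or whatever comes next.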
Source guide: Agentic Engineering Patterns