Hacker News Revisits the First 40 Months of the AI Era Through Claude Code and Local LLMs

Original: The first 40 months of the AI era

AI · Mar 29, 2026 · By Insights AI (HN) · 2 min read

The Hacker News thread around "The first 40 months of the AI era" landed because it captures a practitioner view that feels familiar in early 2026: AI is already useful, but the hard question is still how much durable productivity it actually creates. The original post looks back from ChatGPT's November 2022 launch to a present shaped by Claude Code, vibe coding, and fast-improving local models.

The author describes the arc many developers followed. Early interactions with ChatGPT were about surprise: coherent text, working code snippets, and a sense that the technology had crossed out of toy status. But the next phase was more complicated. In small projects, AI could generate a working first version quickly, yet repeated prompting often drifted off target, and large parts of the generated code were eventually rewritten by hand. That is why the post refuses the easy "AI saves time" narrative: the practical value may come less from raw speed and more from scope expansion, prototyping, and reducing friction at the start of a task.

The most positive section is the author's review of Claude Code. Rather than treating it as a generic chatbot, the post describes it as a new input layer for the computer, sitting alongside the keyboard, mouse, and terminal. That framing matters: the claimed win is not perfect autonomy but natural-language control that can edit files, search, and perform routine developer work with less copy-paste overhead than web chat interfaces. For readers on Hacker News, that matches the part of AI that feels immediately real.

At the same time, the post is skeptical about the surrounding hype. It questions whether vibe coding truly saves effort once rework is counted, notes the risk of overly flattering or confidence-boosting AI advice, and argues that AI-generated prose still feels generic enough that the author avoids publishing it verbatim. The essay also points to two pressures that may reshape usage this year: rumored rate limits on hosted assistants and the steady improvement of local LLM stacks.

  • Useful today: code scaffolding, a substitute for routine research tasks, and conversational control of developer tools.
  • Still unresolved: measuring real productivity gains after revisions, debugging, and oversight are included.
  • Strategic takeaway: if local models keep improving, some of the current subscription economics may get harder to justify.

That mix of enthusiasm and restraint is why the essay traveled on Hacker News. It does not deny the step change introduced after November 2022. It argues that the lasting story of the first 40 months is not magic, but renegotiation: which parts of thinking, coding, and writing people are actually willing to hand over to a model, and which parts they still want to keep for themselves.


