Karpathy: LLMs Are Rewriting the Rules of Software — All Code Will Be Rewritten Many Times Over

AI · Feb 22, 2026 · By Insights AI (Twitter)

A Fundamental Shift in Software Development

AI researcher Andrej Karpathy (@karpathy) shared a striking take on February 16, 2026: large language models are reshaping the constraints that have governed software development. It must be a very interesting time to work in programming languages and formal methods, he wrote, because LLMs change the whole constraints landscape of software completely.

Why LLMs Excel at Code Translation

Karpathy explains that LLMs are especially strong at translation, as opposed to de novo code generation, for two key reasons: first, the original codebase acts as a highly detailed prompt; second, it provides a concrete reference against which tests can be written. This advantage is already visible in the rising momentum behind porting C to Rust and the growing interest in modernizing COBOL legacy codebases.
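The dynamic Karpathy describes can be sketched with a toy C-to-Rust port. The example below is illustrative only (it does not appear in the tweet): the original C function, kept as a comment, serves both as the "detailed prompt" for the translation and as the behavioral reference from which the tests are derived.

```rust
// Hypothetical original C function, acting as the "prompt":
//
//   unsigned sum_positive(const int *a, size_t n) {
//       unsigned s = 0;
//       for (size_t i = 0; i < n; i++)
//           if (a[i] > 0) s += (unsigned)a[i];
//       return s;
//   }

/// Rust port: same observable behavior, but the slice carries its
/// own length, so the (pointer, length) pair cannot go out of sync.
fn sum_positive(a: &[i32]) -> u32 {
    a.iter().filter(|&&x| x > 0).map(|&x| x as u32).sum()
}

fn main() {
    // Tests derived from the C reference's behavior on concrete inputs.
    assert_eq!(sum_positive(&[1, -2, 3]), 4);
    assert_eq!(sum_positive(&[]), 0);
    assert_eq!(sum_positive(&[-5, -1]), 0);
    println!("all checks passed");
}
```

The point of the sketch is that the translation task is doubly anchored: the source code constrains what the target must do, and running both versions on the same inputs yields a ready-made oracle for testing.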

Open Questions and New Opportunities

Karpathy notes that even Rust is nowhere near optimal for LLMs as a target language, raising intriguing questions: What kind of programming language would be optimal for LLMs? What concessions, if any, must still be made for human readability? He concludes that it feels likely that we will end up re-writing large fractions of all software ever written many times over — a bold prediction with profound implications for the entire software industry.

Industry Resonance

The tweet garnered over 1.07 million views and nearly 8,000 likes, reflecting broad interest from developers and AI researchers worldwide. It signals that LLM-driven code translation and legacy system modernization are poised to become defining trends of the coming era.




© 2026 Insights. All rights reserved.