Karpathy at Sequoia Ascent 2026: Three New Frontiers LLMs Open Beyond Speed


LLM · May 3, 2026 · By Insights AI (Twitter) · 1 min read

LLMs as More Than Accelerants

Andrej Karpathy shared highlights from a fireside chat at Sequoia Ascent 2026. His central argument: LLMs are not just tools for doing what we already do faster; they unlock categories of functionality that either should no longer need to exist or were previously impossible.

Three New Horizons

1. LLM-native apps (e.g., menugen)
Apps where the LLM handles all computation natively (e.g., image in, image out), with no classical code required. The app becomes just a prompt, not software.
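The idea can be sketched in a few lines: the entire "application" is a prompt, and everything else is just wiring. `generate_image` here is a hypothetical stand-in for any image-to-image LLM API, not a specific vendor SDK, and the `menugen` behavior is assumed from the talk's example.

```python
# An "app as a prompt": the only application logic is the prompt itself.
MENU_PROMPT = (
    "You are menugen. Given a photo of a restaurant menu, "
    "return an image of each dish rendered as a photo."
)

def menugen(menu_photo: bytes, generate_image) -> bytes:
    """The whole app: forward the prompt and input to the model."""
    return generate_image(prompt=MENU_PROMPT, image=menu_photo)

# With a stub model, the wiring can be exercised end to end:
fake_model = lambda prompt, image: b"rendered-" + image
print(menugen(b"menu.jpg", fake_model))  # → b'rendered-menu.jpg'
```

Note there is no parsing, layout, or image-processing code anywhere: swapping the prompt swaps the app.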

2. .md skills instead of .sh scripts
Why write a complex bash install script when you can write the installation out in natural language and hand it to an LLM? The LLM reads English as a high-level interpreter, targets your specific setup, and debugs inline.
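A minimal sketch of this inversion, assuming a hypothetical `run_skill` helper and an `llm` callable (no real agent framework is implied): the "script" is plain English in a .md file, and the model is its interpreter.

```python
# A natural-language "install script": English instructions in a .md file,
# interpreted by an LLM rather than by bash.
INSTALL_SKILL_MD = """\
# Install the toolchain
1. Detect the OS and package manager.
2. Install git and python3 if missing.
3. Clone the repo and run its setup, fixing any errors you hit.
"""

def run_skill(skill_md: str, llm) -> str:
    """Hand the English 'program' to the model, which targets this machine."""
    return llm(f"Execute these instructions on the current system:\n{skill_md}")

# A trivial echo stub standing in for the model:
echo_llm = lambda prompt: f"plan for: {prompt.splitlines()[1]}"
print(run_skill(INSTALL_SKILL_MD, echo_llm))
```

Unlike a bash script, the same .md skill works across distros and shells, because the model adapts the steps to whatever it finds on the machine.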

3. LLM knowledge bases
Computation over unstructured data from arbitrary sources and formats was fundamentally impossible with classical code. LLMs make this a first-class capability.
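A sketch of what such a "query" looks like, assuming a hypothetical `ask` callable in place of a real LLM: heterogeneous, unstructured sources go in as-is, and the model does the computation no classical parser could do generically.

```python
# An LLM knowledge base: a question computed over mixed-format, unstructured
# sources (emails, scans, chat logs) with no schema or parser.
documents = [
    "email: shipment delayed to May 12 due to customs",
    "meeting notes (scanned): budget approved, $40k",
    "slack: Bob says the vendor contract renews in June",
]

def query_kb(question: str, docs: list, ask) -> str:
    context = "\n---\n".join(docs)
    return ask(f"Answer from these sources:\n{context}\n\nQ: {question}")

# A trivial keyword stub standing in for the model:
stub = lambda prompt: "May 12" if "shipment" in prompt else "unknown"
print(query_kb("When does the shipment arrive?", documents, stub))  # → May 12
```

The point is what is absent: no regexes, no per-format ingestion code, no schema. The sources stay unstructured and the model carries the structure.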

Explaining LLM Jaggedness

Karpathy addressed why the same model can coherently refactor a 100,000-line codebase yet give nonsensical answers elsewhere. The answer lies in verifiability and economics: RL training distributions follow revenue and total addressable market (TAM), so models excel on tasks well represented in training data and struggle off-distribution. Understanding this jaggedness is key to harnessing LLMs in practice.

The Agent-Native Economy

Karpathy's third theme: products and services decomposing into sensors, actuators, and logic across all computing paradigms. The emerging core skill in agentic engineering is making information maximally legible to LLMs. His longer-term vision: mostly-neural computing with classical CPUs as coprocessors.
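What "making information legible" can mean concretely: flattening nested application state into plain text a model reads reliably. This is a sketch under assumptions; the markdown-bullet shape is a choice for illustration, not a standard.

```python
# Legibility work in agentic engineering: render structured state as plain
# indented text instead of handing the model raw nested objects.
def to_legible(record: dict, indent: int = 0) -> str:
    lines = []
    for key, value in record.items():
        pad = "  " * indent
        if isinstance(value, dict):
            lines.append(f"{pad}- {key}:")
            lines.append(to_legible(value, indent + 1))
        else:
            lines.append(f"{pad}- {key}: {value}")
    return "\n".join(lines)

order = {"id": 42, "customer": {"name": "Ada", "tier": "pro"}}
print(to_legible(order))
```

In Karpathy's framing this sits on the "sensor" side: the classical program gathers and serializes state, and the LLM supplies the logic over the legible text.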


