LLM Reddit 2d ago 2 min read
A LocalLLaMA post pointed to a new Hugging Face dataset of human-written code reviews, pairing before-and-after code changes with inline reviewer comments and negative examples across 37 languages.
#llm-training
LLM Twitter 3d ago 2 min read
Andrej Karpathy has published autoresearch, a minimal repo that lets AI agents iterate on a stripped-down nanochat training loop overnight. The project turns agent evaluation into a closed-loop research workflow with fixed 5-minute runs, Git branches, and validation-loss-based selection.
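The workflow described above can be sketched as a tiny closed loop. Everything below is a hypothetical stand-in, not the actual autoresearch code: `train_and_validate`, `research_loop`, and the `lr_scale` knob are assumptions, a noisy toy objective replaces the real fixed-budget nanochat run, and random tweaks replace agent edits on their own Git branches.

```python
import random

def train_and_validate(lr_scale: float, budget_s: int = 300) -> float:
    # Hypothetical stand-in for one fixed-budget (5-minute) training run;
    # returns a noisy toy "validation loss" instead of actually training.
    return 3.0 - 0.1 * lr_scale + random.uniform(-0.02, 0.02)

def research_loop(n_iters: int = 5):
    # Closed-loop selection: each iteration proposes a variant (a random
    # tweak here, standing in for an agent's edit on a Git branch), runs
    # a fixed-budget job, and keeps whichever variant has achieved the
    # lowest validation loss so far.
    best_scale = 1.0
    best_loss = train_and_validate(best_scale)
    for _ in range(n_iters):
        candidate = best_scale + random.uniform(-0.2, 0.2)
        loss = train_and_validate(candidate)
        if loss < best_loss:  # validation-loss-based selection
            best_scale, best_loss = candidate, loss
    return best_scale, best_loss
```

The point of the sketch is the selection rule: variants survive only if they lower validation loss within the fixed run budget, which is what turns overnight agent iteration into a closed research loop.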
LLM Hacker News Mar 5, 2026 2 min read
A March 4, 2026 Hacker News thread surfaced Q Labs’ Slowrun benchmark, which fixes the training data at 100M FineWeb tokens and rewards data efficiency under large compute budgets.
LLM Reddit Feb 21, 2026 2 min read
A high-engagement r/singularity post pointed to arXiv 2602.15322, which reports that masked adaptive updates and the proposed Magma optimizer can lower perplexity for 1B-parameter models versus Adam and Muon with minimal overhead.
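The summary does not say how the masking works, so the sketch below is only a generic illustration of a "masked adaptive update", not the paper's Magma algorithm. The hypothetical `masked_adam_step` applies an Adam-style adaptive update only to the coordinates with the largest gradient magnitudes and skips the rest for that step; the top-magnitude masking rule and `mask_frac` parameter are assumptions for illustration.

```python
import math

def masked_adam_step(params, grads, m, v, t, lr=1e-3, b1=0.9, b2=0.999,
                     eps=1e-8, mask_frac=0.5):
    # Illustrative masked adaptive update (NOT the paper's Magma method):
    # only the top mask_frac fraction of coordinates by |gradient| get an
    # Adam-style update this step; masked coordinates are left untouched.
    k = max(1, int(len(grads) * mask_frac))
    threshold = sorted((abs(g) for g in grads), reverse=True)[k - 1]
    for i, g in enumerate(grads):
        if abs(g) < threshold:
            continue  # masked out: no parameter or moment update this step
        m[i] = b1 * m[i] + (1 - b1) * g          # first-moment estimate
        v[i] = b2 * v[i] + (1 - b2) * g * g      # second-moment estimate
        m_hat = m[i] / (1 - b1 ** t)             # bias correction
        v_hat = v[i] / (1 - b2 ** t)
        params[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return params
```

The intuition the sketch captures is the claimed low overhead: coordinates outside the mask skip the adaptive bookkeeping entirely on that step.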