Together Research says LLMs can repair bad database query plans
On April 3, 2026, Together AI’s X account promoted new research claiming that LLMs can repair query plans when a database optimizer misses semantic correlations. The post highlights DBPlanBench, a system that hands the LLM a database’s physical operator graph and asks it to patch the plan directly instead of rewriting the full execution strategy from scratch.
What the research claims
The team says DBPlanBench works on Apache DataFusion plans and uses localized edits plus an evolutionary search loop to refine candidates. In the X post, Together reports up to 4.78x speedups on TPC-H and TPC-DS, says 60.8% of tested queries improved by more than 5%, and cites a build-memory reduction from 3.3 GB to 411 MB in one of its examples. The related arXiv paper frames the motivation in conventional database terms: cost estimators can miss semantic correlations in the data, which leads to bad join orders, poor access-path choices, and cascading planning errors.
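To make the "localized edits plus evolutionary search" idea concrete, here is a minimal, hypothetical sketch of such a loop. All names and structures here are assumptions for illustration: a plan is reduced to a join order over three TPC-H-style table names, the cost function is a toy stand-in for actually executing and timing a plan, and `propose_edit` stands in for the LLM proposing a small, bounded change rather than a full rewrite. The real system operates on DataFusion physical operator graphs, not Python lists.

```python
import random

# Toy per-table weights; in TPC-H, lineitem is the largest table.
WEIGHTS = {"lineitem": 100, "orders": 10, "customer": 1}

def cost(plan):
    """Toy cost model: tables joined earlier get a larger multiplier,
    so joining the big table first is expensive. Stands in for running
    the plan and measuring latency."""
    return sum(WEIGHTS[t] * (len(plan) - i) for i, t in enumerate(plan))

def propose_edit(plan, rng):
    """Stand-in for the LLM proposer: a localized edit that swaps two
    join positions instead of rewriting the whole plan."""
    i, j = rng.sample(range(len(plan)), 2)
    child = plan[:]
    child[i], child[j] = child[j], child[i]
    return child

def evolve(plan, generations=50, pool_size=8, seed=0):
    """Elitist evolutionary loop: sample a pool of edited candidates,
    evaluate each, and keep the cheapest plan seen so far."""
    rng = random.Random(seed)
    best = plan
    for _ in range(generations):
        candidates = [propose_edit(best, rng) for _ in range(pool_size)]
        best = min(candidates + [best], key=cost)
    return best

baseline = ["lineitem", "orders", "customer"]   # optimizer's plan
improved = evolve(baseline)                      # refined join order
```

With this toy cost model the loop converges on joining the smallest table first, mirroring the paper's premise that a cheap execute-and-evaluate loop over small edits can recover from a mis-costed join order.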
Why it matters
This is a useful example of LLMs being applied below the application layer, inside systems infrastructure that normally depends on handwritten heuristics. The notable design choice is not to have the model generate a brand-new plan, but to let it inspect an already-optimized physical plan and suggest bounded changes that can be executed and evaluated. If the approach holds up beyond benchmark settings, it could open a path to narrower, higher-confidence uses of LLMs in database engines and other optimization stacks.
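The "bounded changes that can be executed and evaluated" step implies an acceptance gate: a patched plan is only kept if it is semantically equivalent to the original and measurably faster. The sketch below is an assumption about how such a gate might look, not Together's implementation; the function name, the injected timings, and the 5% threshold (echoing the ">5% improvement" figure reported in the post) are all hypothetical.

```python
def accept_patch(rows_orig, t_orig, rows_new, t_new, min_speedup=1.05):
    """Hypothetical acceptance gate for a bounded plan edit.

    rows_orig / rows_new: result rows from the original and patched plans
    t_orig / t_new:       measured execution times in seconds
    Keep the patch only if results match (order-insensitive) and the
    patched plan is at least min_speedup faster.
    """
    if sorted(rows_new) != sorted(rows_orig):
        return False  # the edit changed query semantics: reject outright
    return t_orig / t_new >= min_speedup
```

Gating on executed results rather than on the model's own judgment is what makes this a narrower, higher-confidence use of an LLM: a bad suggestion costs one trial execution instead of a wrong answer.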
Source materials include Together AI’s X post and the paper “Making Databases Faster with LLM Evolutionary Sampling”.