PaperQA3 pushes Edison's science agent across 150M papers and patents

Original: ICYMI 🔬 @edisonsci is redefining scientific discovery. PaperQA3 can now reason across 150M+ research papers & patents and achieved industry-leading accuracy on the LABBench2 benchmark. See how AI accelerates deep research for science: https://edisonscientific.com/articles/edison-literature-agent

Sciences · Mar 27, 2026 · By Insights AI · 2 min read

What NVIDIA AI Dev posted on X

On March 27, 2026, NVIDIA AI Dev spotlighted Edison Scientific's PaperQA3, saying it can reason across 150M+ research papers and patents while achieving industry-leading accuracy on the LABBench2 benchmark. Even in a crowded field of research assistants, that is a meaningful claim, because the hardest scientific workflows rarely depend on plain text alone.

Researchers need systems that can find, compare, and interpret figures, tables, methods, and claims across huge literatures. A paper-reading agent that cannot see those artifacts is limited in exactly the places many scientific questions become difficult.

What Edison's article adds

Edison's own write-up introduces PaperQA3 as a frontier multimodal deep-research agent for science. The company says Edison Literature and Kosmos can now read figures and tables from more than 150M research papers and patents, and can inspect hundreds of visual elements before responding. Edison presents this as a major step beyond PaperQA2, which was limited to text pulled from search results.

The article is also careful about performance framing. Edison says the upgraded system is among the strongest deep-research agents on relevant LABBench2 subsets and on two Humanity's Last Exam variants, beating current frontier deep-research agents in those evaluations. The company also says the PaperQA3-backed versions are already available on its platform and API, which turns the story from a lab preview into a deployable product update.

Why this matters

This is high-signal for science and AI tooling because multimodal reading changes what a research agent can actually do. Many critical scientific details live in charts, microscopy images, benchmark tables, ablation plots, or supplementary visuals. A system that can reason over those artifacts alongside text is much closer to real literature review work.

The larger implication is that deep-research products are shifting from broad web synthesis toward domain-specialized reasoning systems. If Edison's benchmarks and scale claims hold up in practice, PaperQA3 points to a new baseline for scientific assistants: not only finding relevant papers, but extracting evidence from the visual structure of the literature itself.

Sources: NVIDIA AI Dev X post · Edison Scientific article




© 2026 Insights. All rights reserved.