A March 2026 Hacker News thread (120 points, 33 comments) surfaced a deep technical explainer on the Hamilton-Jacobi-Bellman (HJB) equation. The post argues that continuous-time reinforcement learning and diffusion models can be understood through the same control-theory structure rather than as separate ML tricks.
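For reference, the shared structure is standard control theory (stated here generically, not quoted from the linked post): the deterministic HJB equation for the optimal cost-to-go V(x,t) under dynamics ẋ = f(x,u) with running cost ℓ(x,u) reads:

```latex
% Deterministic Hamilton-Jacobi-Bellman equation for the value function V(x,t):
-\frac{\partial V}{\partial t}(x,t)
  = \min_{u}\Big[\, \ell(x,u) + \nabla_x V(x,t)^{\top} f(x,u) \,\Big]
```

With stochastic dynamics dx = f dt + σ dW, a second-order term ½ Tr(σσᵀ ∇²ₓV) is added inside the minimization, which is where diffusion-style processes connect to the same equation.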
Sciences
Meta said on March 26, 2026 that TRIBE v2 can predict high-resolution fMRI brain activity with zero-shot generalization across new subjects, languages, and tasks. The company is also releasing the model, code, paper, and demo for researchers.
Hacker News surfaced a CERN story about pushing ultra-compact AI into the LHC trigger path, where collision data must be filtered at 40 MHz (one collision every 25 ns) within a latency budget of roughly 50 ns. The notable point is not generative AI, but highly specialized anomaly detection running in CMS Global Trigger test-crate FPGAs.
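The CMS algorithms themselves are not described here, but the general shape of trigger-path anomaly detection can be sketched: a small model with fixed weights scores each event by reconstruction error, and events above a threshold are kept. Everything below (the linear encoder/decoder, the threshold value) is a hypothetical illustration, not the deployed system; real FPGA triggers use heavily quantized, pipelined networks trained offline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: a tiny linear autoencoder (encoder E, decoder D)
# that would be trained offline on ordinary collisions and frozen in hardware.
n_features, n_latent = 8, 2
E = rng.normal(size=(n_latent, n_features))
D = rng.normal(size=(n_features, n_latent))

def anomaly_score(event):
    """Reconstruction error: small for events resembling the training
    distribution, large for events the compressed model cannot represent."""
    recon = D @ (E @ event)
    return float(np.sum((event - recon) ** 2))

THRESHOLD = 50.0  # illustrative; in practice tuned offline to a target trigger rate

def keep_event(event):
    # The trigger decision: flag (keep) events whose error exceeds the threshold.
    return anomaly_score(event) > THRESHOLD

print(keep_event(rng.normal(size=n_features)))
```

The point of this formulation is that the per-event work is a fixed sequence of multiply-accumulates and one comparison, which is what makes a fully pipelined, fixed-latency FPGA implementation feasible at all.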
Google DeepMind said on February 11, 2026 that Gemini Deep Think is being used on professional research problems across mathematics, physics, and computer science. The company highlighted its Aletheia math agent, scores of up to 90% on IMO-ProofBench Advanced, and collaborations on 18 research problems as evidence that AI is moving from benchmark performance toward real scientific workflow support.
Google Research said on March 12, 2026 that it is rolling out urban flash flood forecasts with up to 24 hours of advance notice. The system uses a Groundsource dataset built from public reports and extends Google’s flood coverage beyond river flooding toward rapid-onset urban disasters.
Google said on March 10, 2026 that research with Imperial College London and the NHS found its mammography system identified 25% of interval cancers that conventional screening had missed. The company also said a second study suggests AI could reduce screening workload by about 40% when used as a second reader.
NVIDIA AI Dev highlighted on March 27, 2026 that Edison's PaperQA3 can reason over more than 150 million research papers and patents, pointing to strong LABBench2 results. Edison's article says the multimodal system can now read figures and tables, compare hundreds of visual elements before answering, and rank among the strongest deep-research agents on relevant LABBench2 subsets.
Anthropic said on March 23, 2026 that not every long-horizon task benefits from splitting work across many agents, and pointed to a sequential setup for modeling the early universe. In the linked research post, Anthropic describes using Claude Opus 4.6 with persistent memory, orchestration patterns, and test oracles to implement a differentiable cosmological Boltzmann solver.
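Anthropic's solver is not public here, but what "differentiable" means for a numerical solver can be illustrated in miniature: carry derivatives with respect to a physical parameter through the integration loop itself. The sketch below (a toy, not Anthropic's method) uses forward-mode dual numbers to propagate dy/dk through an explicit Euler solve of dy/dt = -k·y, an ODE whose exact sensitivity is known and so doubles as a test oracle.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode dual number: a value and its derivative w.r.t. one parameter."""
    val: float
    dot: float
    def __add__(self, o): return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o): return Dual(self.val * o.val,
                                      self.val * o.dot + self.dot * o.val)

def solve(k, y0=1.0, t=1.0, steps=10000):
    """Euler-integrate dy/dt = -k*y from 0 to t, carrying dy/dk alongside y."""
    dt = t / steps
    y = Dual(y0, 0.0)   # initial condition does not depend on k: dy0/dk = 0
    k = Dual(k, 1.0)    # seed: dk/dk = 1
    neg_dt = Dual(-dt, 0.0)
    for _ in range(steps):
        y = y + neg_dt * k * y   # y_{n+1} = y_n - dt*k*y_n, product rule applied
    return y

out = solve(2.0)
# Oracle: y(t) = exp(-k*t), so dy/dk = -t*exp(-k*t); both ≈ ±0.13534 at k=2, t=1.
print(out.val, out.dot)
```

A real Boltzmann solver would swap the toy ODE for coupled perturbation equations and the hand-rolled duals for a framework like JAX, but the structure (gradients flowing through the integrator, checked against analytic oracles) is the same idea the post describes.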
Google Research and Google DeepMind published a real-world feasibility study of AMIE in ambulatory primary care with Beth Israel Deaconess Medical Center. The study found AMIE roughly on par with primary care physicians on overall management plans and differential diagnoses, while also documenting important practical limitations.
Google Research introduced S2Vec on March 24, 2026 as a self-supervised way to turn built-environment data into general-purpose embeddings. The framework aims to predict socioeconomic and environmental patterns from how cities are physically organized.
Google Research said on March 16, 2026 that its superconductivity case study found curated-source systems outperforming open-web LLMs. NotebookLM and a custom RAG setup scored highest on expert-written questions about high-temperature superconductors.
r/singularity amplified Google's decision to add neutral-atom research alongside superconducting quantum hardware, framing it as a hedge between qubit count, circuit depth, and commercialization timelines.