r/MachineLearning Thread Highlights Flower, a Warp-Centric Neural PDE Solver
Original: [R] Neural PDE solvers built (almost) purely from learned warps
Thread context
The r/MachineLearning post [R] Neural PDE solvers built (almost) purely from learned warps reached 79 points and 20 comments. The author explicitly labeled it as their own work and linked both a ResearchGate paper and a public GitHub repository, making the discussion unusually concrete for an early-stage research share.
Core architectural idea
According to the post, Flower treats learned spatial warps as the main interaction primitive. At each position, the model predicts displacements and samples features from shifted coordinates. While the implementation borrows transformer-era engineering choices such as multi-head paths, projections, skip connections, and U-Net scaffolding, the key claim is that in-scale spatial mixing comes primarily from warping rather than heavy convolution or attention blocks.
The author argues this can keep cost closer to linear in grid points, which is relevant for 3D PDE workloads where memory and compute scale rapidly.
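The warp primitive described above, predicting a displacement at each position and sampling features at the shifted coordinate, can be illustrated with a minimal 1D sketch. This is not the author's implementation; the function name and interface are hypothetical, and the displacements are passed in directly where the real model would produce them with a small learned network. The point of the sketch is the cost argument: each output point reads only its two interpolation neighbors, so the mixing step is linear in the number of grid points.

```python
import numpy as np

def warp_mix_1d(features, displacements):
    """Sample features at warped coordinates via linear interpolation.

    features:      (n,) values on a uniform 1D grid
    displacements: (n,) offsets in grid units (learned in the real model;
                   plain inputs in this sketch)
    Cost is O(n): each output reads at most two neighboring inputs.
    """
    n = features.shape[0]
    coords = np.arange(n) + displacements      # warped sample positions
    coords = np.clip(coords, 0, n - 1)         # clamp to the grid boundary
    lo = np.floor(coords).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = coords - lo
    return (1 - frac) * features[lo] + frac * features[hi]

# A uniform displacement of +1 grid point shifts the signal left,
# with the clamped boundary repeating the last value.
x = np.array([0.0, 1.0, 2.0, 3.0])
print(warp_mix_1d(x, np.full(4, 1.0)))  # [1. 2. 3. 3.]
```

In 2D or 3D the same idea uses bilinear or trilinear interpolation (e.g. a `grid_sample`-style operation), but the per-point cost stays constant, which is what makes the linear-in-grid-points claim plausible for 3D workloads.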
Claimed benchmark outcomes
- Across 16 datasets, mostly drawn from The Well, Flower reportedly leads on one-step prediction against similarly sized FNO, convolutional U-Net, and attention baselines.
- For 20-step autoregressive rollouts, the reported gains persist on most tasks, with one difficult regime in which all models degrade.
- A larger 150M-parameter variant is claimed to beat a much larger pretrained model (Poseidon, 628M) on a compressible Euler setting.
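The distinction between one-step prediction and 20-step autoregressive rollout in the results above is worth making concrete, since rollouts are where the post reports advantages shrinking. A rollout feeds the model its own output back as input, so errors compound. A minimal sketch of such an evaluation loop (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def rollout_error(model, u0, reference, steps=20):
    """Run `model` autoregressively for `steps` steps from initial state
    `u0`, recording the L2 error against a reference trajectory.

    model:     callable mapping one state (n,) to the next state (n,)
    reference: (steps, n) ground-truth trajectory
    """
    state = u0
    errors = []
    for t in range(steps):
        state = model(state)  # the model consumes its own previous output
        errors.append(np.linalg.norm(state - reference[t]))
    return np.array(errors)

# Toy check: a model matching the true dynamics has zero rollout error.
true_step = lambda u: np.roll(u, 1)  # stand-in "dynamics" for illustration
u0 = np.arange(4, dtype=float)
ref = np.stack([np.roll(u0, t + 1) for t in range(5)])
print(rollout_error(true_step, u0, ref, steps=5))  # all zeros
```

One-step metrics score `model(reference[t])` against `reference[t+1]` instead, which hides this compounding; that is why the one-step and rollout rankings in the post can legitimately differ.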
Limitations and community questions
The post also lists caveats: advantages can shrink in long rollouts, and there are stability issues under some conditions. Commenters asked the right next-step questions, including transfer to harder operational domains (for example weather-like scenarios) and behavior around discontinuities or shocks where smooth warps may be stressed.
Because this is primarily author-reported and described as pre-arXiv at posting time, independent replication remains important. Even so, the thread is technically valuable: it surfaces a credible systems-and-architecture alternative in scientific ML where efficiency and physical structure both matter.
Sources: r/MachineLearning post, paper link, code link