r/MachineLearning Thread Highlights Flower, a Warp-Centric Neural PDE Solver

Original: [R] Neural PDE solvers built (almost) purely from learned warps

Sciences · Feb 26, 2026 · By Insights AI (Reddit) · 2 min read

Thread context

The r/MachineLearning post [R] Neural PDE solvers built (almost) purely from learned warps reached 79 points and 20 comments. The author explicitly labeled it as their own work and linked both a ResearchGate paper and a public GitHub repository, making the discussion unusually concrete for an early-stage research share.

Core architectural idea

According to the post, Flower treats learned spatial warps as the main interaction primitive. At each position, the model predicts displacements and samples features from shifted coordinates. While the implementation borrows transformer-era engineering choices such as multi-head paths, projections, skip connections, and U-Net scaffolding, the key claim is that in-scale spatial mixing comes primarily from warping rather than heavy convolution or attention blocks.
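As a rough illustration of the mechanism the post describes (not the authors' implementation), a warp-based mixing step can be sketched in 1D with NumPy: a toy displacement head predicts a shift per grid point, and features are then gathered at the shifted coordinates by linear interpolation. The function names and the single-parameter displacement head are assumptions for clarity only.

```python
import numpy as np

def linear_sample_1d(feat, coords):
    """Sample 1D features at fractional coordinates by linear interpolation.
    feat: (n,) values on the integer grid 0..n-1; coords: (n,) sample positions."""
    n = feat.shape[0]
    c = np.clip(coords, 0.0, n - 1.0)           # keep samples inside the grid
    i0 = np.floor(c).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    w = c - i0                                  # fractional part = blend weight
    return (1.0 - w) * feat[i0] + w * feat[i1]

def warp_mix_1d(feat, weight, bias):
    """One warp-style mixing step: predict a displacement per point from the
    local feature value, then sample the feature field at x + displacement."""
    n = feat.shape[0]
    disp = weight * feat + bias                 # toy "displacement head" (assumed)
    coords = np.arange(n, dtype=float) + disp   # shifted sampling positions
    return linear_sample_1d(feat, coords)
```

With `weight = bias = 0` the layer reduces to the identity, and a constant bias acts like a learned shift; in the paper's setting the displacements would come from a trained multi-head network rather than an affine map of the local value.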

The author argues this can keep cost closer to linear in grid points, which is relevant for 3D PDE workloads where memory and compute scale rapidly.
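A back-of-envelope FLOP count makes the scaling argument concrete. The assumptions here are illustrative, not taken from the paper: dense self-attention mixes all pairs of grid points, while a warp gather touches only a fixed number of interpolation neighbors per point (eight corners for trilinear sampling on a 3D grid).

```python
def attention_mix_flops(n, d):
    """Dense self-attention mixing: QK^T plus attention-weighted V, each ~n*n*d."""
    return 2 * n * n * d

def warp_mix_flops(n, d, neighbors=8):
    """Warp-based mixing: each point gathers a fixed set of interpolation
    neighbors, so cost stays linear in the number of grid points."""
    return n * d * neighbors

# A modest 3D grid: 64^3 points, 32 channels.
n, d = 64 ** 3, 32
ratio = attention_mix_flops(n, d) / warp_mix_flops(n, d)
```

The ratio grows linearly with the point count, which is why the quadratic term dominates quickly on volumetric grids even at modest resolutions.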

Claimed benchmark outcomes

  • On 16 datasets, mostly from The Well benchmark suite, Flower reportedly leads one-step prediction against similarly sized FNO, convolutional U-Net, and attention baselines.
  • For 20-step autoregressive rollouts, the reported gains persist on most tasks, with one difficult regime in which all models degrade.
  • A larger 150M-parameter variant is claimed to beat a much larger pretrained model (Poseidon, 628M) on a compressible Euler setting.

Limitations and community questions

The post also lists caveats: advantages can shrink in long rollouts, and there are stability issues under some conditions. Commenters asked the right next-step questions, including transfer to harder operational domains (for example weather-like scenarios) and behavior around discontinuities or shocks where smooth warps may be stressed.

Because this is primarily author-reported and described as pre-arXiv at posting time, independent replication remains important. Even so, the thread is technically valuable: it surfaces a credible systems-and-architecture alternative in scientific ML where efficiency and physical structure both matter.

Sources: r/MachineLearning post, paper link, code link




© 2026 Insights. All rights reserved.