MeanFlow and iMF Redefine Single-Step Generative Modeling

Published on January 06, 2026 | Translated from Spanish
[Figure: Architecture diagram comparing the original MeanFlow training flow with the new iMF formulation, showing the network that predicts the mean velocity and the in-context conditioning, over a background of high-quality generated images.]

The field of generative modeling seeks to create new high-quality data, and speed is a key factor. MeanFlow emerged as a promising framework for single-step generation, but its training objective presented stability obstacles. Now, a deep reformulation of its core has produced iMF, marking a significant milestone. 🚀

Reformulating the Objective to Stabilize Training

The main problem lay in how the model was trained. The original objective depended not only on real data, but also on the changing state of the neural network itself, which complicated optimization. The solution was to redefine this objective as a loss computed on the instantaneous velocity: the network that predicts the mean velocity of the flow is used to reparametrize the instantaneous velocity. This change turns the problem into a more conventional, direct regression, greatly stabilizing the training cycle (see the sketch after the list below).

Key advantages of the reformulation:
  • Converts a complex optimization problem into a standard, easier-to-handle regression.
  • The network that predicts the mean velocity acts as a stabilizing anchor during training.
  • Allows the model to converge more consistently with fewer fluctuations.
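
To make the idea concrete, here is a minimal sketch of the reparametrized objective. The function and network names (`imf_loss`, `u_net`), the linear-interpolation path, and the omission of stop-gradients and loss weighting are all assumptions for illustration, not the authors' code.

```python
import torch
from torch.func import jvp

def imf_loss(u_net, x, eps, r, t):
    # x: clean data, eps: Gaussian noise; r, t broadcastable against x,
    # e.g. shape (B, 1, 1, 1), with r <= t.
    z_t = (1 - t) * x + t * eps        # point on the linear interpolation path
    v_target = eps - x                 # instantaneous velocity of that path

    # MeanFlow identity: v(z_t, t) = u(z_t, r, t) + (t - r) * du/dt.
    # The total derivative du/dt along the path is a JVP with tangents
    # (dz/dt, dr/dt, dt/dt) = (v_target, 0, 1).
    u, du_dt = jvp(u_net, (z_t, r, t),
                   (v_target, torch.zeros_like(r), torch.ones_like(t)))

    v_pred = u + (t - r) * du_dt       # reparametrized instantaneous velocity
    return ((v_pred - v_target) ** 2).mean()
```

Because the regression target is known in closed form, the loss no longer chases the network's own moving predictions, which is exactly the stabilizing effect described above.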
"Sometimes, doing things faster doesn't mean skipping steps, but redefining the path from start to finish."

Making Conditional Guidance More Flexible for Generation

Another limitation of the initial method was its mechanism for guiding generation. Classifier-free guidance used a scale that was fixed during training, restricting adaptability when producing new samples. The new approach formulates the guidance as explicit conditioning variables, so diverse conditions can be applied at generation time with full flexibility. These conditions are processed through an in-context conditioning technique, which not only makes the model more versatile, but also reduces its overall size and improves its general performance (see the sketch after the list below).

Features of the new guidance system:
  • Conditions are explicit variables, not fixed parameters.
  • Uses in-context conditioning to efficiently process diverse information.
  • Achieves a more compact model with better performance.
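
As a rough illustration of what in-context conditioning can look like, the sketch below embeds the class label and the guidance scale as extra tokens prepended to the sequence. The class name and token layout are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

class InContextConditioner(nn.Module):
    """Embeds the class label and the guidance scale as extra tokens."""

    def __init__(self, dim, num_classes):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, dim)
        self.scale_emb = nn.Linear(1, dim)  # embeds the scalar guidance scale

    def forward(self, tokens, class_ids, guidance_scale):
        # tokens: (B, N, dim) patch tokens; class_ids: (B,); guidance_scale: (B,)
        c = self.class_emb(class_ids).unsqueeze(1)                     # (B, 1, dim)
        w = self.scale_emb(guidance_scale.unsqueeze(-1)).unsqueeze(1)  # (B, 1, dim)
        # Conditions ride along as ordinary tokens, so the backbone attends
        # to them in context instead of baking a fixed scale into training.
        return torch.cat([c, w, tokens], dim=1)                        # (B, N + 2, dim)
```

Because the scale enters as an ordinary input, any value can be passed at sampling time, e.g. `guidance_scale=torch.full((batch,), 2.5)`, instead of being frozen into the training target.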

iMF: A Result that Competes with Multi-Step Methods

The conjunction of these improvements results in iMF (Improved MeanFlow). This model was trained from scratch and, when evaluated on the ImageNet 256×256 dataset with a single function evaluation, achieved an FID score of 1.72. This result substantially surpasses previous single-step methods and, more notably, narrows the gap with generative approaches that require multiple steps or iterations. All of this is achieved without employing model distillation techniques, consolidating single-step generative modeling as an independent and powerful paradigm. 🎯
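
To see what "a single function evaluation" means in practice, here is a hedged sketch of one-step sampling with a mean-velocity network, assuming the convention that t = 1 is pure noise and t = 0 is data; the function name and argument shapes are illustrative.

```python
import torch

@torch.no_grad()
def sample_one_step(u_net, shape, device="cpu"):
    z1 = torch.randn(shape, device=device)    # pure noise at t = 1
    r = torch.zeros(shape[0], device=device)  # integrate all the way to t = 0
    t = torch.ones(shape[0], device=device)
    # The mean velocity over [0, 1] gives the entire displacement at once:
    # z0 = z1 - (t - r) * u(z1, r, t) = z1 - u(z1, 0, 1)
    return z1 - u_net(z1, r, t)
```

A multi-step sampler would loop this update over a sequence of (r, t) intervals; the single-evaluation case is what the FID of 1.72 above refers to.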