Tabular ML in production rarely lives in neat buckets. Most of the problems our users ship are continuous prediction problems: revenue, latency, risk, demand, price, time-to-failure, LTV, claim amount, and plenty more. In these settings, forcing the problem into classification (by binning targets) typically throws away information that downstream systems actually need: magnitude.
That’s why we’re announcing first-class regression support in TabTune.
With this update, TabTune makes regression as straightforward as deploying SOTA tabular foundation models. TabTune offers model-aware preprocessing, automatic target normalization, and consistent evaluation APIs so you can use foundation models for regression without building and maintaining bespoke pipelines.
Why Regression Matters
A lot of real-world decision systems don’t just need a label. They need a number they can optimize around.
Regression is the right formulation when:
- Magnitude matters: “high demand” isn’t enough; planning needs an actual number.
- Costs scale with error: being off by 2 is not the same as being off by 200.
- Thresholds change dynamically: classification thresholds drift with region, inventory, risk appetite, and business cycles.
- Downstream systems consume numbers: optimizers, pricing engines, simulators, and capacity planners generally require continuous inputs.
Yes, discretizing into bins can simplify metrics, but it usually comes with hidden taxes:
- loss of granularity,
- harder calibration,
- artificial decision boundaries that make systems brittle.
Regression preserves the natural structure of the problem and tends to produce outputs that are directly usable.
What’s New in This Release
1) Regression is a first-class task in TabTune
TabTune now treats regression as a core capability, not a workaround. You get:
- Model-aware target normalization (automatic)
- Predictions returned on the original target scale (auto denormalized)
- Unified interfaces that match the classification experience
2) Two execution paths: inference and fine-tuning
Regression in TabTune supports two paths depending on your operational needs:
- Inference mode: adopt quickly, avoid training loops, keep deployment simple.
- Fine-tuning mode: adapt to domain-specific distributions and squeeze out additional performance where it matters.
This release makes the choice explicit and keeps both paths inside the same pipeline shape.
3) Standard regression evaluation out of the box
TabTune evaluates regression with widely used, interpretable metrics:
- MSE
- RMSE
- MAE
- R²
We strongly recommend reporting MAE and RMSE together: RMSE surfaces tail risk and large misses; MAE is robust to outliers and easier to interpret.
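To make that recommendation concrete, here is a minimal sketch computing all four metrics with NumPy (the arrays are illustrative values, not output from a real model run):

```python
import numpy as np

# Illustrative values only, not from a real run
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)       # penalizes large misses quadratically
rmse = np.sqrt(mse)                         # same units as the target
mae = np.mean(np.abs(y_true - y_pred))      # robust to outliers
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                    # fraction of variance explained

print(f"MSE={mse:.3f} RMSE={rmse:.3f} MAE={mae:.3f} R2={r2:.3f}")
```

A gap between RMSE and MAE (here roughly 0.94 vs 0.75) is exactly the signal that a few large misses dominate the squared error.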
Regression Model Support in TabTune
Regression is supported across a subset of TabTune’s model zoo. The scalable ICL family (TabICL, OrionMSP, OrionBix) is classification-only today. Regression is supported by TabPFN-v2.5, TabDPT, ContextTab, Mitra, and Limix.
Current behavior: Regression runs in inference mode for supported models by default (no fine-tuning required). Target scaling is handled automatically, and predictions are returned on the original target scale.
Inference vs Fine-Tuning: Choosing the Right Path
Use inference when you want speed and simplicity
Inference mode is the default recommendation when:
- you want to validate value quickly,
- you want predictable ops without training pipelines,
- you’re deploying in constrained environments.
Inference is often “good enough” for many production regression problems, especially as a baseline you can ship.
Use fine-tuning when domain adaptation matters
Fine-tuning is the right choice when:
- your data distribution is domain-specific (industry features, non-standard missingness patterns),
- you have sufficient data and can afford a training loop,
- you want incremental gains that translate into real business impact.
In this update, we include a reference example for fine-tuning regression (ContextTab), so teams can adopt it when it’s worth the tradeoff.
Quickstart: Regression in TabTune (Inference & Fine-Tuning TFMs)
Install
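Assuming the library is published on PyPI under its project name (the package name here is an assumption; check the TabTune README for the authoritative install command):

```shell
pip install tabtune
```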
Load a regression dataset (example: California Housing)
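A minimal loading sketch using scikit-learn's California Housing fetcher (scikit-learn, plus a network connection for the first download, is assumed; any DataFrame with a continuous target works the same way):

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

# Continuous target: median house value, in units of $100k
data = fetch_california_housing(as_frame=True)
X, y = data.data, data.target

# Hold out a test split for evaluating predictions on the original scale
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)  # → (16512, 8) (4128, 8)
```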
Run Limix (inference)
Run TabDPT (inference)
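The two inference runs above share one shape. The sketch below is pseudocode for that shape only: the model names come from this post, but the class and method names are assumptions, not TabTune's documented API, so consult the TabTune docs for the real calls.

```
# Pseudocode — illustrative API shape, not TabTune's documented interface
pipeline = RegressionPipeline(model="Limix")   # or model="TabDPT"
pipeline.fit(X_train, y_train)                 # in-context: no gradient training loop
preds = pipeline.predict(X_test)               # returned on the original target scale
report = pipeline.evaluate(X_test, y_test)     # MSE / RMSE / MAE / R²
```

Swapping the model name is the only difference between the Limix and TabDPT runs; target normalization and denormalization happen inside the pipeline, as described earlier.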
Fine-tune ContextTab
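The fine-tuning path keeps the same pipeline shape with a training loop added. Again this is pseudocode with assumed names, not the documented TabTune API:

```
# Pseudocode — illustrative API shape, not TabTune's documented interface
pipeline = RegressionPipeline(model="ContextTab", mode="fine_tune")
pipeline.fit(X_train, y_train)    # gradient updates on your data; hyperparameters per the docs
preds = pipeline.predict(X_test)  # still auto-denormalized to the original target scale
```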
Practical Recommendations
- Start with TabPFN-v2.5 or TabDPT for regression. They’re the most reliable and stable options.
- Use ContextTab / Mitra / Limix when you’re intentionally experimenting and can tolerate variance across datasets.
- Report MAE + RMSE together, and treat RMSE as your “tail risk” alarm.
- If your target is heavily skewed or heavy-tailed, consider upstream transforms (e.g., log1p) and invert them post-prediction based on your pipeline requirements.
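For the skewed-target case, a minimal NumPy sketch of the transform-and-invert pattern (the target values are illustrative):

```python
import numpy as np

# Heavy-tailed target, e.g. claim amounts — illustrative values
y = np.array([120.0, 340.0, 95.0, 12500.0, 780.0])

# Train on the compressed scale...
y_log = np.log1p(y)            # log(1 + y): defined at 0, tames the tail

# ...and invert model outputs back to real units post-prediction
y_back = np.expm1(y_log)       # exact inverse of log1p

assert np.allclose(y_back, y)  # round-trip recovers the original units
```

`log1p`/`expm1` are numerically safer near zero than composing `log`/`exp` by hand; just remember that errors measured on the log scale and on the original scale answer different questions, so evaluate on whichever scale your downstream system pays for.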
Conclusion
Regression is where tabular ML earns its keep in production. Pricing, forecasting, capacity planning, risk, reliability, claims, LTV: these systems don’t run on labels. They run on continuous values, and they pay for error in real units, real budgets, and real outcomes.
With this release, TabTune ships first-class regression support for tabular foundation models, with two deliberate paths to production:
- Inference mode, when you want fast adoption and operational simplicity without managing training loops.
- Fine-tuning, when your domain data benefits from adaptation and you want to push performance beyond out-of-the-box behavior.
Across both, TabTune provides a consistent pipeline interface, automatic target scaling, and standard regression evaluation so teams can move from dataset to deployable continuous predictions without maintaining brittle, one-off regression plumbing.
If you already use TabTune for classification, regression is now a natural extension of the same workflow. If you’re evaluating tabular foundation models for continuous prediction, this is the most direct path to reliable numbers your downstream systems can actually use.





