
Now Shipping: TabTune Regression for Tabular Foundational Models


Tabular ML in production rarely lives in neat buckets. Most of the problems our users ship are continuous prediction problems: revenue, latency, risk, demand, price, time-to-failure, LTV, claim amount, and plenty more. In these settings, forcing the problem into classification (by binning targets) typically throws away information that downstream systems actually need: magnitude.

That’s why we’re announcing first-class regression support in TabTune.

With this update, deploying SOTA Tabular Foundational Models for regression is as straightforward as it is for classification. TabTune offers model-aware preprocessing, automatic target normalization, and consistent evaluation APIs so you can use foundation models for regression without building and maintaining bespoke pipelines.

Why Regression Matters

A lot of real-world decision systems don’t just need a label. They need a number they can optimize around.

Regression is the right formulation when:

  • Magnitude matters: “high demand” isn’t enough; planning needs an actual quantity.
  • Costs scale with error: being off by 2 is not the same as being off by 200.
  • Thresholds change dynamically: classification thresholds drift with region, inventory, risk appetite, and business cycles.
  • Downstream systems consume numbers: optimizers, pricing engines, simulators, and capacity planners generally require continuous inputs.

Yes, discretizing into bins can simplify metrics, but it usually comes with hidden taxes:

  • loss of granularity,
  • harder calibration,
  • artificial decision boundaries that make systems brittle.
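To make the granularity loss concrete, here is a minimal, self-contained sketch (the demand values and bin edges are invented for illustration) showing how binning collapses very different magnitudes into the same bucket:

```python
import pandas as pd

# Continuous demand values spanning very different magnitudes.
demand = pd.Series([12.0, 95.0, 103.0, 980.0])

# Discretize into two buckets: "low" (0-100] and "high" (100-1000].
binned = pd.cut(demand, bins=[0, 100, 1000], labels=["low", "high"])

# 103 and 980 land in the same "high" bucket, so a downstream
# planner can no longer tell a ~10x difference in demand apart.
print(binned.tolist())  # ['low', 'low', 'high', 'high']
```

Regressing on the raw values avoids this collapse entirely.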

Regression preserves the natural structure of the problem and tends to produce outputs that are directly usable.

What’s New in This Release

1) Regression is a first-class task in TabTune

TabTune now treats regression as a core capability, not a workaround. You get:

  • Model-aware target normalization (automatic)
  • Predictions returned on the original target scale (auto denormalized)
  • Unified interfaces that match the classification experience
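TabTune performs this target scaling internally; the sketch below is only a conceptual illustration of what "normalize for the model, denormalize for the caller" means, using scikit-learn's StandardScaler as a stand-in rather than TabTune's actual internals:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Raw regression targets on their natural scale.
y_train = np.array([[120.0], [340.0], [95.0], [610.0]])

scaler = StandardScaler()
y_scaled = scaler.fit_transform(y_train)   # the model trains on normalized targets

# ... model produces predictions on the normalized scale ...
y_pred_scaled = y_scaled[:2]               # stand-in for model outputs

# Predictions are inverse-transformed back to the original target scale
# before being returned to the caller.
y_pred = scaler.inverse_transform(y_pred_scaled)
```

The round trip is lossless: inverse-transforming the scaled targets recovers the original values exactly, which is why you never have to think about the normalized scale.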

2) Two execution paths: inference and fine-tuning

Regression in TabTune supports two paths depending on your operational needs:

  • Inference mode: adopt quickly, avoid training loops, keep deployment simple.
  • Fine-tuning mode: adapt to domain-specific distributions and squeeze out additional performance where it matters.

This release makes the choice explicit and keeps both paths inside the same pipeline shape.

3) Standard regression evaluation out of the box

TabTune evaluates regression with widely used, interpretable metrics:

  • MSE
  • RMSE
  • MAE

We strongly recommend reporting MAE + RMSE together: RMSE surfaces tail risk and large misses; MAE stays robust and easier to interpret.
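If you want to sanity-check TabTune's numbers or report them alongside other models, the same metrics are easy to reproduce with scikit-learn (the values below are made up for illustration):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 260.0])

mse = mean_squared_error(y_true, y_pred)   # mean of squared errors
rmse = np.sqrt(mse)                        # back on the target's units
mae = mean_absolute_error(y_true, y_pred)  # mean of absolute errors

# Errors are 10, 10, 40: the single 40-unit miss pulls RMSE (~24.5)
# well above MAE (20.0), which is exactly why RMSE surfaces tail risk.
print(f"MSE={mse:.1f}  RMSE={rmse:.2f}  MAE={mae:.1f}")
```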

Regression Model Support in TabTune

Regression is supported across a subset of TabTune’s model zoo. The scalable ICL family (TabICL, OrionMSP, OrionBix) is classification-only today. Regression is supported by the models below:

Model        | Support level | Notes (what to expect)
TabPFN-v2.5  | Stable        | Strong general-purpose regression support.
TabDPT       | Stable        | Production-ready regression behavior across diverse datasets.
ContextTab   | Experimental  | Works, but can be dataset-sensitive.
Mitra        | Experimental  | Available, but less reliable across tasks.
Limix        | Experimental  | Inference-style probabilistic setup; performance may vary by dataset.

Current behavior: Regression runs in inference mode for supported models by default (no fine-tuning required). Target scaling is handled automatically, and predictions are returned on the original target scale.

Inference vs Fine-Tuning: Choosing the Right Path

Use inference when you want speed and simplicity

Inference mode is the default recommendation when:

  • you want to validate value quickly,
  • you want predictable ops without training pipelines,
  • you’re deploying in constrained environments.

Inference is often “good enough” for many production regression problems, especially as a baseline you can ship.

Use fine-tuning when domain adaptation matters

Fine-tuning is the right choice when:

  • your data distribution is domain-specific (industry features, non-standard missingness patterns),
  • you have sufficient data and can afford a training loop,
  • you want incremental gains that translate into real business impact.

In this update, we include a reference example for fine-tuning regression (ContextTab), so teams can adopt it when it’s worth the tradeoff.

Quickstart: Regression in TabTune (Inference & Fine-Tuning TFMs)

Install
</> Bash
pip install tabtune
Load a regression dataset (example: California Housing)
</> Python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

data = fetch_california_housing(as_frame=True)
df = data.frame.copy()

target_col = "MedHouseVal"
X = df.drop(columns=[target_col])
y = df[target_col]

# Hold out 15% of the data for the final test set.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, random_state=42
)
# 0.1765 of the remaining 85% is ~15% of the full dataset,
# giving a roughly 70/15/15 train/val/test split.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.1765, random_state=42
)
Run Limix (inference)
</> Python
from tabtune import TabularPipeline

limix_pipe = TabularPipeline(
    model_name="Limix",
    task_type="regression",
    tuning_strategy="inference",
    tuning_params={"seed": 42},  # add "device": "cuda" if available
)
limix_pipe.fit(X_train, y_train)
limix_val = limix_pipe.evaluate(X_val, y_val)
limix_test = limix_pipe.evaluate(X_test, y_test)
Run TabDPT (inference)
</> Python
from tabtune import TabularPipeline

tabdpt_pipe = TabularPipeline(
    model_name="TabDPT",
    task_type="regression",
    tuning_strategy="inference",
    tuning_params={"seed": 42},
)
tabdpt_pipe.fit(X_train, y_train)
tabdpt_val = tabdpt_pipe.evaluate(X_val, y_val)
tabdpt_test = tabdpt_pipe.evaluate(X_test, y_test)
Fine-tune ContextTab
</> Python · Fine-tuning
from tabtune import TabularPipeline

tuning_params = {
    "seed": 42,
    "epochs": 2,
    "learning_rate": 2e-5,
    "batch_size": 8,
    # "device": "cuda",  # recommended if available
}

contexttab_finetuned = TabularPipeline(
    model_name="ContextTab",
    task_type="regression",
    tuning_strategy="finetune",
    finetune_mode="tbt",
    tuning_params=tuning_params,
)
contexttab_finetuned.fit(X_train, y_train)
ft_val = contexttab_finetuned.evaluate(X_val, y_val)
ft_test = contexttab_finetuned.evaluate(X_test, y_test)

Practical Recommendations

  • Start with TabPFN-v2.5 or TabDPT for regression. They’re the most reliable stable options.
  • Use ContextTab / Mitra / Limix when you’re intentionally experimenting and can tolerate variance across datasets.
  • Report MAE + RMSE together, and treat RMSE as your “tail risk” alarm.
  • If your target is heavily skewed or heavy-tailed, consider upstream transforms (e.g., log1p) and invert post-prediction based on your pipeline requirements.
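The log1p recommendation above can be sketched in a few lines; the claim-amount values here are invented for illustration, and where you apply the transform depends on your pipeline:

```python
import numpy as np

# Heavy-tailed target (e.g., claim amounts spanning several orders of magnitude).
y = np.array([10.0, 50.0, 500.0, 20000.0])

# Compress the scale upstream, before fitting...
y_log = np.log1p(y)

# ...then invert model outputs back to the original units post-prediction.
# expm1 is the exact inverse of log1p, so the round trip is lossless.
y_back = np.expm1(y_log)
```

The same pattern can also be wrapped declaratively, e.g. with scikit-learn's TransformedTargetRegressor, if you prefer the transform to live inside the estimator.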

Conclusion

Regression is where tabular ML earns its keep in production. Pricing, forecasting, capacity planning, risk, reliability, claims, LTV: these systems don’t run on labels. They run on continuous values, and they pay for error in real units, real budgets, and real outcomes.

With this release, TabTune ships first-class regression support for tabular foundation models, with two deliberate paths to production:

  • Inference mode, when you want fast adoption and operational simplicity without managing training loops.
  • Fine-tuning, when your domain data benefits from adaptation and you want to push performance beyond out-of-the-box behavior.

Across both, TabTune provides a consistent pipeline interface, automatic target scaling, and standard regression evaluation so teams can move from dataset to deployable continuous predictions without maintaining brittle, one-off regression plumbing.

If you already use TabTune for classification, regression is now a natural extension of the same workflow. If you’re evaluating tabular foundation models for continuous prediction, this is the most direct path to reliable numbers your downstream systems can actually use.

Aditya Tanna
Research Scientist