🚀 TabPFNMix Regressor
TabPFNMix Regressor is a tabular foundation model pretrained purely on synthetic datasets sampled from a mix of random regressors. It is designed for regression problems on tabular data.
🚀 Quick Start
To use the TabPFNMix Regressor, first install AutoGluon by running:
pip install autogluon
✨ Key Features
TabPFNMix is based on a 12-layer encoder-decoder Transformer with 37 million parameters. It uses a pretraining strategy that incorporates in-context learning, similar to the strategies used by TabPFN and TabForestPFN.
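Because the model is pretrained with in-context learning, it can in principle be used with little or no fine-tuning. The snippet below is a hypothetical configuration sketch, assuming that setting max_epochs to 0 in the TABPFNMIX hyperparameters (the same keys used in the example further below) skips fine-tuning and relies purely on in-context inference; verify this behavior against the AutoGluon documentation before depending on it.

# Hypothetical zero-shot configuration (assumption: max_epochs=0 disables fine-tuning,
# so the pretrained transformer is used purely via in-context learning).
tabpfnmix_zero_shot = {
    "model_path_regressor": "autogluon/tabpfn-mix-1.0-regressor",
    "n_ensembles": 1,
    "max_epochs": 0,
}
hyperparameters = {"TABPFNMIX": [tabpfnmix_zero_shot]}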
💻 Usage Examples
Basic Usage
Below is a minimal example of fine-tuning and running inference with the TabPFNMix Regressor:
import pandas as pd
from autogluon.tabular import TabularPredictor

if __name__ == '__main__':
    # Load the training data and optionally subsample it to keep fine-tuning fast.
    train_data = pd.read_csv('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')
    subsample_size = 5000
    if subsample_size is not None and subsample_size < len(train_data):
        train_data = train_data.sample(n=subsample_size, random_state=0)
    test_data = pd.read_csv('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')

    # Default TabPFNMix configuration: pretrained weights from Hugging Face,
    # a single ensemble member, and 30 fine-tuning epochs.
    tabpfnmix_default = {
        "model_path_classifier": "autogluon/tabpfn-mix-1.0-classifier",
        "model_path_regressor": "autogluon/tabpfn-mix-1.0-regressor",
        "n_ensembles": 1,
        "max_epochs": 30,
    }

    hyperparameters = {
        "TABPFNMIX": [
            tabpfnmix_default,
        ],
    }

    # Predict the "age" column as a regression target.
    label = "age"
    problem_type = "regression"

    predictor = TabularPredictor(
        label=label,
        problem_type=problem_type,
    )
    predictor = predictor.fit(
        train_data=train_data,
        hyperparameters=hyperparameters,
        verbosity=3,
    )

    predictor.leaderboard(test_data, display=True)
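After fitting, predictions on new data follow the standard TabularPredictor API. A minimal follow-up sketch, reusing predictor, test_data, and label from the example above (appended inside the same __main__ block):

    # Predict ages for the held-out rows (drop the label so the frame looks like unseen data).
    y_pred = predictor.predict(test_data.drop(columns=[label]))
    print(y_pred.head())

    # Evaluate regression metrics (e.g. RMSE) against the true labels in test_data.
    print(predictor.evaluate(test_data))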
📚 Citation
If you find TabPFNMix useful in your research, please consider citing the related papers:
@article{erickson2020autogluon,
  title={AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data},
  author={Erickson, Nick and Mueller, Jonas and Shirkov, Alexander and Zhang, Hang and Larroy, Pedro and Li, Mu and Smola, Alexander},
  journal={arXiv preprint arXiv:2003.06505},
  year={2020}
}

@article{hollmann2022tabpfn,
  title={TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second},
  author={Hollmann, Noah and M{\"u}ller, Samuel and Eggensperger, Katharina and Hutter, Frank},
  journal={arXiv preprint arXiv:2207.01848},
  year={2022}
}

@article{breejen2024context,
  title={Why In-Context Learning Transformers are Tabular Data Classifiers},
  author={Breejen, Felix den and Bae, Sangmin and Cha, Stephen and Yun, Se-Young},
  journal={arXiv preprint arXiv:2405.13396},
  year={2024}
}
📄 License
This project is licensed under the Apache-2.0 License.