# 📌 MOMENT-Large
MOMENT is a family of foundation models designed for general-purpose time-series analysis. These models can serve as a cornerstone for various time-series analysis tasks such as forecasting, classification, anomaly detection, and imputation. They are effective right out of the box, requiring little to no task-specific exemplars, enabling zero-shot forecasting and few-shot classification. Moreover, they can be tuned using in-distribution and task-specific data to enhance performance.
For detailed information on MOMENT models, training data, and experimental results, please refer to the paper [MOMENT: A Family of Open Time-series Foundation Models](https://arxiv.org/abs/2402.03885).
MOMENT-1 comes in three sizes: [Small](https://huggingface.co/AutonLab/MOMENT-1-small), [Base](https://huggingface.co/AutonLab/MOMENT-1-base), and [Large](https://huggingface.co/AutonLab/MOMENT-1-large).
## 🚀 Quick Start
### ✨ Features
- Versatile Application: Suitable for a wide range of time-series analysis tasks.
- Out-of-the-Box Effectiveness: Requires few or no task-specific exemplars.
- Tunable: Can be tuned with in-distribution and task-specific data.
### 📦 Installation
Recommended Python Version: Python 3.11 (support for additional versions is expected soon).
You can install the `momentfm` package using pip:

```bash
pip install momentfm
```
Alternatively, to install the latest version directly from the GitHub repository:
```bash
pip install git+https://github.com/moment-timeseries-foundation-model/moment.git
```
## 💻 Usage Examples

### Basic Usage

To load the pre-trained model for different tasks, use the following code snippets:

#### Forecasting
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={
        'task_name': 'forecasting',
        'forecast_horizon': 96
    },
)
model.init()
```
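Once initialized, the pipeline can be called on a batch of series. The following is a minimal inference sketch; the input shape `[batch_size, n_channels, 512]`, the optional `input_mask`, and the `output.forecast` field follow the repository tutorials and are assumptions as far as this card goes:

```python
import torch

# MOMENT operates on fixed-length windows of 512 time steps:
# x_enc has shape [batch_size, n_channels, seq_len].
x_enc = torch.randn(16, 1, 512)
# input_mask marks observed time steps (1 = observed, 0 = padding).
input_mask = torch.ones(16, 512)

output = model(x_enc=x_enc, input_mask=input_mask)
forecast = output.forecast  # [batch_size, n_channels, forecast_horizon]
```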
#### Classification

```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={
        'task_name': 'classification',
        'n_channels': 1,
        'num_class': 2
    },
)
model.init()
```
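Inference then mirrors the forecasting case; this sketch assumes, per the tutorials, that per-class scores are exposed as `output.logits`:

```python
import torch

# A batch of univariate series (n_channels=1, matching the config above).
x_enc = torch.randn(16, 1, 512)

output = model(x_enc=x_enc)
logits = output.logits               # [batch_size, num_class]
predictions = logits.argmax(dim=-1)  # predicted class per series
```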
#### Anomaly Detection, Imputation, and Pre-training
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={"task_name": "reconstruction"},
)
model.init()
```
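In the reconstruction setting the model returns a reconstructed series, which the tutorials expose as `output.reconstruction`; for imputation they additionally pass a `mask` marking the values to be filled in. A hedged sketch of anomaly scoring via pointwise reconstruction error:

```python
import torch

x_enc = torch.randn(16, 1, 512)

output = model(x_enc=x_enc)

# Time steps the model reconstructs poorly are candidate anomalies.
anomaly_scores = (x_enc - output.reconstruction) ** 2
```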
#### Representation Learning

```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={'task_name': 'embedding'},
)
model.init()
```
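The embedding pipeline yields one representation per series. This sketch assumes (again following the tutorials) an `output.embeddings` field of width `d_model`, which is 1024 for MOMENT-1-large:

```python
import torch

x_enc = torch.randn(16, 1, 512)

output = model(x_enc=x_enc)
embeddings = output.embeddings  # [batch_size, 1024] for MOMENT-1-large
```

These embeddings can then be fed to downstream models, e.g. a linear probe or a clustering algorithm.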
### Advanced Usage

Here is a list of tutorials and reproducible experiments to get started with MOMENT on various tasks:

- [Forecasting](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/forecasting.ipynb)
- [Classification](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/classification.ipynb)
- [Anomaly Detection](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/anomaly_detection.ipynb)
- [Imputation](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/imputation.ipynb)
- [Representation Learning](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/representation_learning.ipynb)
- [Real-world Electrocardiogram (ECG) Case Study](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/ptbxl_classification.ipynb) -- This tutorial also shows how to fine-tune MOMENT for a real-world ECG classification problem, covering training and inference on multiple GPUs and parameter-efficient fine-tuning (PEFT).
## 📖 Documentation

### Model Details

#### Model Description

#### Model Sources
- Repository: https://github.com/moment-timeseries-foundation-model/ (Pre-training and research code coming out soon!)
- Paper: https://arxiv.org/abs/2402.03885
- Demo: https://github.com/moment-timeseries-foundation-model/moment/tree/main/tutorials
### Environmental Impact
We train multiple models over many days, resulting in significant energy usage and a sizeable carbon footprint. However, we hope that releasing our models will ensure that future time-series modeling efforts are quicker and more efficient, resulting in lower carbon emissions.

We use the Total Graphics Power (TGP) to calculate the total power consumed for training MOMENT models, although the total power consumed by the GPU will likely vary a little based on GPU utilization while training our model. Our calculations do not account for power demands from other sources of our compute. We use 336.566 kg CO2/MWh as the standard value of CO2 emissions per megawatt-hour of energy consumed in Pittsburgh.
- Hardware Type: NVIDIA RTX A6000 GPU
- GPU Hours: 404
- Compute Region: Pittsburgh, USA
- Carbon Emission (tCO2eq):
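As an illustration of the methodology described above (not a figure reported in the paper), a back-of-the-envelope estimate assuming the RTX A6000's rated 300 W TGP:

```python
# Back-of-the-envelope carbon estimate from the figures above.
# ASSUMPTION: 300 W is the RTX A6000's rated TGP; actual draw varies with utilization.
tgp_kw = 0.300            # Total Graphics Power, in kilowatts
gpu_hours = 404           # reported GPU hours
kg_co2_per_mwh = 336.566  # emission factor for Pittsburgh

energy_mwh = tgp_kw * gpu_hours / 1000               # ~0.121 MWh
emissions_tco2 = energy_mwh * kg_co2_per_mwh / 1000  # kg -> tonnes
print(f"{emissions_tco2:.3f} tCO2eq")                # ~0.041 tCO2eq under these assumptions
```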
### Hardware

All models were trained and evaluated on a computing cluster consisting of 128 AMD EPYC 7502 CPUs, 503 GB of RAM, and 8 NVIDIA RTX A6000 GPUs, each with 49 GiB of RAM. All MOMENT variants were trained on a single A6000 GPU (without any data or model parallelism).
## 📜 License
This project is licensed under the MIT License.
## 📝 Citation

If you use MOMENT, please cite our paper:

BibTeX:
```bibtex
@inproceedings{goswami2024moment,
  title={MOMENT: A Family of Open Time-series Foundation Models},
  author={Mononito Goswami and Konrad Szafer and Arjun Choudhry and Yifu Cai and Shuo Li and Artur Dubrawski},
  booktitle={International Conference on Machine Learning},
  year={2024}
}
```
APA:

Goswami, M., Szafer, K., Choudhry, A., Cai, Y., Li, S., & Dubrawski, A. (2024). MOMENT: A Family of Open Time-series Foundation Models. In International Conference on Machine Learning. PMLR.