# IBM Time Series Analysis Suite
This project provides a time-series forecasting service built on the `ibm-granite/granite-timeseries-ttm-r2` model and served with LitServe. See the model card on Hugging Face for more details on the model.
## Features

- Exposes a simple REST API for time-series forecasting.
- Uses the `ibm-granite/granite-timeseries-ttm-r2` model for predictions.
- Built on the high-performance LitServe framework for serving machine learning models.
- Asynchronous API for efficient handling of requests.
- Request and response validation using Pydantic.
## Prerequisites

- Python 3.11+
- uv (recommended for environment management)

The required Python packages are listed in `pyproject.toml`.
## Installation

- Clone the repository:

  ```sh
  git clone <your-repository-url>
  cd ais-time-series
  ```
- Create a virtual environment and install the dependencies:

  Using uv (recommended):

  ```sh
  uv venv
  source .venv/bin/activate
  uv sync
  ```

  Using pip and venv:

  ```sh
  python -m venv .venv
  source .venv/bin/activate
  pip install -e .
  ```
## Running the Server

To start the forecasting server, run the following command from the project's root directory:

```sh
python main.py
```

The server will start on http://127.0.0.1:8081.
## Docker

Build and run the Docker container locally:

```sh
docker build -t ais-time-series .
docker run -p 8081:8081 ais-time-series
```

## Example Client

The example client is based on the community notebook Time_Series_Getting_Started. In a separate terminal, run it to send a sample request to the running server:

```sh
python client.py
```

The server is designed to be robust, handling data cleaning (forward/backward fill for missing values) and validation internally. You can send a POST request to the default /predict endpoint to get a forecast.
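The forward/backward fill mentioned above can be illustrated with a small stdlib sketch. The `fill_gaps` helper below is illustrative only, not part of the service's code: forward fill carries the last observed value over gaps, and backward fill then covers any leading gaps.

```python
from typing import Optional

def fill_gaps(values: list[Optional[float]]) -> list[Optional[float]]:
    """Forward-fill missing values, then backward-fill any leading gaps."""
    out: list[Optional[float]] = list(values)
    # Forward fill: carry the last observed value across None gaps.
    last: Optional[float] = None
    for i, v in enumerate(out):
        if v is None:
            out[i] = last
        else:
            last = v
    # Backward fill: leading Nones take the first observed value.
    nxt: Optional[float] = None
    for i in range(len(out) - 1, -1, -1):
        if out[i] is None:
            out[i] = nxt
        else:
            nxt = out[i]
    return out

print(fill_gaps([None, 1.0, None, 3.0, None]))  # → [1.0, 1.0, 1.0, 3.0, 3.0]
```

A series that is entirely missing would come back unchanged (all `None`), which is why the server also validates the request rather than relying on filling alone.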
## API Reference

- Method: `POST`
- URL: `http://127.0.0.1:8081/predict`

The request body must be a JSON object with the following structure:
```json
{
  "data": {
    "timestamp": ["2023-01-01T00:00:00", "..."],
    "value1": [10.1, "..."],
    "value2": [20.5, "..."]
  },
  "timestamp_col": "timestamp",
  "target_cols": ["value1", "value2"],
  "freq": "h",
  "context_length": 512,
  "prediction_length": 96
}
```

- `data`: A dictionary containing lists of equal length. One list must correspond to the `timestamp_col`.
- `timestamp_col`: The name of the key in `data` that holds the timestamps.
- `target_cols`: A list of keys in `data` to be forecasted.
- `freq`: The frequency of the time-series data, expressed as a pandas frequency string (e.g., `"h"` for hourly, `"D"` for daily). Defaults to `"h"`.
- `context_length`: The number of past time steps to use as input for the model. The number of entries in your data must be at least this large.
- `prediction_length`: The number of future time steps to forecast (the forecast horizon).
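A small Python helper can assemble a request body and sanity-check it against the constraints above (equal-length columns, known column names, and enough history for `context_length`). The `build_payload` function is a sketch for illustration, not part of the repository:

```python
import json

def build_payload(data: dict[str, list], timestamp_col: str,
                  target_cols: list[str], freq: str = "h",
                  context_length: int = 512, prediction_length: int = 96) -> str:
    """Validate the columns and return the request body as a JSON string."""
    lengths = {len(col) for col in data.values()}
    if len(lengths) != 1:
        raise ValueError("all lists in 'data' must have equal length")
    if timestamp_col not in data:
        raise ValueError(f"'{timestamp_col}' is missing from 'data'")
    missing = [c for c in target_cols if c not in data]
    if missing:
        raise ValueError(f"target columns missing from 'data': {missing}")
    if lengths.pop() < context_length:
        raise ValueError("'data' must contain at least 'context_length' entries")
    return json.dumps({
        "data": data,
        "timestamp_col": timestamp_col,
        "target_cols": target_cols,
        "freq": freq,
        "context_length": context_length,
        "prediction_length": prediction_length,
    })
```

The resulting string can be sent as the body of the POST request, for example with `curl -d @payload.json` or any HTTP client.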
Here is an example of how to send a request using `curl`. Note that `context_length` is set to 8, so we provide 10 data points.
```sh
curl -X POST http://127.0.0.1:8081/predict \
  -H "Content-Type: application/json" \
  -d '{
    "data": {
      "timestamp": [
        "2023-01-01T00:00:00", "2023-01-01T01:00:00", "2023-01-01T02:00:00",
        "2023-01-01T03:00:00", "2023-01-01T04:00:00", "2023-01-01T05:00:00",
        "2023-01-01T06:00:00", "2023-01-01T07:00:00", "2023-01-01T08:00:00",
        "2023-01-01T09:00:00"
      ],
      "value1": [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
    },
    "timestamp_col": "timestamp",
    "target_cols": ["value1"],
    "freq": "h",
    "context_length": 8,
    "prediction_length": 2
  }'
```

The API will return a JSON object containing the forecast.
```json
{
  "prediction": [
    {
      "timestamp": "2023-01-01T10:00:00",
      "value1": 20.123
    },
    {
      "timestamp": "2023-01-01T11:00:00",
      "value1": 21.456
    }
  ]
}
```

(Note: the prediction values above are illustrative and not the actual model output.)
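As a dependency-free alternative to `client.py`, the same request can be sent with Python's standard library alone. This is a sketch against the endpoint and schema documented above; the `predict` function name is illustrative:

```python
import json
import urllib.request

def predict(payload: dict, url: str = "http://127.0.0.1:8081/predict") -> list[dict]:
    """POST a request body to the /predict endpoint and return the forecast rows."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prediction"]

if __name__ == "__main__":
    # Same sample series as the curl example: 10 hourly points, context_length 8.
    payload = {
        "data": {
            "timestamp": [f"2023-01-01T{h:02d}:00:00" for h in range(10)],
            "value1": [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
        },
        "timestamp_col": "timestamp",
        "target_cols": ["value1"],
        "freq": "h",
        "context_length": 8,
        "prediction_length": 2,
    }
    for row in predict(payload):
        print(row["timestamp"], row["value1"])
```

Each returned row pairs a forecast timestamp with one value per target column, matching the response shape shown above.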
## Deployment

This application is designed to be deployed in a containerized environment, such as OpenShift or Kubernetes.

The application can be deployed using the Helm chart located in the `charts/` directory.
```sh
helm upgrade --install ais-time-series ./charts/ais-time-series -n <namespace>
```

To deploy with helmfile, apply `helmfile.yaml`:

```sh
helmfile -f charts/helmfile.yaml apply
```

To deploy with ArgoCD, apply the `application.yaml` manifest:

```sh
oc apply -f charts/argocd/application.yaml
```