View Jupyter notebook on GitHub.

Embedding models


This notebook contains examples of working with embedding models.

Table of contents

  • Using embedding models directly

  • Using embedding models with transforms

    • Baseline

    • EmbeddingSegmentTransform

    • EmbeddingWindowTransform

  • Saving and loading models

  • Loading external pretrained models

[1]:
import warnings

warnings.filterwarnings("ignore")

1. Using embedding models directly

We have two models to generate embeddings for time series: TS2VecEmbeddingModel and TSTCCEmbeddingModel.

Each model has the following methods (a short workflow sketch follows the list):

  • fit to train the model.

  • encode_segment to generate embeddings for the whole series. These features are regressors.

  • encode_window to generate embeddings for each timestamp. These features aren’t regressors, and a lag transformation should be applied to them before they are used in forecasting.

  • freeze to enable or disable skipping training in the fit method. This is useful, for example, when you have a pretrained model and only want to generate embeddings, without retraining during backtest.

  • save and load to save and load pretrained models, respectively.
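
Putting these together, here is a minimal sketch of the typical workflow. It mirrors the cells below; the input data and file name are illustrative, and the same calls work for TSTCCEmbeddingModel:

import numpy as np

from etna.transforms.embeddings.models import TS2VecEmbeddingModel

# Toy input with shape (n_segments, n_timestamps, input_dims)
x = np.random.randn(3, 10, 1)

model = TS2VecEmbeddingModel(input_dims=1, output_dims=2)
model.fit(x, n_epochs=1)

segment_embeddings = model.encode_segment(x)  # (n_segments, output_dims), regressors
window_embeddings = model.encode_window(x)  # (n_segments, n_timestamps, output_dims), lag before use

model.freeze()  # later calls to fit will skip training
model.save("pretrained_model.zip")
loaded_model = TS2VecEmbeddingModel.load("pretrained_model.zip")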

[2]:
from lightning.pytorch import seed_everything

seed_everything(42, workers=True)
Seed set to 42
[2]:
42
[3]:
from etna.datasets import TSDataset
from etna.datasets import generate_ar_df

df = generate_ar_df(periods=10, start_time="2001-01-01", n_segments=3)
ts = TSDataset(df, freq="D")
ts.head()
[3]:
segment segment_0 segment_1 segment_2
feature target target target
timestamp
2001-01-01 1.624345 1.462108 -1.100619
2001-01-02 1.012589 -0.598033 0.044105
2001-01-03 0.484417 -0.920450 0.945695
2001-01-04 -0.588551 -1.304504 1.448190
2001-01-05 0.276856 -0.170735 2.349046

Now let’s work with models directly.

They expect an array of shape (n_segments, n_timestamps, num_features). The example below uses TS2VecEmbeddingModel; everything works the same way with TSTCCEmbeddingModel.

[4]:
x = ts.df.values.reshape(ts.size()).transpose(1, 0, 2)
x.shape
[4]:
(3, 10, 1)
[5]:
from etna.transforms.embeddings.models import TS2VecEmbeddingModel
from etna.transforms.embeddings.models import TSTCCEmbeddingModel

model_ts2vec = TS2VecEmbeddingModel(input_dims=1, output_dims=2)
model_ts2vec.fit(x, n_epochs=1)
segment_embeddings = model_ts2vec.encode_segment(x)
segment_embeddings.shape
[5]:
(3, 2)

As we are using encode_segment, we get output_dims features, each consisting of one value per segment.

And what about encode_window?

[6]:
window_embeddings = model_ts2vec.encode_window(x)
window_embeddings.shape
[6]:
(3, 10, 2)

We get output_dims features, each consisting of n_timestamps values per segment.

You can change some attributes of the model after initialization, for example device, batch_size or num_workers.

[7]:
model_ts2vec.device = "cuda"
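
The other attributes mentioned above can be set the same way. A small sketch, assuming batch_size and num_workers are plain attributes of the model, as the text above suggests:

model_ts2vec.batch_size = 32
model_ts2vec.num_workers = 2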

2. Using embedding models with transforms

In this section we will test our models on a real dataset.

[8]:
HORIZON = 6

2.1 Baseline

Before working with embedding features, let’s make forecasts using ordinary lag features.

[9]:
from etna.datasets import load_dataset

ts = load_dataset("m3_monthly")
ts = TSDataset(ts.to_pandas(features=["target"]), freq=None)
ts.head()
[9]:
segment M1000_MACRO M1001_MACRO M1002_MACRO M1003_MACRO M1004_MACRO M1005_MACRO M1006_MACRO M1007_MACRO M1008_MACRO M1009_MACRO ... M992_MACRO M993_MACRO M994_MACRO M995_MACRO M996_MACRO M997_MACRO M998_MACRO M999_MACRO M99_MICRO M9_MICRO
feature target target target target target target target target target target ... target target target target target target target target target target
timestamp
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN

5 rows × 1428 columns

[10]:
from etna.metrics import SMAPE
from etna.models import CatBoostMultiSegmentModel
from etna.pipeline import Pipeline
from etna.transforms import LagTransform

model = CatBoostMultiSegmentModel()

lag_transform = LagTransform(in_column="target", lags=list(range(HORIZON, HORIZON + 6)), out_column="lag")

pipeline = Pipeline(model=model, transforms=[lag_transform], horizon=HORIZON)
metrics_df, _, _ = pipeline.backtest(ts, metrics=[SMAPE()], n_folds=3)
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    5.0s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:   10.3s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:   15.5s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:   15.5s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.8s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:    1.7s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    2.5s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    2.5s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[11]:
print("SMAPE: ", metrics_df["SMAPE"].mean())
SMAPE:  14.719683971886594

2.2 EmbeddingSegmentTransform

EmbeddingSegmentTransform calls the model’s encode_segment method internally.

[12]:
from etna.transforms import EmbeddingSegmentTransform
from etna.transforms.embeddings.models import BaseEmbeddingModel


def forecast_with_segment_embeddings(
    emb_model: BaseEmbeddingModel, training_params: dict = {}, n_folds: int = 3
) -> float:
    model = CatBoostMultiSegmentModel()

    emb_transform = EmbeddingSegmentTransform(
        in_columns=["target"], embedding_model=emb_model, training_params=training_params, out_column="emb"
    )
    pipeline = Pipeline(model=model, transforms=[lag_transform, emb_transform], horizon=HORIZON)
    metrics_df, _, _ = pipeline.backtest(ts, metrics=[SMAPE()], n_folds=n_folds)
    smape_score = metrics_df["SMAPE"].mean()
    return smape_score

You can inspect the model’s training parameters to decide what to pass to the transform.

Let’s begin with TSTCCEmbeddingModel.

[13]:
?TSTCCEmbeddingModel.fit
Signature:
TSTCCEmbeddingModel.fit(
    self,
    x: numpy.ndarray,
    n_epochs: int = 40,
    lr: float = 0.001,
    temperature: float = 0.2,
    lambda1: float = 1,
    lambda2: float = 0.7,
    verbose: bool = False,
) -> 'TSTCCEmbeddingModel'
Docstring:
Fit TSTCC embedding model.

Parameters
----------
x:
    data with shapes (n_segments, n_timestamps, input_dims).
n_epochs:
    The number of epochs. When this reaches, the training stops.
lr:
    The learning rate.
temperature:
    Temperature in NTXentLoss.
lambda1:
    The relative weight of the first item in the loss (temporal contrasting loss).
lambda2:
    The relative weight of the second item in the loss (contextual contrasting loss).
verbose:
    Whether to print the training loss after each epoch.
File:      ~/PycharmProjects/etna/etna/transforms/embeddings/models/tstcc.py
Type:      function
[14]:
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

emb_model = TSTCCEmbeddingModel(input_dims=1, tc_hidden_dim=16, depth=3, output_dims=6, device=device)
training_params = {"n_epochs": 10}
smape_score = forecast_with_segment_embeddings(emb_model, training_params)
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:  1.0min
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:  2.2min
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:  3.3min
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:  3.3min
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    4.4s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:    8.6s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:   13.0s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:   13.0s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[15]:
print("SMAPE: ", smape_score)
SMAPE:  14.039866723276173

Better than without embeddings. Let’s try TS2VecEmbeddingModel.

[16]:
emb_model = TS2VecEmbeddingModel(input_dims=1, hidden_dims=16, depth=3, output_dims=6, device=device)
training_params = {"n_epochs": 10}
smape_score = forecast_with_segment_embeddings(emb_model, training_params)
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:   38.8s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:  1.3min
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:  1.9min
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:  1.9min
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    3.1s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:    6.1s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    9.1s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    9.1s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[17]:
print("SMAPE: ", smape_score)
SMAPE:  13.66458975722417

Much better. Now let’s try another transform.

2.3 EmbeddingWindowTransform

EmbeddingWindowTransform calls the model’s encode_window method internally. As we have discussed, these features are not regressors, so their lags should be used for forecasting the future.

[18]:
from etna.transforms import EmbeddingWindowTransform
from etna.transforms import FilterFeaturesTransform


def forecast_with_window_embeddings(emb_model: BaseEmbeddingModel, training_params: dict) -> float:
    model = CatBoostMultiSegmentModel()

    output_dims = emb_model.output_dims

    emb_transform = EmbeddingWindowTransform(
        in_columns=["target"], embedding_model=emb_model, training_params=training_params, out_column="embedding_window"
    )
    lag_emb_transforms = [
        LagTransform(in_column=f"embedding_window_{i}", lags=[HORIZON], out_column=f"lag_emb_{i}")
        for i in range(output_dims)
    ]
    filter_transforms = FilterFeaturesTransform(exclude=[f"embedding_window_{i}" for i in range(output_dims)])

    transforms = [lag_transform] + [emb_transform] + lag_emb_transforms + [filter_transforms]

    pipeline = Pipeline(model=model, transforms=transforms, horizon=HORIZON)
    metrics_df, _, _ = pipeline.backtest(ts, metrics=[SMAPE()], n_folds=3)
    smape_score = metrics_df["SMAPE"].mean()
    return smape_score
[19]:
emb_model = TSTCCEmbeddingModel(input_dims=1, tc_hidden_dim=16, depth=3, output_dims=6, device=device)
training_params = {"n_epochs": 10}
smape_score = forecast_with_window_embeddings(emb_model, training_params)
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:  1.4min
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:  2.7min
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:  4.2min
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:  4.2min
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:   18.4s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:   37.1s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:   55.4s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:   55.4s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[20]:
print("SMAPE: ", smape_score)
SMAPE:  116.00285960604187

Oops… What about TS2VecEmbeddingModel?

[21]:
emb_model = TS2VecEmbeddingModel(input_dims=1, hidden_dims=16, depth=3, output_dims=6, device=device)
training_params = {"n_epochs": 10}
smape_score = forecast_with_window_embeddings(emb_model, training_params)
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:   47.9s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:  1.7min
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:  2.6min
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:  2.6min
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:   17.2s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:   34.0s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:   51.0s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:   51.0s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   2 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[Parallel(n_jobs=1)]: Done   3 tasks      | elapsed:    0.1s
[22]:
print("SMAPE: ", smape_score)
SMAPE:  31.71456493613132

Window embeddings don’t help on this dataset. This means you should try both models and both transforms to get the best results.
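
A sketch of such a comparison, reusing the helper functions and constructor arguments from the cells above (hyperparameters are illustrative, and a fresh model is created for every run):

results = {}
for transform_name, forecast_fn in [
    ("segment", forecast_with_segment_embeddings),
    ("window", forecast_with_window_embeddings),
]:
    for model_name, emb_model in [
        ("ts2vec", TS2VecEmbeddingModel(input_dims=1, hidden_dims=16, depth=3, output_dims=6, device=device)),
        ("tstcc", TSTCCEmbeddingModel(input_dims=1, tc_hidden_dim=16, depth=3, output_dims=6, device=device)),
    ]:
        results[(model_name, transform_name)] = forecast_fn(emb_model, {"n_epochs": 10})

# The (model, transform) pair with the lowest SMAPE wins
best_model, best_transform = min(results, key=results.get)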

3. Saving and loading models

If you have a pretrained embedding model and aren’t going to train it on calling fit, you should “freeze” the training loop. This is helpful when using the model inside transforms, which call the fit method on each fit of the pipeline.

[23]:
MODEL_PATH = "model.zip"
[24]:
emb_model.freeze()
emb_model.save(MODEL_PATH)

Now you are ready to load the pretrained model.

[25]:
model_loaded = TS2VecEmbeddingModel.load(MODEL_PATH)

If you need to fine-tune the pretrained model, you should “unfreeze” the training loop. After that, the model will be trained on each call of the fit method.

[26]:
model_loaded.freeze(is_freezed=False)

To check whether the model is “frozen”, use the is_freezed property.

[27]:
model_loaded.is_freezed
[27]:
False

4. Loading external pretrained models

In this section we introduce our pretrained embedding models.

[28]:
HORIZON = 12

ts = load_dataset("tourism_monthly")
ts = TSDataset(ts.to_pandas(features=["target"]), freq=None)
ts.head()
[28]:
segment m1 m10 m100 m101 m102 m103 m104 m105 m106 m107 ... m90 m91 m92 m93 m94 m95 m96 m97 m98 m99
feature target target target target target target target target target target ... target target target target target target target target target target
timestamp
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN 4.0 329.0 1341.0 319.0 1419.0 462.0 921.0 3118.0 ... 7301.0 4374.0 803.0 191.0 124.0 319.0 270.0 36.0 109.0 38.0
4 NaN NaN 40.0 439.0 1258.0 315.0 1400.0 550.0 1060.0 2775.0 ... 13980.0 3470.0 963.0 265.0 283.0 690.0 365.0 31.0 158.0 74.0

5 rows × 366 columns

Our baseline pipeline with lags.

[29]:
model = CatBoostMultiSegmentModel()

lag_transform = LagTransform(in_column="target", lags=list(range(HORIZON, HORIZON + 6)), out_column="lag")

pipeline = Pipeline(model=model, transforms=[lag_transform], horizon=HORIZON)
metrics_df, _, _ = pipeline.backtest(ts, metrics=[SMAPE()], n_folds=1)
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    3.7s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    3.7s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.3s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.3s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[30]:
print("SMAPE: ", metrics_df["SMAPE"].mean())
SMAPE:  18.80136468764402

It is often useful to encode segments with SegmentEncoderTransform when using multi-segment models, as we do here.

[31]:
from etna.transforms import SegmentEncoderTransform

model = CatBoostMultiSegmentModel()

lag_transform = LagTransform(in_column="target", lags=list(range(HORIZON, HORIZON + 6)), out_column="lag")
segment_transform = SegmentEncoderTransform()

pipeline = Pipeline(model=model, transforms=[lag_transform, segment_transform], horizon=HORIZON)
metrics_df, _, _ = pipeline.backtest(ts, metrics=[SMAPE()], n_folds=1)
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    9.2s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    9.2s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.6s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.6s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[32]:
print("SMAPE: ", metrics_df["SMAPE"].mean())
SMAPE:  18.719919206298737

Segment embeddings from EmbeddingSegmentTransform can replace the feature produced by SegmentEncoderTransform. The main advantage of segment embeddings is that your trained pipeline can forecast new segments, whereas SegmentEncoderTransform can’t work with segments that weren’t present during training.
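
A conceptual sketch of this advantage, assuming an etna version where Pipeline.forecast accepts a ts argument; new_segments_ts is a hypothetical TSDataset whose segments were unseen during training:

emb_transform = EmbeddingSegmentTransform(
    in_columns=["target"], embedding_model=emb_model, training_params={}, out_column="emb"
)
pipeline = Pipeline(
    model=CatBoostMultiSegmentModel(), transforms=[lag_transform, emb_transform], horizon=HORIZON
)
pipeline.fit(ts)
# Embeddings are computed from the series values rather than from segment identity,
# so the fitted pipeline can be applied to a dataset containing new segments
forecast_ts = pipeline.forecast(ts=new_segments_ts)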

To see the available pretrained embedding models, use the list_models method of TS2VecEmbeddingModel or TSTCCEmbeddingModel.

[33]:
TS2VecEmbeddingModel.list_models()
[33]:
['ts2vec_tiny']

Let’s load the ts2vec_tiny pretrained model.

[34]:
emb_model = TS2VecEmbeddingModel.load(path="ts2vec_model.zip", model_name="ts2vec_tiny")

smape_score = forecast_with_segment_embeddings(emb_model, n_folds=1)
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    6.9s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    6.9s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    1.8s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    1.8s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[Parallel(n_jobs=1)]: Done   1 tasks      | elapsed:    0.0s
[35]:
print("SMAPE: ", smape_score)
SMAPE:  18.436162523492154

We get a better result compared to SegmentEncoderTransform, along with the opportunity to use the pipeline for new segments.