etna.models.nn.NBeatsInterpretableModel

class NBeatsInterpretableModel(input_size: int, output_size: int, loss: Literal['mse'] | Literal['mae'] | Literal['smape'] | Literal['mape'] | Module = 'mse', trend_blocks: int = 3, trend_layers: int = 4, trend_layer_size: int = 256, degree_of_polynomial: int = 2, seasonality_blocks: int = 3, seasonality_layers: int = 4, seasonality_layer_size: int = 2048, num_of_harmonics: int = 1, lr: float = 0.001, window_sampling_limit: int | None = None, optimizer_params: dict | None = None, train_batch_size: int = 1024, test_batch_size: int = 1024, trainer_params: dict | None = None, train_dataloader_params: dict | None = None, test_dataloader_params: dict | None = None, val_dataloader_params: dict | None = None, split_params: dict | None = None, random_state: int | None = None)
Bases: NBeatsBaseModel

Interpretable N-BEATS model.

Paper: https://arxiv.org/pdf/1905.10437.pdf

Official implementation: ServiceNow/N-BEATS

Note: This model requires the torch extension to be installed. Read more about this at the installation page.

Init interpretable N-BEATS model.

- Parameters:
- input_size (int) – Input data size. 
- output_size (int) – Forecast size. 
- loss (Literal['mse'] | Literal['mae'] | Literal['smape'] | Literal['mape'] | torch.nn.Module) – Optimisation objective. The loss function should accept three arguments: y_true, y_pred and mask. The last parameter is a binary mask that denotes which points are valid forecasts. Several implemented loss functions are available in the etna.models.nn.nbeats.metrics module (a custom loss sketch is shown after this parameter list).
- trend_blocks (int) – Number of trend blocks. 
- trend_layers (int) – Number of inner layers in each trend block. 
- trend_layer_size (int) – Inner layer size in trend blocks. 
- degree_of_polynomial (int) – Polynomial degree for trend modeling. 
- seasonality_blocks (int) – Number of seasonality blocks. 
- seasonality_layers (int) – Number of inner layers in each seasonality block. 
- seasonality_layer_size (int) – Inner layer size in seasonality blocks. 
- num_of_harmonics (int) – Number of harmonics for seasonality estimation. 
- lr (float) – Optimizer learning rate. 
- window_sampling_limit (int | None) – Size of history for sampling training data. If set to None, the full series history is used for sampling.
- optimizer_params (dict | None) – Additional parameters for the Adam optimizer (api reference torch.optim.Adam).
- train_batch_size (int) – Batch size for training. 
- test_batch_size (int) – Batch size for testing. 
- trainer_params (dict | None) – Pytorch lightning trainer parameters (api reference lightning.pytorch.trainer.trainer.Trainer).
- train_dataloader_params (dict | None) – Parameters for train dataloader, e.g. a sampler (api reference torch.utils.data.DataLoader).
- test_dataloader_params (dict | None) – Parameters for test dataloader. 
- val_dataloader_params (dict | None) – Parameters for validation dataloader. 
- split_params (dict | None) – Dictionary with parameters for torch.utils.data.random_split() for train-test splitting:
- train_size: (float) value from 0 to 1 - fraction of samples to use for training 
- generator: (Optional[torch.Generator]) - generator for reproducible train-test splitting
- torch_dataset_size: (Optional[int]) - number of samples in the dataset, in case the dataset does not implement __len__
 
 
- random_state (int | None) – Random state for train batches generation. 
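A minimal construction sketch, assuming the torch extension is installed. The MaskedMAE module below is an illustrative stand-in (not part of etna) that follows the y_true, y_pred, mask convention described for the loss parameter; the chosen sizes, trainer_params and split_params values are arbitrary.

>>> import torch
>>> import torch.nn as nn
>>> from etna.models.nn import NBeatsInterpretableModel
>>> class MaskedMAE(nn.Module):
...     """Illustrative custom loss: mean absolute error over valid forecast points only."""
...     def forward(self, y_true, y_pred, mask):
...         # mask is binary: 1 marks valid forecast points, 0 marks padding
...         return torch.sum(mask * torch.abs(y_true - y_pred)) / torch.sum(mask)
...
>>> model = NBeatsInterpretableModel(
...     input_size=4 * 7,                   # four weeks of daily history as context
...     output_size=7,                      # forecast one week ahead
...     loss=MaskedMAE(),                   # or simply "mse" / "mae" / "smape" / "mape"
...     trend_blocks=3,
...     seasonality_blocks=3,
...     lr=1e-3,
...     trainer_params={"max_epochs": 10},
...     split_params={"train_size": 0.8},   # hold out a fraction of samples for validation
...     random_state=11,
... )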
 
Methods

- fit(ts) – Fit model.
- forecast(ts, prediction_size[, ...]) – Make predictions.
- get_model() – Get model.
- load(path[, ts]) – Load an object.
- params_to_tune() – Get default grid for tuning hyperparameters.
- predict(ts, prediction_size[, return_components]) – Make predictions.
- raw_fit(torch_dataset) – Fit model on torch like Dataset.
- raw_predict(torch_dataset) – Make inference on torch like Dataset.
- save(path) – Save the object.
- set_params(**params) – Return new object instance with modified parameters.
- to_dict() – Collect all information about etna object in dict.

Attributes

This class stores its __init__ parameters as attributes.

- context_size – Context size of the model.

fit(ts: TSDataset) → DeepBaseModel
Fit model.

Model continues training after each fit call.

- Parameters:
- ts (TSDataset) – TSDataset with features 
- Returns:
- Model after fit 
- Return type:
- DeepBaseModel 
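A short fit sketch; model is the instance constructed above, and the toy single-segment dataframe below is made up purely for illustration. The repeated call shows that training continues rather than restarting.

>>> import pandas as pd
>>> from etna.datasets import TSDataset
>>> df = pd.DataFrame(
...     {
...         "timestamp": pd.date_range("2021-01-01", periods=200, freq="D"),
...         "segment": "segment_a",
...         "target": list(range(200)),
...     }
... )
>>> ts = TSDataset(TSDataset.to_dataset(df), freq="D")  # long format -> etna wide format
>>> _ = model.fit(ts)  # first call trains from scratch
>>> _ = model.fit(ts)  # a second call continues training of the same weights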
 
forecast(ts: TSDataset, prediction_size: int, return_components: bool = False) → TSDataset
Make predictions.

This method will make autoregressive predictions.

- Parameters:
- ts (TSDataset) – Dataset with features.
- prediction_size (int) – Number of last timestamps to predict.
- return_components (bool) – If True, additionally return forecast components.
- Returns:
- Dataset with predictions
- Return type:
- TSDataset
 
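One common way to obtain these autoregressive forecasts is through a Pipeline, which builds the future dataframe and passes prediction_size for you; a sketch, assuming ts and model from the sketches above.

>>> from etna.pipeline import Pipeline
>>> HORIZON = 7
>>> pipeline = Pipeline(model=model, horizon=HORIZON)
>>> _ = pipeline.fit(ts)
>>> forecast_ts = pipeline.forecast()      # TSDataset with HORIZON future points per segment
>>> forecast_df = forecast_ts.to_pandas()  # wide dataframe of forecasts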
classmethod load(path: Path, ts: TSDataset | None = None) → Self
Load an object.

Warning: This method uses the dill module which is not secure. It is possible to construct malicious data which will execute arbitrary code during loading. Never load data that could have come from an untrusted source, or that could have been tampered with.
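A save/load round-trip sketch; the file name is arbitrary, and because of the dill warning above only artifacts you created yourself should be loaded.

>>> from pathlib import Path
>>> from etna.models.nn import NBeatsInterpretableModel
>>> path = Path("nbeats_interpretable.zip")
>>> model.save(path)
>>> loaded_model = NBeatsInterpretableModel.load(path)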
params_to_tune() → Dict[str, BaseDistribution]
Get default grid for tuning hyperparameters.

This grid tunes the parameters trend_blocks, trend_layers, trend_layer_size, degree_of_polynomial, seasonality_blocks, seasonality_layers, seasonality_layer_size and lr. Other parameters are expected to be set by the user.

- Returns:
- Grid to tune.
- Return type:
- Dict[str, BaseDistribution]
 
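A small sketch of inspecting the default grid; the exact distributions are defined by etna and are only read here.

>>> grid = model.params_to_tune()
>>> names = sorted(grid)  # e.g. "degree_of_polynomial", "lr", ..., "trend_layers"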
predict(ts: TSDataset, prediction_size: int, return_components: bool = False) → TSDataset
Make predictions.

This method will make predictions using true values instead of predicted ones on the previous step. It can be useful for making in-sample forecasts.

- Parameters:
- ts (TSDataset) – Dataset with features.
- prediction_size (int) – Number of last timestamps to predict.
- return_components (bool) – If True, additionally return prediction components.
- Returns:
- Dataset with predictions
- Return type:
- TSDataset
 
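An in-sample sketch based on the signature above, assuming ts from the fit sketch has enough history for the requested window.

>>> in_sample_ts = model.predict(ts, prediction_size=7)  # predict the last 7 in-sample points
>>> in_sample_df = in_sample_ts.to_pandas()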
raw_fit(torch_dataset: Dataset) → DeepBaseModel
Fit model on torch like Dataset.

- Parameters:
- torch_dataset (Dataset) – Torch like dataset for model fit 
- Returns:
- Model after fit 
- Return type:
- DeepBaseModel 
 
raw_predict(torch_dataset: Dataset) → Dict[Tuple[str, str], ndarray]
Make inference on torch like Dataset.

- Parameters:
- torch_dataset (Dataset) – Torch like dataset for model inference
- Returns:
- Dictionary with predictions
- Return type:
- Dict[Tuple[str, str], ndarray]
set_params(**params: dict) → Self
Return new object instance with modified parameters.

The method also allows changing parameters of nested objects within the current object. For example, it is possible to change parameters of a model in a Pipeline.

Nested parameters are expected to be in a <component_1>.<...>.<parameter> form, where components are separated by a dot.

- Parameters:
- **params (dict) – Estimator parameters 
- Returns:
- New instance with changed parameters 
- Return type:
- Self 
Examples

>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )