etna.loggers.ClearMLLogger#

class ClearMLLogger(project_name: str | None = None, task_name: str | None = None, task_name_prefix: str = '', task_type: str = 'training', tags: Sequence[str] | None = None, output_uri: str | bool | None = None, auto_connect_frameworks: bool | Mapping[str, bool | str | list] = False, auto_resource_monitoring: bool | Mapping[str, Any] = True, auto_connect_streams: bool | Mapping[str, bool] = True, plot: bool = True, table: bool = True, config: Dict[str, Any] | None = None, save_dir: str | None = None)[source]#

Bases: BaseLogger

ClearML logger.

Note

This logger requires the clearml extension to be installed. Read more about this at the installation page.

Warning

There is a possibility that aggregated metrics charts may be logged incorrectly. For more details see the issue.

Create an instance of ClearMLLogger.

Parameters:
  • project_name (str | None) – The name of the project in which the experiment will be created.

  • task_name (str | None) – The name of Task (experiment).

  • task_name_prefix (str) – Prefix for the Task name field.

  • task_type (str) – The task type.

  • tags (Sequence[str] | None) – Add a list of tags (str) to the created Task.

  • output_uri (str | bool | None) – The default location for output models and other artifacts.

  • auto_connect_frameworks (bool | Mapping[str, bool | str | list]) – Automatically connect frameworks.

  • auto_resource_monitoring (bool | Mapping[str, Any]) – Automatically create machine resource monitoring plots.

  • auto_connect_streams (bool | Mapping[str, bool]) – Control the automatic logging of stdout and stderr.

  • plot (bool) – Indicator for making and sending plots.

  • table (bool) – Indicator for making and sending tables.

  • config (Dict[str, Any] | None) – A dictionary-like object for saving inputs to your job, like hyperparameters for a model or settings for a data preprocessing job.

  • save_dir (str | None) – Path to the directory for saving intermediate data. Used only when logging DL models. Defaults to ./tb_save

Notes

For more details see the documentation.
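
A minimal usage sketch (assumes the clearml extension is installed and ClearML credentials are configured; the project and task names below are placeholders): create the logger and register it with etna's global tslogger so that subsequent runs are reported to ClearML.

>>> from etna.loggers import ClearMLLogger, tslogger
>>> clearml_logger = ClearMLLogger(
...     project_name="demo_project",  # hypothetical project name
...     task_name="naive_backtest",  # hypothetical task name
...     tags=["demo"],
...     config={"horizon": 3},  # any hyperparameters worth recording
... )
>>> _ = tslogger.add(clearml_logger)  # register globally for subsequent runs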

Methods

finish_experiment(*args, **kwargs)

Finish Task.

init_task()

Reinit Task.

log(msg, **kwargs)

Log any event.

log_backtest_metrics(ts, metrics_df, ...)

Write metrics to logger.

log_backtest_run(metrics, forecast, test)

Log backtest metrics from one fold to the logger.

set_params(**params)

Return new object instance with modified parameters.

start_experiment([job_type, group])

Start Task.

to_dict()

Collect all information about the etna object into a dict.

Attributes

This class stores its __init__ parameters as attributes.

pl_logger

PyTorch Lightning loggers.

finish_experiment(*args, **kwargs)[source]#

Finish Task.

init_task()[source]#

Reinit Task.

log(msg: str | Dict[str, Any], **kwargs)[source]#

Log any event.

This class logs the string representation of a message.

Parameters:
  • msg (str | Dict[str, Any]) – Message or dict to log

  • kwargs – Additional parameters for particular implementation
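
A hedged sketch of logging a custom message or dict of values (assumes a ClearMLLogger has already been registered via tslogger.add() and an experiment has been started; the messages are illustrative):

>>> from etna.loggers import tslogger
>>> tslogger.log("Preprocessing finished")  # plain text message
>>> tslogger.log({"n_segments": 10, "horizon": 3})  # dict of values to log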

log_backtest_metrics(ts: TSDataset, metrics_df: DataFrame, forecast_df: DataFrame, fold_info_df: DataFrame)[source]#

Write metrics to logger.

Parameters:
  • ts (TSDataset) – TSDataset with backtest data

  • metrics_df (DataFrame) – Dataframe produced with etna.pipeline.Pipeline._get_backtest_metrics()

  • forecast_df (DataFrame) – Forecast from backtest

  • fold_info_df (DataFrame) – Fold information from backtest
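
This method is typically invoked by Pipeline.backtest() for every logger registered in tslogger rather than called directly. A hedged sketch of triggering it through a backtest (ts is assumed to be an existing TSDataset):

>>> from etna.loggers import ClearMLLogger, tslogger
>>> from etna.metrics import MAE
>>> from etna.models import NaiveModel
>>> from etna.pipeline import Pipeline
>>> _ = tslogger.add(ClearMLLogger(project_name="demo_project", task_name="backtest"))
>>> pipeline = Pipeline(model=NaiveModel(lag=1), horizon=3)
>>> backtest_result = pipeline.backtest(ts=ts, metrics=[MAE()], n_folds=2)  # metrics are sent to ClearML automatically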

log_backtest_run(metrics: DataFrame, forecast: DataFrame, test: DataFrame)[source]#

Log backtest metrics from one fold to the logger.

Parameters:
  • metrics (DataFrame) – Dataframe with metrics from backtest fold

  • forecast (DataFrame) – Dataframe with forecast

  • test (DataFrame) – Dataframe with ground truth

set_params(**params: dict) Self[source]#

Return new object instance with modified parameters.

This method also allows changing parameters of nested objects within the current object. For example, it is possible to change parameters of a model in a Pipeline.

Nested parameters are expected to be in a <component_1>.<...>.<parameter> form, where components are separated by a dot.

Parameters:

**params (dict) – Estimator parameters

Returns:

New instance with changed parameters

Return type:

Self

Examples

>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )
start_experiment(job_type: str | None = None, group: str | None = None, *args, **kwargs)[source]#

Start Task.

Complete logger initialization or reinitialize it before the next experiment with the same name.

Parameters:
  • job_type (str | None) – Specify the type of task, which is useful when you’re grouping runs together into larger experiments using group.

  • group (str | None) – Specify a group to organize individual tasks into a larger experiment.
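
A hedged sketch of driving the Task lifecycle manually instead of relying on Pipeline.backtest() (names and values are illustrative):

>>> logger = ClearMLLogger(project_name="demo_project", task_name="manual_run")
>>> logger.start_experiment(job_type="training", group="fold_0")  # init or reinit the Task
>>> logger.log("running fold 0")
>>> logger.finish_experiment()  # close the Task when done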

to_dict()[source]#

Collect all information about the etna object into a dict.

property pl_logger[source]#

PyTorch Lightning loggers.