etna.loggers.WandbLogger
class WandbLogger(name: str | None = None, entity: str | None = None, project: str | None = None, job_type: str | None = None, group: str | None = None, tags: List[str] | None = None, plot: bool = True, table: bool = True, name_prefix: str = '', config: Dict[str, Any] | None = None, log_model: bool = False)
Bases: BaseLogger

Weights & Biases logger.

Note

This logger requires the wandb extension to be installed. Read more about this at the installation page.

Create instance of WandbLogger. A short usage sketch follows the parameter list below.

Parameters:
- name (str | None) – Wandb run name. 
- entity (str | None) – An entity is a username or team name where you’re sending runs. 
- project (str | None) – The name of the project where you’re sending the new run.
- job_type (str | None) – Specify the type of run, which is useful when you’re grouping runs together into larger experiments using group. 
- group (str | None) – Specify a group to organize individual runs into a larger experiment. 
- tags (List[str] | None) – A list of strings, which will populate the list of tags on this run in the UI. 
- plot (bool) – Indicator for making and sending plots. 
- table (bool) – Indicator for making and sending tables. 
- name_prefix (str) – Prefix for the name field. 
- config (Dict[str, Any] | None) – This sets wandb.config, a dictionary-like object for saving inputs to your job, like hyperparameters for a model or settings for a data preprocessing job. 
- log_model (bool) – Log checkpoints created by pytorch_lightning.callbacks.ModelCheckpoint as W&B artifacts. The "latest" and "best" aliases are automatically set.
  - if log_model == 'all', checkpoints are logged during training.
  - if log_model == True, checkpoints are logged at the end of training, except when pytorch_lightning.callbacks.ModelCheckpoint.save_top_k == -1, which also logs every checkpoint during training.
  - if log_model == False (default), no checkpoint is logged.
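As a usage sketch (not part of the original reference; the run name, project name, tags, and config values are placeholders), the logger is typically registered with etna's global tslogger so that pipelines report to Weights & Biases:

>>> from etna.loggers import WandbLogger, tslogger
>>> wandb_logger = WandbLogger(
...     name="test_run",          # run name shown in the W&B UI
...     project="my_project",     # placeholder project name
...     tags=["docs", "example"],
...     config={"horizon": 3},    # stored in wandb.config
... )
>>> _ = tslogger.add(wandb_logger)  # register globally so etna objects log through it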
 
 
Methods

- finish_experiment() – Finish experiment.
- log(msg, **kwargs) – Log any event.
- log_backtest_metrics(ts, metrics_df, ...) – Write metrics to logger.
- log_backtest_run(metrics, forecast, test) – Backtest metrics from one fold to logger.
- reinit_experiment() – Reinit experiment.
- set_params(**params) – Return new object instance with modified parameters.
- start_experiment([job_type, group]) – Start experiment.
- to_dict() – Collect all information about etna object in dict.

Attributes

This class stores its __init__ parameters as attributes.

- experiment – Init experiment.
- pl_logger – Pytorch lightning loggers.

log(msg: str | Dict[str, Any], **kwargs)
Log any event, e.g. “Fitted segment segment_name”, to stderr output.

Parameters:

- msg (str | Dict[str, Any]) – Message or dict to log.
- **kwargs – Additional parameters for the particular implementation.

Notes

Only dictionary messages are logged to wandb.
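For illustration, a hedged sketch continuing from the constructor example above (the job_type and group values are placeholders): only the dictionary message is forwarded to the W&B run, per the note.

>>> wandb_logger.start_experiment(job_type="example", group="docs")
>>> wandb_logger.log({"MAE": 3.14, "fold": 0})    # sent to the wandb run
>>> wandb_logger.log("Fitted segment segment_a")  # accepted, but not sent to wandb
>>> wandb_logger.finish_experiment()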
log_backtest_metrics(ts: TSDataset, metrics_df: DataFrame, forecast_df: DataFrame, fold_info_df: DataFrame)
Write metrics to logger.
log_backtest_run(metrics: DataFrame, forecast: DataFrame, test: DataFrame)
Log backtest metrics from one fold to logger.
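Both backtest hooks above are normally invoked for you: once the logger is registered in tslogger, Pipeline.backtest reports each fold through log_backtest_run and the aggregated results through log_backtest_metrics. A sketch under that assumption, where "data.csv" is a placeholder long-format file with timestamp, segment, and target columns:

>>> import pandas as pd
>>> from etna.datasets import TSDataset
>>> from etna.metrics import MAE, SMAPE
>>> from etna.models import NaiveModel
>>> from etna.pipeline import Pipeline
>>> ts = TSDataset(TSDataset.to_dataset(pd.read_csv("data.csv")), freq="D")
>>> pipeline = Pipeline(model=NaiveModel(lag=1), horizon=3)
>>> # each fold triggers log_backtest_run; aggregated results are written
>>> # through log_backtest_metrics at the end of the backtest
>>> metrics_df, forecast_df, fold_info_df = pipeline.backtest(ts=ts, metrics=[MAE(), SMAPE()], n_folds=3)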
set_params(**params: dict) → Self
Return new object instance with modified parameters.

The method also allows changing parameters of nested objects within the current object. For example, it is possible to change parameters of a model in a Pipeline.

Nested parameters are expected to be in a <component_1>.<...>.<parameter> form, where components are separated by a dot.

Parameters:
- **params (dict) – Estimator parameters.
Returns:

New instance with changed parameters

Return type:

Self
Examples

>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )