diff --git a/docs/book/.gitbook/assets/coordinates-view.png b/docs/book/.gitbook/assets/coordinates-view.png
new file mode 100644
index 00000000000..0c52194430d
Binary files /dev/null and b/docs/book/.gitbook/assets/coordinates-view.png differ
diff --git a/docs/book/.gitbook/assets/experiment_comparison_video.png b/docs/book/.gitbook/assets/experiment_comparison_video.png
new file mode 100644
index 00000000000..2cfd746b2bc
Binary files /dev/null and b/docs/book/.gitbook/assets/experiment_comparison_video.png differ
diff --git a/docs/book/.gitbook/assets/table-view.png b/docs/book/.gitbook/assets/table-view.png
new file mode 100644
index 00000000000..4a1778f9178
Binary files /dev/null and b/docs/book/.gitbook/assets/table-view.png differ
diff --git a/docs/book/how-to/model-management-metrics/track-metrics-metadata/README.md b/docs/book/how-to/model-management-metrics/track-metrics-metadata/README.md
index fd27d792107..ca83cd3407c 100644
--- a/docs/book/how-to/model-management-metrics/track-metrics-metadata/README.md
+++ b/docs/book/how-to/model-management-metrics/track-metrics-metadata/README.md
@@ -1,15 +1,13 @@
 ---
 icon: ufo-beam
-description: Tracking metrics and metadata
+description: Tracking and comparing metrics and metadata
 ---
 
 # Track metrics and metadata
 
-ZenML provides a unified way to log and manage metrics and metadata through
-the `log_metadata` function. This versatile function allows you to log
-metadata across various entities like models, artifacts, steps, and runs
-through a single interface. Additionally, you can adjust if you want to
-automatically the same metadata for the related entities.
+ZenML provides a unified way to log and manage metrics and metadata through the `log_metadata` function. This versatile function allows you to log metadata across various entities like models, artifacts, steps, and runs through a single interface. Additionally, you can choose whether the same metadata should automatically be logged for related entities.
+ +## Logging Metadata ### The most basic use-case @@ -24,14 +22,81 @@ def my_step() -> ...: ... ``` -This will log the `accuracy` for the step, its pipeline run, and if provided -its model version. +This will log the `accuracy` for the step, its pipeline run, and if provided its model version. + +### A real-world example + +Here's a more comprehensive example showing how to log various types of metadata in a machine learning pipeline: + +```python +from zenml import step, pipeline, log_metadata + +@step +def process_engine_metrics() -> float: + # does some machine learning things + + # Log operational metrics + log_metadata( + metadata={ + "engine_temperature": 3650, # Kelvin + "fuel_consumption_rate": 245, # kg/s + "thrust_efficiency": 0.92, + } + ) + return 0.92 + +@step +def analyze_flight_telemetry(efficiency: float) -> None: + # does some more machine learning things + + # Log performance metrics + log_metadata( + metadata={ + "altitude": 220000, # meters + "velocity": 7800, # m/s + "fuel_remaining": 2150, # kg + "mission_success_prob": 0.9985, + } + ) + +@pipeline +def telemetry_pipeline(): + efficiency = process_engine_metrics() + analyze_flight_telemetry(efficiency) +``` + +This data can be visualized and compared in the ZenML Pro dashboard. The +illustrations below show the data from this example in the [ZenML Pro](https://www.zenml.io/pro) dashboard +using the Experiment Comparison tool. + +{% hint style="warning" %} +This feature is currently in Alpha Preview. We encourage you to share feedback about your use cases and requirements through our Slack community. +{% endhint %} + +## Visualizing and Comparing Metadata (Pro) + +Once you've logged metadata in your pipelines, you can use ZenML's Experiment Comparison tool to analyze and compare metrics across different runs. This feature is available in the [ZenML Pro](https://www.zenml.io/pro) dashboard. 
+ +[![Experiment Comparison Introduction Video](../../../../book/.gitbook/assets/experiment_comparison_video.png)](https://www.loom.com/share/693b2d829600492da7cd429766aeba6a?sid=7182e55b-31e9-4b38-a3be-07c989dbea32) + +### Comparison Views + +The Experiment Comparison tool offers two complementary views for analyzing your pipeline metadata: + +1. **Table View**: Compare metadata across runs with automatic change tracking + +![Table View](../../../../book/.gitbook/assets/table-view.png) + +2. **Parallel Coordinates Plot**: Visualize relationships between different metrics + +![Parallel Coordinates](../../../../book/.gitbook/assets/coordinates-view.png) + +The tool lets you compare up to 20 pipeline runs simultaneously and supports any +numerical metadata (`float` or `int`) that you've logged in your pipelines. ### Additional use-cases -The `log_metadata` function also supports various use-cases by allowing you to -specify the target entity (e.g., model, artifact, step, or run) with flexible -parameters. You can learn more about these use-cases in the following pages: +The `log_metadata` function supports various use-cases by allowing you to specify the target entity (e.g., model, artifact, step, or run) with flexible parameters. You can learn more about these use-cases in the following pages: - [Log metadata to a step](attach-metadata-to-a-step.md) - [Log metadata to a run](attach-metadata-to-a-run.md) @@ -39,10 +104,7 @@ parameters. You can learn more about these use-cases in the following pages: - [Log metadata to a model](attach-metadata-to-a-model.md) {% hint style="warning" %} -The older methods for logging metadata to specific entities, such as -`log_model_metadata`, `log_artifact_metadata`, and `log_step_metadata`, are -now deprecated. It is recommended to use `log_metadata` for all future -implementations. 
+The older methods for logging metadata to specific entities, such as `log_model_metadata`, `log_artifact_metadata`, and `log_step_metadata`, are now deprecated. It is recommended to use `log_metadata` for all future implementations.
 {% endhint %}
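Since the Experiment Comparison tool only plots numerical metadata (`float` or `int`), it can help to separate comparable metrics from purely descriptive metadata before calling `log_metadata`. The helper below is a minimal illustrative sketch, not part of the ZenML API:

```python
def split_numeric_metadata(metadata: dict) -> tuple[dict, dict]:
    """Split metadata into comparison-friendly numeric metrics and the rest.

    Only int/float values show up in the Experiment Comparison tool, so
    strings and booleans are kept separately (bool is a subclass of int in
    Python and is excluded explicitly).
    """
    numeric = {
        key: value
        for key, value in metadata.items()
        if isinstance(value, (int, float)) and not isinstance(value, bool)
    }
    other = {key: value for key, value in metadata.items() if key not in numeric}
    return numeric, other


metrics, extras = split_numeric_metadata({
    "thrust_efficiency": 0.92,   # comparable across runs
    "engine_temperature": 3650,  # comparable across runs
    "engine_id": "raptor-3",     # descriptive only
    "nominal": True,             # excluded: bool, not a plottable metric
})
# metrics -> {"thrust_efficiency": 0.92, "engine_temperature": 3650}
```

You could then pass `metrics` to `log_metadata` inside a step and keep the non-numeric `extras` wherever descriptive context belongs.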
diff --git a/docs/book/user-guide/starter-guide/manage-artifacts.md b/docs/book/user-guide/starter-guide/manage-artifacts.md index e6464d41f0c..14e36bb327e 100644 --- a/docs/book/user-guide/starter-guide/manage-artifacts.md +++ b/docs/book/user-guide/starter-guide/manage-artifacts.md @@ -167,7 +167,58 @@ def annotation_approach() -> ( return "string" ``` -### Specify a type for your artifacts +## Comparing metadata across runs (Pro) + +The [ZenML Pro](https://www.zenml.io/pro) dashboard includes an Experiment Comparison tool that allows you to visualize and analyze metadata across different pipeline runs. This feature helps you understand patterns and changes in your pipeline's behavior over time. + +### Using the comparison views + +The tool offers two complementary views for analyzing your metadata: + +#### Table View +The tabular view provides a structured comparison of metadata across runs: + +![Comparing metadata values across different pipeline runs in table view.](../../../book/.gitbook/assets/table-view.png) + +This view automatically calculates changes between runs and allows you to: + +* Sort and filter metadata values +* Track changes over time +* Compare up to 20 runs simultaneously + +#### Parallel Coordinates View +The parallel coordinates visualization helps identify relationships between different metadata parameters: + +![Comparing metadata values across different pipeline runs in parallel coordinates view.](../../../book/.gitbook/assets/coordinates-view.png) + +This view is particularly useful for: + +* Discovering correlations between different metrics +* Identifying patterns across pipeline runs +* Filtering and focusing on specific parameter ranges + +### Accessing the comparison tool + +To compare metadata across runs: + +1. Navigate to any pipeline in your dashboard +2. Click the "Compare" button in the top navigation +3. Select the runs you want to compare +4. 
Switch between table and parallel coordinates views using the tabs + +{% hint style="info" %} +The comparison tool works with any numerical metadata (`float` or `int`) that you've logged in your pipelines. Make sure to log meaningful metrics in your steps to make the most of this feature. +{% endhint %} + +### Sharing comparisons + +The tool preserves your comparison configuration in the URL, making it easy to share specific views with team members. Simply copy and share the URL to allow others to see the same comparison with identical settings and filters. + +{% hint style="warning" %} +This feature is currently in Alpha Preview. We encourage you to share feedback about your use cases and requirements through our Slack community. +{% endhint %} + +## Specify a type for your artifacts Assigning a type to an artifact allows ZenML to highlight them differently in the dashboard and also lets you filter your artifacts better. @@ -193,7 +244,7 @@ model = ... save_artifact(model, name="model", artifact_type=ArtifactType.MODEL) ``` -### Consuming external artifacts within a pipeline +## Consuming external artifacts within a pipeline While most pipelines start with a step that produces an artifact, it is often the case to want to consume artifacts external from the pipeline. The `ExternalArtifact` class can be used to initialize an artifact within ZenML with any arbitrary data type. @@ -226,7 +277,7 @@ Optionally, you can configure the `ExternalArtifact` to use a custom [materializ Using an `ExternalArtifact` for your step automatically disables caching for the step. {% endhint %} -### Consuming artifacts produced by other pipelines +## Consuming artifacts produced by other pipelines It is also common to consume an artifact downstream after producing it in an upstream pipeline or step. 
As we have learned in the [previous section](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md#fetching-artifacts-directly), the `Client` can be used to fetch artifacts directly inside the pipeline code: