I am using the ClassificationQualityByClass metric for my classification ML model. My reference data contains 7 labels, but the current data only has examples for 3 of them. When I run a report with this data and metric, the tables for current and reference only show the metrics for the 3 labels that appear in the "current" data, not for all 7 labels in the reference data.
I checked the renderer class for this metric, and I can see that for the reference plot, the x-axis names are taken from the current_metrics frame instead of the reference_matrix_frame. Is this done on purpose? If not, I think a quick fix that would return the metrics for all labels in the reference data is to use
ref_names = ref_metrics_frame.columns.tolist()
x = list(map(str, ref_names))
for the reference plot (line 169 in evidently/metrics/classification_performance/quality_by_class_metrics.py).
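For illustration, here is a small self-contained sketch of the mismatch; the frame names and values below are made up, only the label counts mirror my setup:

import pandas as pd

# Hypothetical per-class metrics frames: reference has 7 labels,
# current only 3, as in my data.
ref_metrics_frame = pd.DataFrame(
    {label: [0.9] for label in ["a", "b", "c", "d", "e", "f", "g"]},
    index=["f1-score"],
)
curr_metrics_frame = pd.DataFrame(
    {label: [0.8] for label in ["a", "b", "c"]},
    index=["f1-score"],
)

# What the renderer does today: the reference x-axis reuses the
# current frame's columns, so 4 labels silently disappear.
x_ref_today = list(map(str, curr_metrics_frame.columns.tolist()))
print(x_ref_today)  # ['a', 'b', 'c']

# Proposed fix: derive the reference x-axis from the reference frame.
ref_names = ref_metrics_frame.columns.tolist()
x_ref_fixed = list(map(str, ref_names))
print(x_ref_fixed)  # ['a', 'b', 'c', 'd', 'e', 'f', 'g']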
Otherwise, is there a way to override the original renderer plots?
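For context, the kind of override I have in mind is an untested sketch like this: subclass the built-in renderer and re-register it for the metric. This assumes the renderer class is named ClassificationQualityByClassRenderer and that a later default_renderer registration for the same metric type shadows the built-in one, neither of which I have verified.

from evidently.metrics import ClassificationQualityByClass
from evidently.metrics.classification_performance.quality_by_class_metrics import (
    ClassificationQualityByClassRenderer,
)
from evidently.renderers.base_renderer import default_renderer

# Untested: re-registering for the same wrap_type should (I assume)
# replace the built-in renderer for this metric.
@default_renderer(wrap_type=ClassificationQualityByClass)
class PatchedQualityByClassRenderer(ClassificationQualityByClassRenderer):
    def render_html(self, obj):
        widgets = super().render_html(obj)
        # ...rebuild the reference plot's x-axis from the reference
        # metrics frame here, as proposed above...
        return widgets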