Puncc (short for Predictive uncertainty calibration and conformalization) is an open-source Python library. It seamlessly integrates a collection of state-of-the-art conformal prediction algorithms and associated techniques for diverse machine learning tasks, including regression, classification, object detection and anomaly detection.
Puncc can be used with any predictive model to provide rigorous uncertainty estimations.
Under data exchangeability (or i.i.d.) assumptions, the generated prediction sets are guaranteed to cover the true outputs within a user-defined error rate.
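Formally, for a user-chosen error rate $\alpha$, this is the standard marginal coverage guarantee of conformal prediction:

$$\mathbb{P}\big(Y_{\text{new}} \in C(X_{\text{new}})\big) \geq 1 - \alpha$$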
Documentation is available online.
- Installation
- Documentation
- Tutorials
- QuickStart
- Citation
- Contributing
- Acknowledgments
- Creators
- License
puncc requires Python 3.8 or higher and depends on several libraries, including scikit-learn and NumPy. We recommend installing puncc in a virtual environment to avoid conflicts with your system's dependencies.
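For example, you can create and activate an isolated environment with Python's built-in `venv` module (the environment name `.venv` is just an illustrative choice):

```sh
python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
```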
You can directly install the library using pip:
```sh
pip install puncc
```
For comprehensive documentation, we encourage you to visit the official documentation page.
We highly recommend following the introductory tutorials to get familiar with the library and its API.
Conformal prediction transforms point predictions into interval predictions with a guaranteed probability of coverage. The figure below shows the result of applying the split conformal algorithm to a linear regressor.

Many conformal prediction algorithms can easily be applied using puncc. The code snippet below shows an example of split conformal prediction with a pretrained linear model:
```python
from deel.puncc.api.prediction import BasePredictor
from deel.puncc.regression import SplitCP

# Load calibration and test data
# ...

# Pretrained regression model
# trained_linear_model = ...

# Wrap the model to enable interoperability with different ML libraries
trained_predictor = BasePredictor(trained_linear_model)

# Instantiate the split conformal wrapper for the linear model.
# The train argument is set to False because we do not want to retrain the model
split_cp = SplitCP(trained_predictor, train=False)

# With a calibration dataset, compute (and store) nonconformity scores
split_cp.fit(X_calib=X_calib, y_calib=y_calib)

# Obtain the model's point prediction y_pred and prediction interval
# PI = [y_pred_lower, y_pred_upper] for a target coverage of 90% (1-alpha)
y_pred, y_pred_lower, y_pred_upper = split_cp.predict(X_test, alpha=0.1)
```
The library provides several metrics (`deel.puncc.metrics`) and plotting capabilities (`deel.puncc.plotting`) to evaluate and visualize the results of a conformal procedure. For a target error rate of α = 0.1, the resulting prediction intervals are expected to achieve an empirical coverage of at least 90%.
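Continuing the example above, the empirical coverage and average width of the intervals can be checked as follows (a minimal sketch assuming the `regression_mean_coverage` and `regression_sharpness` helpers, with `y_true` denoting the test labels):

```python
from deel.puncc import metrics

# Fraction of test points whose true value falls inside the interval
coverage = metrics.regression_mean_coverage(y_true, y_pred_lower, y_pred_upper)

# Average width of the prediction intervals (smaller is sharper)
width = metrics.regression_sharpness(
    y_pred_lower=y_pred_lower, y_pred_upper=y_pred_upper
)

print(f"Empirical coverage: {coverage:.2%}, average width: {width:.2f}")
```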
Puncc provides two ways of defining and using conformal prediction wrappers:
- A direct approach to run state-of-the-art conformal prediction procedures. This is what we used in the previous conformal regression example.
- Low-level API: a more flexible approach based on full customization of the prediction model, the choice of nonconformity scores, and the split between fit and calibration datasets.
A quick comparison of both approaches is provided in the API tutorial for a regression problem.
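For a taste of the low-level API, the split conformal procedure above can be reassembled from its building blocks. The following is a minimal sketch assuming the components documented in `deel.puncc.api` (`ConformalPredictor`, `BaseCalibrator`, `RandomSplitter`, the `nonconformity_scores.mad` score, and the `prediction_sets.constant_interval` set constructor); exact signatures may differ:

```python
from deel.puncc.api.conformalization import ConformalPredictor
from deel.puncc.api.prediction import BasePredictor
from deel.puncc.api.calibration import BaseCalibrator
from deel.puncc.api.splitting import RandomSplitter
from deel.puncc.api import nonconformity_scores, prediction_sets

# Wrap the pretrained model, as in the direct approach
predictor = BasePredictor(trained_linear_model)

# Calibrator: mean-absolute-deviation nonconformity scores paired with
# constant-width intervals reproduce split conformal regression
calibrator = BaseCalibrator(
    nonconf_score_func=nonconformity_scores.mad,
    pred_set_func=prediction_sets.constant_interval,
)

# Splitter: randomly assign a fraction of the data to calibration
splitter = RandomSplitter(ratio=0.5)

# Assemble the conformal predictor; train=False reuses the pretrained model
conformal_predictor = ConformalPredictor(
    predictor=predictor, calibrator=calibrator, splitter=splitter, train=False
)
conformal_predictor.fit(X, y)
y_pred, y_pred_lower, y_pred_upper = conformal_predictor.predict(X_test, alpha=0.1)
```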
Overview of Implemented Methods from the Literature:

| Procedure Type | Procedure Name | Description (more details in Theory overview) |
|---|---|---|
| Conformal Regression | `deel.puncc.regression.SplitCP` | Split Conformal Regression |
| Conformal Regression | `deel.puncc.regression.LocallyAdaptiveCP` | Locally Adaptive Conformal Regression |
| Conformal Regression | `deel.puncc.regression.CQR` | Conformalized Quantile Regression |
| Conformal Regression | `deel.puncc.regression.CvPlus` | CV+ (cross-validation) |
| Conformal Regression | `deel.puncc.regression.EnbPI` | Ensemble Batch Prediction Intervals method |
| Conformal Regression | `deel.puncc.regression.aEnbPI` | Locally adaptive Ensemble Batch Prediction Intervals method |
| Conformal Classification | `deel.puncc.classification.LAC` | Least Ambiguous Set-Valued Classifiers |
| Conformal Classification | `deel.puncc.classification.APS` | Adaptive Prediction Sets |
| Conformal Classification | `deel.puncc.classification.RAPS` | Regularized Adaptive Prediction Sets (APS is a special case where λ = 0) |
| Conformal Anomaly Detection | `deel.puncc.anomaly_detection.SplitCAD` | Split Conformal Anomaly Detection (used to control the maximum false positive rate) |
| Conformal Object Detection | `deel.puncc.object_detection.SplitBoxWise` | Box-wise split conformal object detection |
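As the table suggests, the other tasks follow the same fit/predict pattern as regression. Here is a minimal sketch of conformal classification with `LAC`, assuming a pretrained classifier `trained_classifier` whose wrapped `predict` outputs class probabilities (the returned pair of point predictions and prediction sets is our assumption of the API shape):

```python
from deel.puncc.api.prediction import BasePredictor
from deel.puncc.classification import LAC

# Wrap a pretrained classifier; LAC expects class probabilities as outputs
predictor = BasePredictor(trained_classifier)

# Calibrate on held-out data without retraining the model
lac = LAC(predictor, train=False)
lac.fit(X_calib=X_calib, y_calib=y_calib)

# Prediction sets that contain the true label with probability >= 90%
y_pred, set_pred = lac.predict(X_test, alpha=0.1)
```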
If you use our library for your work, please cite our paper:
```bibtex
@inproceedings{mendil2023puncc,
  title={PUNCC: a Python Library for Predictive Uncertainty Calibration and Conformalization},
  author={Mendil, Mouhcine and Mossina, Luca and Vigouroux, David},
  booktitle={Conformal and Probabilistic Prediction with Applications},
  pages={582--601},
  year={2023},
  organization={PMLR}
}
```
Puncc has been used to support the work presented in our COPA 2022 paper on conformal prediction for time series.
```bibtex
@inproceedings{mendil2022robust,
  title={Robust Gas Demand Forecasting With Conformal Prediction},
  author={Mendil, Mouhcine and Mossina, Luca and Nabhan, Marc and Pasini, Kevin},
  booktitle={Conformal and Probabilistic Prediction with Applications},
  pages={169--187},
  year={2022},
  organization={PMLR}
}
```
Puncc's development team is a group of passionate scientists and engineers committed to developing dependable and user-friendly open-source software. We are always looking for new contributors to this initiative. If you are interested in helping us develop puncc, please feel free to get involved.
Contributions are welcome! Feel free to report an issue or open a pull request. Take a look at our guidelines here.
The package is released under the MIT license.