To see MLServer in action, check out the examples below. These are end-to-end notebooks showing how to serve models with MLServer.
If you are interested in how MLServer interacts with a particular model framework, check out the following examples, which showcase the inference runtimes that ship with MLServer out of the box. Note that, for advanced use cases, you can also write your own custom inference runtime (see the example on custom models below).
- [Serving Scikit-Learn models](./sklearn/README.md)
- [Serving XGBoost models](./xgboost/README.md)
- [Serving LightGBM models](./lightgbm/README.md)
- [Serving Tempo pipelines](./tempo/README.md)
- [Serving MLflow models](./mlflow/README.md)
- [Serving custom models](./custom/README.md)
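Across these runtimes, a model is typically wired up through a `model-settings.json` file placed next to the model artifact, where `implementation` points at the runtime class. A minimal sketch for the Scikit-Learn runtime (the model name and URI here are hypothetical placeholders):

```json
{
  "name": "my-sklearn-model",
  "implementation": "mlserver_sklearn.SKLearnModel",
  "parameters": {
    "uri": "./model.joblib"
  }
}
```

With a file like this in place, `mlserver start .` loads the model and exposes it over MLServer's REST and gRPC endpoints.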
```{toctree}
:caption: Inference Runtimes
:titlesonly:
:hidden:

./sklearn/README.md
./xgboost/README.md
./lightgbm/README.md
./tempo/README.md
./mlflow/README.md
./custom/README.md
```
To see some of the advanced features included in MLServer (e.g. multi-model serving), check out the examples below.
- [Multi-Model Serving with multiple frameworks](./mms/README.md)
- [Loading / unloading models from a model repository](./model-repository/README.md)
- [Content-Type Decoding](./content-type/README.md)
- [Custom Conda environment](./conda/README.md)
- [Serving custom models requiring JSON inputs or outputs](./custom-json/README.md)
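However models are loaded, the examples above all talk to them through the same V2 (Open Inference Protocol) request body. As a minimal sketch, this is the kind of payload the notebooks POST to `/v2/models/<model-name>/infer` (the input name, shape, and values are hypothetical):

```python
import json

# A V2 (Open Inference Protocol) inference request body.
# The input name, shape, and values are hypothetical placeholders;
# "datatype" uses the protocol's tensor type names (e.g. FP32, INT64).
inference_request = {
    "inputs": [
        {
            "name": "predict",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

# Serialise to JSON, as sent over the REST endpoint.
body = json.dumps(inference_request)
print(body)
```

The gRPC endpoint uses the same tensor structure, expressed as protobuf messages instead of JSON.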
```{toctree}
:caption: MLServer Features
:titlesonly:
:hidden:

./mms/README.md
./model-repository/README.md
./content-type/README.md
./conda/README.md
./custom-json/README.md
```