These guides take you through the steps required to solve real-world problems. They assume some working knowledge of the general ML workflow and of packaging ML models to run on AI Inference Server, which can be acquired by studying the End-to-End tutorials.
These guides answer the following questions:
- How to set up your environment to be able to run the tutorials?
- How to define pipeline components?
- How to create entrypoints through which AI Inference Server can feed the model with data? (A minimal entrypoint sketch follows this list.)
- How to use variables in pipelines?
- How to add and handle Python dependencies?
- How to handle file resources?
- How to return the result of the ML model execution?
- How to create metrics from processing results and add them to your pipeline output?
- How to use pipeline parameters?
- How to write components for older versions of AI Inference Server?
- How to process time series signals?
- How to process images?
- How to use TensorFlow instead of TensorFlow Lite?
- How to version packages and use Package ID?
- How to mock the AI Inference Server logger locally?
- How to package models into an inference pipeline?
- How to test a pipeline configuration package locally?
- How to convert and deploy the packaged inference pipeline to AI@Edge?
- How to create delta packages?
- How to configure the GPU Runtime component?
- How to use Azure MLOps?
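
Several of these guides revolve around the same entrypoint contract between your code and AI Inference Server. As a first taste, here is a minimal sketch of such an entrypoint, assuming the `process_input` convention described in the entrypoint guide; the payload keys `input_signal` and `prediction` are illustrative assumptions rather than fixed names.

```python
# entrypoint.py - a minimal sketch of a pipeline component entrypoint.
# Assumes the `process_input` convention covered in the entrypoint guide;
# the payload keys below are illustrative, not mandated names.

def process_input(data: dict) -> dict:
    """Receive one payload from AI Inference Server and return the result."""
    signal = data["input_signal"]        # hypothetical input variable of the component
    prediction = 2 * signal              # stand-in for a real model inference call
    return {"prediction": prediction}    # keys must match the component's output variables
```

The individual guides expand this skeleton step by step, from handling dependencies and file resources to packaging, local testing, and deployment.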