This repository contains reusable, cross-platform automation recipes for DevOps, MLOps, and MLPerf, driven by a simple, human-readable Collective Mind (CM) interface that adapts to different operating systems, software, and hardware.
All CM scripts have a simple Python API, an extensible JSON/YAML meta description, and unified input/output, making them reusable in different projects either individually or chained together into portable automation workflows, applications, and web services that adapt to continuously changing models, data sets, software, and hardware.
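The unified input mentioned above can be sketched as a plain dictionary passed to CM's Python API. This is an illustrative helper, not part of the CM codebase; the `action`/`automation`/`tags` keys follow CM's documented calling convention, and `cmind` is the package installed with CM.

```python
# Sketch of building the unified input that CM scripts accept.
# Illustrative helper only -- not part of the CM API itself.
def make_cm_input(tags, **kwargs):
    """Build a CM-style input dict for running a script by tags."""
    inp = {"action": "run", "automation": "script", "tags": tags}
    inp.update(kwargs)  # extra CLI-style flags become input keys
    return inp

inp = make_cm_input("detect,os", quiet=True)
# With CM installed, such a dict can be passed directly:
#   import cmind
#   result = cmind.access(make_cm_input("detect,os"))
#   if result["return"] > 0:
#       print(result["error"])
```

The same dict works for any script because every CM script shares this unified input/output contract.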
We develop and test CM scripts as a community effort to support the following projects:
- CM for MLPerf: modularize and automate MLPerf benchmarks;
- CM for research and education: provide a common interface to automate and reproduce results from research papers and MLPerf benchmarks;
- CM for ABTF: provide a unified CM interface to run automotive benchmarks;
- CM for optimization: co-design efficient and cost-effective software and hardware for AI, ML and other emerging workloads via open challenges.
You can read this arXiv paper to learn more about the CM motivation and long-term vision.
Please provide your feedback or submit your issues here.
Online catalog: cKnowledge, MLCommons.
Please use this BibTeX file to cite this project.
Install the MLCommons CM automation language.
cm pull repo mlcommons@cm4mlops --branch=dev
cm run script "python app image-classification onnx _cpu" --help
cm run script "download file _wget" --url=https://cKnowledge.org/ai/data/computer_mouse.jpg --verify=no --env.CM_DOWNLOAD_CHECKSUM=45ae5c940233892c2f860efdf0b66e7e
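The `CM_DOWNLOAD_CHECKSUM` value above is an MD5 digest used to verify the downloaded file. A minimal sketch of that kind of check, using only Python's standard `hashlib` (the payload and digest below are illustrative, not recomputed from the file above):

```python
# Sketch of an MD5 checksum verification, as performed after a download.
import hashlib

def md5_matches(data: bytes, expected_hex: str) -> bool:
    """Return True if the MD5 digest of `data` equals `expected_hex`."""
    return hashlib.md5(data).hexdigest() == expected_hex.lower()

payload = b"example payload"           # stand-in for downloaded bytes
digest = hashlib.md5(payload).hexdigest()
ok = md5_matches(payload, digest)      # True for a matching checksum
```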
cm run script "python app image-classification onnx _cpu" --input=computer_mouse.jpg
cmr "python app image-classification onnx _cpu" --input=computer_mouse.jpg
cmr --tags=python,app,image-classification,onnx,_cpu --input=computer_mouse.jpg
cmr 3d5e908e472b417e --input=computer_mouse.jpg
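The three `cmr` invocations above are equivalent: the quoted, space-separated tag string is treated as the same comma-separated tag list passed to `--tags`, and the hexadecimal value is the script's unique ID. The tag-string equivalence can be sketched with a trivial helper (illustrative only, not CM's actual parser):

```python
# Sketch: space-separated tag strings map to the --tags comma form.
def to_tags(quoted: str) -> str:
    """Convert 'a b c' (quoted CLI form) to 'a,b,c' (--tags form)."""
    return ",".join(quoted.split())

tags = to_tags("python app image-classification onnx _cpu")
# tags == "python,app,image-classification,onnx,_cpu"
```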
cm docker script "python app image-classification onnx _cpu" --input=computer_mouse.jpg
cm gui script "python app image-classification onnx _cpu"
See the script reproduce-ieee-acm-micro2023-paper-96 for an example.
cm run script --tags=run-mlperf,inference,_performance-only,_short \
--division=open \
--category=edge \
--device=cpu \
--model=resnet50 \
--precision=float32 \
--implementation=mlcommons-python \
--backend=onnxruntime \
--scenario=Offline \
--execution_mode=test \
--power=no \
--adr.python.version_min=3.8 \
--clean \
--compliance=no \
--quiet \
--time
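Each `--key=value` flag in the command above becomes a key in the script's unified input. A minimal sketch of that mapping (an illustrative parser, not CM's actual flag-handling code):

```python
# Sketch: map CLI-style --key=value flags onto a unified input dict.
# Flags without a value (e.g. --quiet, --clean) become boolean True.
def parse_flags(argv):
    out = {}
    for arg in argv:
        if arg.startswith("--"):
            key, _, value = arg[2:].partition("=")
            out[key] = value if value else True
    return out

flags = parse_flags(["--device=cpu", "--model=resnet50", "--quiet"])
# flags == {"device": "cpu", "model": "resnet50", "quiet": True}
```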
cmr "run-mlperf inference _find-performance _full _r4.1" \
--model=bert-99 \
--implementation=nvidia \
--framework=tensorrt \
--category=datacenter \
--scenario=Offline \
--execution_mode=test \
--device=cuda \
--docker \
--docker_cm_repo=mlcommons@cm4mlops \
--docker_cm_repo_flags="--branch=mlperf-inference" \
--test_query_count=100 \
--quiet
cm run script \
--tags=run-mlperf,inference,_r4.1 \
--model=sdxl \
--implementation=reference \
--framework=pytorch \
--category=datacenter \
--scenario=Offline \
--execution_mode=valid \
--device=cuda \
--quiet
Grigori Fursin and Arjun Suresh
Arjun Suresh, Anandhu S, Grigori Fursin
We thank cKnowledge.org, the cTuning Foundation, and MLCommons for sponsoring this project!
We thank all volunteers, collaborators and contributors for their support, fruitful discussions, and useful feedback!