Awesome Production Machine Learning

This repository contains a curated list of awesome open source libraries that will help you deploy, monitor, version, scale, and secure your production machine learning 🚀

You can keep up to date by watching this GitHub repo to get a summary of the new production ML libraries added every month via releases 🤩

Additionally, we provide a search toolkit that helps you quickly navigate through the toolchain.

Quick links to sections on this page

⚔ Adversarial Robustness 🤖 Agentic Workflow 🔧 AutoML
🗺️ Computation Load Distribution 🏷️ Data Labelling & Synthesis 🧵 Data Pipeline
📓 Data Science Notebook 💾 Data Storage Optimisation 💸 Data Stream Processing
💪 Deployment & Serving 📈 Evaluation & Monitoring 🔍 Explainability & Fairness
🎁 Feature Store 🔴 Industry-strength Anomaly Detection 👁️ Industry-strength Computer Vision
🔠 Industry-strength Natural Language Processing 🙌 Industry-strength Recommender System 🍕 Industry-strength Reinforcement Learning
📊 Industry-strength Visualisation 📅 Metadata Management 📜 Model, Data & Experiment Tracking
🔩 Model Storage Optimisation 🔥 Neural Search & Retrieval 🧮 Optimized Computation
🔏 Privacy & Security 🏁 Training Orchestration

Contributing to the list

Please review our CONTRIBUTING.md requirements when submitting a PR to help us keep the list clean and up-to-date - thank you to the community for supporting its steady growth 🚀

Star History Chart

10 Min Video Overview

This 10 minute video provides an overview of the motivations for machine learning operations, as well as a high-level overview of some of the tools in this repo. This newer video covers an updated 2024 view of the state of MLOps.

Want to receive recurrent updates on this repo and other advancements?

You can join the Machine Learning Engineer newsletter. Join over 10,000 ML professionals and enthusiasts who receive weekly curated articles & tutorials on production Machine Learning.
Also check out the Awesome Artificial Intelligence Regulation List, where we aim to map the landscape of "Frameworks", "Codes of Ethics", "Guidelines", "Regulations", etc. related to Artificial Intelligence.

Main Content

Adversarial Robustness

  • AdvBox - A toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras and TensorFlow; AdvBox can also benchmark the robustness of machine learning models.
  • Adversarial DNN Playground - think TensorFlow Playground, but for Adversarial Examples! A visualization tool designed for learning and teaching - the attack library is limited in size, but it has a nice front-end to it with buttons you can press!
  • AdverTorch - library for adversarial attacks / defenses specifically for PyTorch.
  • ART - ART (Adversarial Robustness Toolbox) provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference.
  • Artificial Adversary - Airbnb's library to generate text that reads the same to a human but passes adversarial classifiers.
  • Counterfit - Counterfit is a command-line tool and generic automation layer for assessing the security of machine learning systems.
  • Factool - Factool is a tool augmented framework for detecting factual errors of texts generated by large language models.
  • Foolbox - Foolbox is a Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX.
  • MIA - A library for running membership inference attacks (MIA) against machine learning models.
  • NeMo Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
  • OpenAttack - OpenAttack is a Python-based textual adversarial attack toolkit, which handles the whole process of textual adversarial attacking, including preprocessing text, accessing the victim model, generating adversarial examples and evaluation.
  • RobustBench - A robustness resource maintained by some of the leading names in adversarial ML, focusing specifically on defenses and offering a standardized adversarial robustness benchmark.
  • TextAttack - TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
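
As a rough, hedged illustration of how these attack toolkits are typically driven, below is a minimal PGD evasion sketch using Foolbox from the list above (assuming Foolbox 3.x and a recent torchvision; helper names such as fb.utils.samples and the exact return values can differ between versions):

    import torch
    import torchvision
    import foolbox as fb

    # Wrap a pretrained classifier so Foolbox knows its input bounds and preprocessing
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
    preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
    fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

    # A small batch of bundled sample images, then an L-inf PGD attack at two budgets
    images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)
    attack = fb.attacks.LinfPGD()
    _, _, success = attack(fmodel, images, labels, epsilons=[0.01, 0.03])
    print("attack success rate per epsilon:", success.float().mean(dim=-1))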

Agentic Workflow

  • Agents - Agents allows users to build AI-driven server programs that can see, hear, and speak in realtime.
  • AgentScope - AgentScope is a multi-agent platform designed to empower developers to build multi-agent applications with large-scale models.
  • AutoGen - AutoGen is an open-source framework for building AI agent systems.
  • Chidori - Chidori is a reactive runtime that supports building robust AI agents using languages like Node.js, Python, and Rust, with a focus on reactivity and observability in agent workflows.
  • CrewAI - CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents.
  • LangGraph - LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows.
  • Modelscope-Agent - Modelscope-Agent is a customizable and scalable agent framework.
  • OpenAGI - OpenAGI is used as the agent creation package to build agents for AIOS.
  • Swarm - Swarm is an educational framework exploring ergonomic, lightweight multi-agent orchestration.
  • Swarms - Swarms is an enterprise grade and production ready multi-agent collaboration framework that enables you to orchestrate many agents to work collaboratively at scale to automate real-world activities.
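
For orientation, here is a hedged, minimal sketch of a single-node graph built with LangGraph from the list above (assuming a recent langgraph release; the node function is a stand-in for a real LLM or tool call, and the StateGraph API has evolved between versions):

    from typing import TypedDict
    from langgraph.graph import StateGraph, END

    class State(TypedDict):
        question: str
        answer: str

    def answer_node(state: State) -> dict:
        # Placeholder for an LLM or tool call; returns a partial state update
        return {"answer": f"You asked: {state['question']}"}

    graph = StateGraph(State)
    graph.add_node("answer", answer_node)
    graph.set_entry_point("answer")
    graph.add_edge("answer", END)
    app = graph.compile()
    print(app.invoke({"question": "What does an agentic workflow look like?"}))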

AutoML

  • AutoGluon - Automated feature, model, and hyperparameter selection for tabular, image, and text data on top of popular machine learning libraries (Scikit-Learn, LightGBM, CatBoost, PyTorch, MXNet).
  • Autokeras - AutoML library for Keras based on "Auto-Keras: Efficient Neural Architecture Search with Network Morphism".
  • auto-sklearn - Framework to automate algorithm and hyperparameter tuning for sklearn.
  • Feature Engine - Feature-engine is a Python library that contains several transformers to engineer features for use in machine learning models.
  • Featuretools - An open source framework for automated feature engineering.
  • FLAML - FLAML is a fast library for automated machine learning & tuning.
  • go-featureprocessing - A feature pre-processing framework in Go that matches functionality of sklearn.
  • HEBO - Set of open-source hyperparameter optimization frameworks, including the winning submission to the NeurIPS 2020 Black-Box Optimisation Challenge tested on hyperparameter tuning tasks.
  • Katib - A Kubernetes-based system for Hyperparameter Tuning and Neural Architecture Search.
  • keras-tuner - Keras Tuner is an easy-to-use, distributable hyperparameter optimisation framework that solves the pain points of performing a hyperparameter search. Keras Tuner makes it easy to define a search space and leverage included algorithms to find the best hyperparameter values.
  • Neural Architecture Search with Controller RNN - Basic implementation of Controller RNN from Neural Architecture Search with Reinforcement Learning and Learning Transferable Architectures for Scalable Image Recognition.
  • Neural Network Intelligence - NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning (AutoML) experiments.
  • Optuna - Optuna is an automatic hyperparameter optimisation software framework, particularly designed for machine learning.
  • OSS Vizier - OSS Vizier is a Python-based service for black-box optimisation and research, one of the first hyperparameter tuning services designed to work at scale.
  • sklearn-deap - Use evolutionary algorithms instead of gridsearch in scikit-learn.
  • TPOT - Automation of sklearn pipeline creation (including feature selection, pre-processor, etc.).
  • tsfresh - Automatic extraction of relevant features from time series.
  • Upgini - Free automated data & feature enrichment library for machine learning: automatically searches through thousands of ready-to-use features from public and community shared data sources and enriches your training dataset with only the accuracy improving features.
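
As a concrete example of the hyperparameter-search style shared by several of the libraries above, here is a minimal Optuna sketch (the search space and model are illustrative only):

    import optuna
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)

    def objective(trial):
        # Sample hyperparameters from the search space for this trial
        n_estimators = trial.suggest_int("n_estimators", 10, 200)
        max_depth = trial.suggest_int("max_depth", 2, 16)
        clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
        return cross_val_score(clf, X, y, cv=3).mean()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=20)
    print(study.best_params, study.best_value)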

Computation Load Distribution

  • Apache Beam - Apache Beam is a unified programming model for Batch and Streaming.
  • Bagua - Bagua is a performant and flexible distributed training framework for PyTorch, providing a faster alternative to PyTorch DDP and Horovod. It supports advanced distributed training algorithms such as quantization and decentralization.
  • Colossal-AI - A unified deep learning system for big model era, which helps users to efficiently and quickly deploy large AI model training and inference.
  • Dask - Distributed parallel processing framework for Pandas and NumPy computations - (Video).
  • DEAP - A novel evolutionary computation framework for rapid prototyping and testing of ideas. It seeks to make algorithms explicit and data structures transparent. It works in perfect harmony with parallelisation mechanisms such as multiprocessing and SCOOP.
  • DeepSpeed - A deep learning optimization library (lightweight PyTorch wrapper) that makes distributed training easy, efficient, and effective.
  • DLRover - DLRover makes the distributed training of large AI models easy, stable, fast and green.
  • einops - Flexible and powerful tensor operations for readable and reliable code.
  • Fiber - Distributed computing library for modern computer clusters from Uber.
  • Flashlight - A fast, flexible machine learning library written entirely in C++ from Facebook AI Research and the creators of Torch, TensorFlow, Eigen and Deep Speech.
  • Hivemind - Decentralized deep learning in PyTorch.
  • Horovod - Uber's distributed training framework for TensorFlow, Keras, and PyTorch.
  • Liger Kernel - Liger Kernel is a collection of Triton kernels designed specifically for LLM training.
  • LightGBM - LightGBM is a gradient boosting framework that uses tree based learning algorithms.
  • PaddlePaddle - PaddlePaddle is a framework to perform large-scale deep network training, using data sources distributed across hundreds of nodes.
  • PyTorch Lightning - PyTorch Lightning pretrains, finetunes and deploys AI models on multiple GPUs, TPUs with zero code changes.
  • PyWren - Answer the question of the "cloud button" for python function execution. It's a framework that abstracts AWS Lambda to enable data scientists to execute any Python function - (Video).
  • Ray - Ray is a flexible, high-performance distributed execution framework for machine learning (VIDEO).
  • TensorFlowOnSpark - TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters.
  • Vespa - Vespa is an engine for low-latency computation over large data sets.
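
To give a flavour of how these frameworks distribute work, here is a minimal Ray sketch that fans a function out across local workers (a toy example; on a real cluster you would pass an address to ray.init):

    import ray

    ray.init()  # starts a local cluster; connect to an existing one via ray.init(address="auto")

    @ray.remote
    def square(x):
        # Each call becomes a task scheduled on an available worker
        return x * x

    futures = [square.remote(i) for i in range(8)]
    print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]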

Data Labelling and Synthesis

  • Argilla - Argilla helps domain experts and data teams to build better NLP datasets in less time.
  • Baal - Baal is an active learning library that supports both industrial applications and research usecases.
  • brat rapid annotation tool - Web-based text annotation tool for Named Entity Recognition tasks.
  • cleanlab - Python library for data-centric AI. Can automatically: find mislabeled data, detect outliers, estimate consensus + annotator-quality for multi-annotator datasets, suggest which data is best to (re)label next.
  • COCO Annotator - Web-based image segmentation tool for object detection, localization and keypoints
  • CVAT - CVAT (Computer Vision Annotation Tool) is OpenCV's web-based annotation tool for both videos and images for computer vision algorithms.
  • Doccano - Open source text annotation tools for humans, providing functionality for sentiment analysis, named entity recognition, and machine translation.
  • Gretel Synthetics - Gretel Synthetics is a synthetic data generator for structured and unstructured text, featuring differentially private learning.
  • ImageTagger - Image labelling tool with support for collaboration, supporting bounding box, polygon, line, point labelling, label export, etc.
  • ImgLab - Image annotation tool for bounding boxes with auto-suggestion and extensibility for plugins.
  • Label Studio - Multi-domain data labeling and annotation tool with standardized output format.
  • makesense.ai - Free to use online tool for labelling photos. Prepared labels can be downloaded in one of multiple supported formats.
  • MedTagger - A collaborative framework for annotating medical datasets using crowdsourcing.
  • modAL - modAL is an active learning framework designed with modularity, flexibility and extensibility in mind.
  • NeMo Curator - NeMo Curator is a GPU-accelerated framework for efficient large language model data curation.
  • OpenLabeling - Open source tool for labelling images with support for labels, edges, as well as image resizing and zooming in.
  • PixelAnnotationTool - Image annotation tool with the ability to "colour" on the images to select labels for segmentation. The process is semi-automated with OpenCV's marker-based watershed algorithm.
  • refinery - The data scientist's open-source choice to scale, assess and maintain natural language data.
  • Rubrix - Open-source tool for tracking, exploring, and labeling data for AI projects.
  • SDV - Synthetic Data Vault (SDV) is a synthetic data generation ecosystem of libraries that lets users learn from single-table, multi-table and time-series datasets and then generate new synthetic data with the same format and statistical properties as the original dataset.
  • Semantic Segmentation Editor - Hitachi's Open source tool for labelling camera and LIDAR data.
  • Snorkel - Snorkel is a system for quickly generating training data with weak supervision.
  • Superintendent - superintendent provides an ipywidget-based interactive labelling tool for your data.
  • YData Synthetic - YData Synthetic is a package to generate synthetic tabular and time-series data leveraging the state of the art generative models.
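
As one example of programmatic label-quality checking with the tools above, here is a hedged cleanlab sketch that flags likely mislabeled examples from out-of-sample predicted probabilities (synthetic data for illustration; assumes cleanlab 2.x):

    from cleanlab.filter import find_label_issues
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    X, labels = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=0)

    # Out-of-sample predicted probabilities are required for honest label-issue scores
    pred_probs = cross_val_predict(
        LogisticRegression(max_iter=1000), X, labels, cv=5, method="predict_proba"
    )
    issue_idx = find_label_issues(labels=labels, pred_probs=pred_probs,
                                  return_indices_ranked_by="self_confidence")
    print(f"{len(issue_idx)} examples look mislabeled")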

Data Pipeline

  • Apache Airflow - Data Pipeline framework built in Python, including scheduler, DAG definition and a UI for visualisation.
  • Apache Nifi - Apache NiFi was made for dataflow. It supports highly configurable directed graphs of data routing, transformation, and system mediation logic.
  • Apache Oozie - Workflow scheduler for Hadoop jobs.
  • Argo Workflows - Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).
  • Azkaban - Azkaban is a batch workflow job scheduler created at LinkedIn to run Hadoop jobs. Azkaban resolves the ordering through job dependencies and provides an easy to use web user interface to maintain and track your workflows.
  • BatchFlow - BatchFlow helps data scientists conveniently work with random or sequential batches of your data and define data processing and machine learning workflows for large datasets.
  • Bonobo - ETL framework for Python 3.5+ with focus on simple atomic operations working concurrently on rows of data.
  • Chronos - More of a job scheduler for Mesos than ETL pipeline.
  • Couler - Unified interface for constructing and managing machine learning workflows on different workflow engines, such as Argo Workflows, Tekton Pipelines, and Apache Airflow.
  • DataTrove - DataTrove is a library to process, filter and deduplicate text data at a very large scale.
  • D6tflow - A Python library for building complex data science workflows.
  • DALL·E Flow - DALL·E Flow is an interactive workflow for generating high-definition images from text prompt.
  • Dagster - A data orchestrator for machine learning, analytics, and ETL.
  • DBND - DBND is an agile pipeline framework that helps data engineering teams track and orchestrate their data processes.
  • DBT - ETL tool for running transformations inside data warehouses.
  • Flyte - Lyft’s Cloud Native Machine Learning and Data Processing Platform - (Demo).
  • Genie - Job orchestration engine to interface and trigger the execution of jobs from Hadoop-based systems.
  • Gokart - Wrapper of the data pipeline Luigi.
  • Hamilton - Hamilton is a micro-orchestration framework for defining dataflows. Runs anywhere python runs (e.g. jupyter, fastAPI, spark, ray, dask). Brings software engineering best practices without you knowing it. Use it to define feature engineering transforms, end-to-end model pipelines, and LLM workflows. It complements macro-orchestration systems (e.g. kedro, luigi, airflow, dbt, etc.) as it replaces the code within those macro tasks. Comes with a self-hostable UI that captures lineage & provenance, execution telemetry & data summaries, and builds a self-populating catalog; usable in development as well as production.
  • Instill VDP - Instill VDP (Versatile Data Pipeline) aims to streamline the data processing pipelines from inception to completion.
  • Instructor - Instructor makes it easy to get structured data like JSON from LLMs like GPT-3.5, GPT-4, GPT-4-Vision, and open-source models.
  • Kedro - Kedro is a workflow development tool that helps you build data pipelines that are robust, scalable, deployable, reproducible and versioned.
  • Luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs, handling dependency resolution, workflow management, visualisation, etc.
  • Metaflow - A framework for data scientists to easily build and manage real-life data science projects.
  • Neuraxle - A framework for building neat pipelines, providing the right abstractions to chain your data transformation and prediction steps with data streaming, as well as doing hyperparameter searches (AutoML).
  • Pachyderm - Open source distributed processing framework built on Kubernetes, focused mainly on dynamic building of production machine learning pipelines - (Video).
  • PipelineX - Based on Kedro and MLflow. Full comparison is found here.
  • Ploomber - The fastest way to build data pipelines. Develop iteratively, deploy anywhere.
  • Prefect Core - Workflow management system that makes it easy to take your data pipelines and add semantics like retries, logging, dynamic mapping, caching, failure notifications, and more.
  • Snakemake - Workflow management system for reproducible and scalable data analyses.
  • Sycamore - Sycamore is an open source, AI-powered document processing engine for ETL, RAG, LLM-based applications, and analytics on unstructured data.
  • Towhee - General-purpose machine learning pipeline for generating embedding vectors using one or many ML models.
  • unstructured - unstructured streamlines and optimizes the data processing workflow for LLMs, ingesting and pre-processing images and text documents, such as PDFs, HTML, Word docs, and many more.
  • ZenML - ZenML is an extensible, open-source MLOps framework to create reproducible ML pipelines with a focus on automated metadata tracking, caching, and many integrations to other tools.
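
As a minimal illustration of the DAG-style pipelines most of these orchestrators express, here is a hedged Airflow TaskFlow sketch (assuming Airflow 2.x; the schedule argument name differs across minor versions):

    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def example_pipeline():
        @task
        def extract():
            return [1, 2, 3]

        @task
        def transform(values):
            return [v * 2 for v in values]

        @task
        def load(values):
            print(values)

        # Dependencies are inferred from the data flow: extract -> transform -> load
        load(transform(extract()))

    example_pipeline()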

Data Science Notebook

  • Apache Zeppelin - Web-based notebook that enables data-driven, interactive data analytics and collaborative documents with SQL, Scala and more.
  • H2O Flow - Jupyter notebook-like interface for H2O to create, save and re-use "flows".
  • Jupyter Notebooks - Web-based Python sandbox environments for reproducible development.
  • ML Workspace - All-in-one web IDE for machine learning and data science. Combines Jupyter, VS Code, Tensorflow, and many other tools/libraries into one Docker image.
  • .NET Interactive - .NET Interactive takes the power of .NET and embeds it into your interactive experiences.
  • Papermill - Papermill is a library for parameterizing notebooks and executing them like Python scripts.
  • Polynote - Polynote is an experimental polyglot notebook environment. Currently, it supports Scala and Python (with or without Spark), SQL, and Vega.
  • RMarkdown - The rmarkdown package is a next generation implementation of R Markdown based on Pandoc.
  • Stencila - Stencila is a platform for creating, collaborating on, and sharing data driven content. Content that is transparent and reproducible.
  • Voilà - Voilà turns Jupyter notebooks into standalone web applications that can e.g. be used as dashboards.
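
For example, notebooks from any of the environments above can be executed headlessly and parameterised with Papermill, roughly as follows (file names and parameters are illustrative):

    import papermill as pm

    # Runs the notebook end to end, injecting values into the cell tagged "parameters"
    pm.execute_notebook(
        "train.ipynb",           # hypothetical input notebook
        "train_output.ipynb",    # executed copy with outputs captured
        parameters={"learning_rate": 0.01, "epochs": 5},
    )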

Data Storage Optimisation

  • AIStore - AIStore is a lightweight object storage system with the capability to linearly scale out with each added storage node and a special focus on petascale deep learning.
  • Alluxio - A virtual distributed storage system that bridges the gap between computation frameworks and storage systems.
  • Apache Arrow - In-memory columnar representation of data compatible with Pandas, Hadoop-based systems, etc.
  • Apache Druid - A high performance real-time analytics database. Check this article for introduction.
  • Apache Hudi - Hudi is a transactional data lake platform that brings core warehouse and database functionality directly to a data lake. Hudi is great for streaming workloads, and also allows creation of efficient incremental batch pipelines. Supports popular query engines including Spark, Flink, Presto, Trino, Hive, etc. More info here.
  • Apache Iceberg - Iceberg is an ACID-compliant, high-performance format built for huge analytic tables (containing tens of petabytes of data), and it brings the reliability and simplicity of SQL tables to big data, while making it possible for engines like Spark, Trino, Flink, Presto, Hive and Impala to safely work with the same tables, at the same time. More info here.
  • Apache Ignite - A memory-centric distributed database, caching, and processing platform for transactional, analytical, and streaming workloads delivering in-memory speeds at petabyte scale - Demo.
  • Apache Parquet - On-disk columnar representation of data compatible with Pandas, Hadoop-based systems, etc.
  • Apache Pinot - A realtime distributed OLAP datastore. Comparison of the open source OLAP systems for big data: ClickHouse, Druid, and Pinot is found here.
  • BayesDB - A Bayesian database table for querying the probable implications of data as easily as SQL databases query the data itself. - (Video)
  • Casibase - Casibase is a LangChain-like RAG (Retrieval-Augmented Generation) knowledge database with web UI and Enterprise SSO.
  • Chroma - Chroma is an AI-native embedding database.
  • ClickHouse - ClickHouse is an open source column oriented database management system.
  • Delta Lake - Delta Lake is a storage layer that brings scalable, ACID transactions to Apache Spark and other big-data engines.
  • EdgeDB - NoSQL interface for Postgres that allows for object interaction to data stored.
  • GPTCache - GPTCache is a library for creating semantic cache for large language model queries.
  • HopsFS - HDFS-compatible file system with scale-out strongly consistent metadata.
  • InfluxDB - Scalable datastore for metrics, events, and real-time analytics.
  • Marqo - Marqo is an end-to-end vector search engine.
  • Milvus - Milvus is a cloud-native, open-source vector database built to manage embedding vectors generated by machine learning models and neural networks.
  • pgvector - pgvector helps with vector similarity search for Postgres.
  • PostgresML - PostgresML is a machine learning extension for PostgreSQL that enables you to perform training and inference on text and tabular data using SQL queries.
  • Safetensors - Simple, safe way to store and distribute tensors.
  • TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries packaged as a PostgreSQL extension - (Video).
  • Weaviate - A low-latency vector search engine (GraphQL, RESTful) with out-of-the-box support for different media types. Modules include Semantic Search, Q&A, Classification, Customizable Models (PyTorch/TensorFlow/Keras), and more.
  • Zarr - Python implementation of chunked, compressed, N-dimensional arrays designed for use in parallel computing.
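
As a small example of the columnar-storage pattern many of these projects build on, here is a Pandas/PyArrow sketch writing and selectively reading Parquet (requires pyarrow or fastparquet to be installed):

    import pandas as pd

    df = pd.DataFrame({"user_id": [1, 2, 3], "score": [0.2, 0.5, 0.9]})
    df.to_parquet("scores.parquet", compression="snappy")   # columnar, compressed on disk

    # Column pruning: only the requested column is read back from disk
    subset = pd.read_parquet("scores.parquet", columns=["score"])
    print(subset)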

Data Stream Processing

  • Apache Flink - Open source stream processing framework with powerful stream and batch processing capabilities.
  • Apache Kafka - Kafka client library for building applications and microservices where the input and output are stored in Kafka clusters.
  • Apache Samza - Distributed stream processing framework. It uses Apache Kafka for messaging, and Apache Hadoop YARN to provide fault tolerance, processor isolation, security, and resource management.
  • Apache Spark - Micro-batch processing for streams using the apache spark framework as a backend supporting stateful exactly-once semantics.
  • Brooklin - LinkedIn's distributed service for streaming data in near real-time between heterogeneous source and destination systems with high reliability and throughput at scale.
  • Bytewax - Flexible Python-centric stateful stream processing framework built on top of Rust engine.
  • FastStream - A modern broker-agnostic streaming Python framework supporting Apache Kafka, RabbitMQ and NATS protocols, inspired by FastAPI and easily integratable with other web frameworks.
  • Faust - Streaming library built on top of Python's asyncio, using an async Kafka client and inspired by the Kafka Streams library.
  • TensorStore - Library for reading and writing large multi-dimensional arrays.
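
As a short illustration of the stream-processing style above, here is a hedged Faust sketch that consumes typed events from a Kafka topic (broker address and topic are illustrative; run it with the faust worker CLI):

    import faust

    app = faust.App("orders-app", broker="kafka://localhost:9092")

    class Order(faust.Record):
        account_id: str
        amount: float

    orders_topic = app.topic("orders", value_type=Order)

    @app.agent(orders_topic)
    async def process(orders):
        # Each message is deserialised into an Order record as it streams in
        async for order in orders:
            print(order.account_id, order.amount)

    # Start with: faust -A this_module worker -l info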

Deployment and Serving

  • AirLLM - AirLLM optimizes inference memory usage, allowing 70B large language models to run inference on a single 4GB GPU card without quantization, distillation or pruning.
  • Apache PredictionIO - An open source Machine Learning Server built on top of a state-of-the-art open source stack for developers and data scientists to create predictive engines for any machine learning task.
  • Backprop - Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
  • BentoML - BentoML is an open source framework for high performance ML model serving.
  • Cortex - Cortex is an open source platform for deploying machine learning models—trained with any framework—as production web services. No DevOps required.
  • DeepDetect - Machine Learning production server for TensorFlow, XGBoost and Caffe models written in C++ and maintained by Jolibrain.
  • DeepSparse - DeepSparse is a sparsity-aware deep learning inference runtime for CPUs.
  • exo - exo helps you run your AI cluster at home with everyday devices.
  • Hydrosphere Serving - Hydrosphere Serving is a cluster for deploying and versioning your machine learning models in production.
  • Intel® Extension for Transformers - An Innovative Transformer-based Toolkit to Accelerate GenAI/LLM Everywhere.
  • Inference - A fast, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models. With Inference, you can deploy models such as YOLOv5, YOLOv8, CLIP, SAM, and CogVLM on your own hardware using Docker.
  • Infinity - Infinity is a high-throughput, low-latency REST API for serving text embeddings, reranking models and CLIP.
  • IPEX-LLM - IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency.
  • Jina - Jina builds multimodal AI services and pipelines that communicate via gRPC, HTTP, and WebSockets, then scales them up and deploys to production.
  • KsanaLLM - KsanaLLM is a high performance and easy-to-use engine for LLM inference and serving.
  • KServe - KServe provides a Kubernetes Custom Resource Definition for serving predictive and generative ML.
  • KTransformers - KTransformers is a flexible framework for experiencing cutting-edge LLM inference optimizations.
  • Lepton AI - LeptonAI Python library allows you to build an AI service from Python code with ease.
  • LightLLM - LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
  • LocalAI - LocalAI is a drop-in replacement REST API that's compatible with OpenAI API specifications for local inferencing.
  • m2cgen - A lightweight library which allows to transpile trained classic machine learning models into a native code of C, Java, Go, R, PHP, Dart, Haskell, Rust and many other programming languages.
  • MindsDB - MindsDB is the platform to create, serve, and fine-tune models in real-time from your database, vector store, and application data.
  • MLRun - MLRun is an open MLOps framework for quickly building and managing continuous ML and generative AI applications across their lifecycle.
  • MLServer - An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more.
  • Mosec - A rust-powered and multi-stage pipelined model server which offers dynamic batching and more. Super easy to implement and deploy as micro-services.
  • Nuclio - A high-performance "serverless" framework focused on data, I/O, and compute-intensive workloads. It is well integrated with popular data science tools, such as Jupyter and Kubeflow; supports a variety of data and streaming sources; and supports execution over CPUs and GPUs.
  • OpenDiT - OpenDiT is an open-source project that provides a high-performance implementation of Diffusion Transformer(DiT), specifically designed to enhance the efficiency of training and inference for DiT applications, including text-to-video generation and text-to-image generation.
  • OpenLLM - OpenLLM allows developers to run any open-source LLMs (Llama 3.1, Qwen2, Phi3 and more) or custom models as OpenAI-compatible APIs with a single command.
  • OpenScoring - REST web service for the true real-time scoring (< 1 ms) of Scikit-Learn, R and Apache Spark models.
  • OpenVINO - OpenVINO is an open-source toolkit for optimizing and deploying AI inference.
  • PowerInfer - PowerInfer is a CPU/GPU LLM inference engine leveraging activation locality for your device.
  • Prompt2Model - Prompt2Model is a system that takes a natural language task description (like the prompts used for LLMs such as ChatGPT) to train a small special-purpose model that is conducive for deployment.
  • Redis-AI - A Redis module for serving tensors and executing deep learning models. Expect changes in the API and internals.
  • Seldon Core - Open source platform for deploying and monitoring machine learning models in Kubernetes - (Video).
  • SkyPilot - SkyPilot is a framework for running LLMs, AI, and batch jobs on any cloud, offering maximum cost savings, highest GPU availability, and managed execution.
  • skops - skops is a Python library helping you share your scikit-learn based models and put them in production.
  • SparseML - SparseML is an open-source model optimization toolkit that enables you to create inference-optimized sparse models using pruning, quantization, and distillation algorithms.
  • S-LoRA - Serving Thousands of Concurrent LoRA Adapters.
  • Tempo - Open source SDK that provides a unified interface to multiple MLOps projects that enable data scientists to deploy and productionise machine learning systems.
  • Tensorflow Serving - High-performance framework to serve TensorFlow models via the gRPC protocol, able to handle 100k requests per second per core.
  • text-generation-inference - Large Language Model Text Generation Inference.
  • TorchServe - TorchServe is a flexible and easy to use tool for serving PyTorch models.
  • Triton Inference Server - Triton is a high performance open source serving software to deploy AI models from any framework on GPU & CPU while maximizing utilization.
  • UnionML - UnionML is an open source MLOps framework that aims to reduce the boilerplate and friction that comes with building models and deploying them to production.
  • Vercel AI - Vercel AI is a TypeScript toolkit designed to help you build AI-powered applications using popular frameworks like Next.js, React, Svelte, Vue and runtimes like Node.js.
  • vLLM - vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs.
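
To make the serving-engine category more concrete, here is a minimal offline-inference sketch with vLLM (the model name is just a small example; the same engine also exposes an OpenAI-compatible HTTP server):

    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")   # small model chosen for illustration
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    outputs = llm.generate(["Production machine learning is"], params)
    print(outputs[0].outputs[0].text)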

Evaluation and Monitoring

  • AlpacaEval - AlpacaEval is an automatic evaluator for instruction-following language models.
  • ARES - ARES is a framework for automatically evaluating Retrieval-Augmented Generation (RAG) models.
  • AutoML Benchmark - AutoML Benchmark is a framework for evaluating and comparing open-source AutoML systems.
  • Banana-lyzer - Banana-lyzer is an open-source AI Agent evaluation framework and dataset for web tasks with Playwright.
  • Code Generation LM Evaluation Harness - Code Generation LM Evaluation Harness is a framework for the evaluation of code generation models.
  • continuous-eval - continuous-eval is a framework for data-driven evaluation of LLM-powered applications.
  • Deepchecks - Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to test your data and models from research to production thoroughly.
  • DeepEval - DeepEval is a simple-to-use, open-source evaluation framework for LLM applications.
  • EvalAI - EvalAI is an open-source platform for evaluating and comparing AI algorithms at scale.
  • Evals - Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
  • EvalScope - EvalScope is a streamlined and customizable framework for efficient large model evaluation and performance benchmarking.
  • Evaluate - Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized.
  • Evalverse - Evalverse is a framework to effortlessly evaluate and report LLMs with no-code requests and comprehensive reports.
  • Evidently - Evidently is an open-source framework to evaluate, test and monitor ML and LLM-powered systems.
  • FlagEval - FlagEval is an open-source evaluation toolkit as well as an open platform for evaluation of large models.
  • FMBench - FMBench is a tool for running performance benchmarks for any Foundation Model (FM) deployed on any AWS Generative AI service, be it Amazon SageMaker, Amazon Bedrock, Amazon EKS, or Amazon EC2.
  • Giskard - Giskard is an evaluation & testing framework for LLMs & ML models.
  • HarmBench - HarmBench is a fast and scalable framework for evaluating automated red teaming methods and LLM attacks/defenses.
  • Helicone - Helicone is an observability platform for LLMs.
  • HELM - HELM (Holistic Evaluation of Language Models) provides tools for the holistic evaluation of language models, including standardized datasets, a unified API for various models, diverse metrics, robustness, and fairness perturbations, a prompt construction framework, and a proxy server for unified model access.
  • Inspect - Inspect is a framework for large language model evaluations.
  • InterCode - InterCode is a lightweight, flexible, and easy-to-use framework for designing interactive code environments to evaluate language agents that can code.
  • Langfuse - Langfuse is an observability & analytics solution for LLM-based applications.
  • LangTest - LangTest is a comprehensive evaluation toolkit for NLP models.
  • Language Model Evaluation Harness - Language Model Evaluation Harness is a framework to test generative language models on a large number of different evaluation tasks.
  • LightEval - LightEval is a lightweight LLM evaluation suite.
  • LLMonitor - LLMonitor is an observability & analytics tool for AI apps and agents.
  • LLMPerf - LLMPerf is a tool for evaluating the performance of LLM APIs.
  • LLM AutoEval - LLM AutoEval simplifies the process of evaluating LLMs using a convenient Colab notebook.
  • lmms-eval - lmms-eval is an evaluation framework meticulously crafted for consistent and efficient evaluation of LMMs (large multimodal models).
  • MLPerf Inference - MLPerf Inference is a benchmark suite for measuring how fast systems can run models in a variety of deployment scenarios.
  • mltrace - mltrace is a lightweight, open-source Python tool to get "bolt-on" observability in ML pipelines.
  • MTEB - Massive Text Embedding Benchmark (MTEB) is a comprehensive benchmark of text embeddings.
  • NannyML - NannyML is a library that allows you to estimate post-deployment model performance (without access to targets), detect data drift, and intelligently link data drift alerts back to changes in model performance.
  • OLMo-Eval - OLMo-Eval is an evaluation suite for evaluating open language models.
  • OpenCompass - OpenCompass is an LLM evaluation platform, supporting a wide range of models (LLaMA, LLaMa2, ChatGLM2, ChatGPT, Claude, etc) over 50+ datasets.
  • Opik - Opik is an open-source platform for evaluating, testing and monitoring LLM applications.
  • Optimum-Benchmark - A unified multi-backend utility for benchmarking Transformers and Diffusers with support for Optimum's arsenal of hardware optimizations/quantization schemes.
  • PhaseLLM - PhaseLLM is a large language model evaluation and workflow framework.
  • Phoenix - Phoenix is an open-source AI observability platform designed for experimentation, evaluation, and troubleshooting.
  • PromptBench - PromptBench is a unified evaluation framework for large language models
  • Prometheus-Eval - Prometheus-Eval is a collection of tools for training, evaluating, and using language models specialized in evaluating other language models.
  • Ragas - Ragas is a framework to evaluate RAG pipelines.
  • RAGChecker - RAGChecker is an advanced automatic evaluation framework designed to assess and diagnose Retrieval-Augmented Generation (RAG) systems.
  • Rageval - Rageval is a tool to evaluate RAG systems.
  • RefChecker - RefChecker provides a standardized assessment framework to identify subtle hallucinations present in the outputs of large language models (LLMs).
  • RewardBench - RewardBench is a benchmark designed to evaluate the capabilities and safety of reward models.
  • TensorFlow Model Analysis - TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models on large amounts of data in a distributed manner, using the same metrics defined in their trainer.
  • Tonic Validate - Tonic Validate is a high-performance evaluation framework for LLM/RAG outputs.
  • TruLens - TruLens provides a set of tools for evaluating and tracking LLM experiments.
  • TrustLLM - TrustLLM is a comprehensive framework to evaluate the trustworthiness of large language models, which includes principles, surveys, and benchmarks.
  • UpTrain - UpTrain is an open-source tool for evaluating LLM applications.
  • VBench - VBench is a comprehensive benchmark suite for video generative models.
  • VLMEvalKit - VLMEvalKit is an open-source evaluation toolkit of large vision-language models (LVLMs).
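
As one hedged example of the drift-monitoring workflow several of these tools support, here is an Evidently sketch comparing a current batch against a reference dataset (assuming the Report/DataDriftPreset API of Evidently 0.4.x; the interface has changed across releases, and the CSV paths are placeholders):

    import pandas as pd
    from evidently.report import Report
    from evidently.metric_preset import DataDriftPreset

    reference = pd.read_csv("reference.csv")   # hypothetical training-time snapshot
    current = pd.read_csv("current.csv")       # hypothetical recent production batch

    report = Report(metrics=[DataDriftPreset()])
    report.run(reference_data=reference, current_data=current)
    report.save_html("data_drift_report.html")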

Explainability and Fairness

  • Aequitas - An open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive risk-assessment tools.
  • AI Explainability 360 - Interpretability and explainability of data and machine learning models including a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics.
  • AI Fairness 360 - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
  • Alibi - Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The initial focus on the library is on black-box, instance based model explanations.
  • anchor - Code for the paper "High precision model agnostic explanations", a model-agnostic system that explains the behaviour of complex models with high-precision rules called anchors.
  • captum - model interpretability and understanding library for PyTorch developed by Facebook. It contains general purpose implementations of integrated gradients, saliency maps, smoothgrad, vargrad and others for PyTorch models.
  • DeepLIFT - Codebase that contains the methods in the paper "Learning important features through propagating activation differences". Here are the slides and the video of the 15-minute talk given at ICML.
  • DeepVis Toolbox - This is the code required to run the Deep Visualization Toolbox, as well as to generate the neuron-by-neuron visualizations using regularized optimisation. The toolbox and methods are described casually here and more formally in this paper.
  • ELI5 - "Explain Like I'm 5" is a Python package which helps to debug machine learning classifiers and explain their predictions.
  • FACETS - Facets contains two robust visualizations to aid in understanding and analyzing machine learning datasets. Get a sense of the shape of each feature of your dataset using Facets Overview, or explore individual observations using Facets Dive.
  • Fairlearn - Fairlearn is a python toolkit to assess and mitigate unfairness in machine learning models.
  • FairML - FairML is a Python toolbox for auditing machine learning models for bias.
  • Fairness Comparison - This repository is meant to facilitate the benchmarking of fairness aware machine learning algorithms based on this paper.
  • Fairness Indicators - The tool supports teams in evaluating, improving, and comparing models for fairness concerns in partnership with the broader Tensorflow toolkit.
  • iNNvestigate - An open-source library for analyzing Keras models visually by methods such as DeepTaylor-Decomposition, PatternNet, Saliency Maps, and Integrated Gradients.
  • Integrated-Gradients - This repository provides code for implementing integrated gradients for networks with image inputs.
  • InterpretML - InterpretML is an open-source package for training interpretable models and explaining blackbox systems.
  • keras-vis - keras-vis is a high-level toolkit for visualizing and debugging your trained keras neural net models. Currently supported visualizations include: Activation maximization, Saliency maps, Class activation maps.
  • Lightly - A python framework for self-supervised learning on images. The learned representations can be used to analyze the distribution in unlabeled data and rebalance datasets.
  • Lightwood - A Pytorch based framework that breaks down machine learning problems into smaller blocks that can be glued together seamlessly with an objective to build predictive models with one line of code.
  • LIME - Local Interpretable Model-agnostic Explanations for machine learning models.
  • LOFO Importance - LOFO (Leave One Feature Out) Importance calculates the importances of a set of features based on a metric of choice, for a model of choice, by iteratively removing each feature from the set, and evaluating the performance of the model, with a validation scheme of choice, based on the chosen metric.
  • mljar-supervised - A Python package for AutoML on tabular data with feature engineering, hyper-parameters tuning, explanations and automatic documentation.
  • SHAP - SHapley Additive exPlanations is a unified approach to explain the output of any machine learning model.
  • SHAPash - Shapash is a Python library that provides several types of visualization that display explicit labels that everyone can understand.
  • themis-ml - themis-ml is a Python library built on top of pandas and sklearn that implements fairness-aware machine learning algorithms.
  • Themis - Themis is a testing-based approach for measuring discrimination in a software system.
  • Transformer Debugger - Transformer Debugger (TDB) is a tool developed by OpenAI's Superalignment team with the goal of supporting investigations into specific behaviors of small language models.
  • TreeInterpreter - Package for interpreting scikit-learn's decision tree and random forest predictions. Allows decomposing each prediction into bias and feature contribution components as described here.
  • WhatIf - An easy-to-use interface for expanding understanding of a black-box classification or regression ML model.
  • woe - Tools for WoE (Weight of Evidence) transformation, mostly used in scorecard models for credit rating.
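
As a small worked example of the post-hoc explanation techniques above, here is a SHAP sketch for a tree-based classifier (the dataset and model are illustrative; the beeswarm plot requires a plotting backend such as matplotlib):

    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = GradientBoostingClassifier().fit(X, y)

    explainer = shap.Explainer(model, X)      # selects a suitable explainer (here, tree-based)
    shap_values = explainer(X.iloc[:100])

    shap.plots.beeswarm(shap_values)          # global view of per-feature contributions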

Feature Store

  • Butterfree - A tool for building feature stores which allows you to transform your raw data into beautiful features.
  • FEAST - Feast (Feature Store) is an open source feature store for machine learning. Feast is the fastest path to manage existing infrastructure to productionize analytic data for model training and online inference.
  • Feathr - A scalable, unified data and AI engineering platform for enterprise
  • Featureform - A virtual featurestore. Plug-&-play with your existing infra. Data Scientist approved. Discovery, Governance, Lineage, & Collaboration just a pip install away. Supports pandas, Python, spark, SQL + integrations with major cloud vendors.
  • Hopsworks Feature Store - Offline/Online Feature Store for ML (Video).
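
For a sense of the online-serving API a feature store exposes, here is a hedged Feast sketch (assumes a repository already initialised and applied with the Feast CLI; the feature view and entity names follow the Feast quickstart and are placeholders):

    from feast import FeatureStore

    store = FeatureStore(repo_path=".")   # points at a repo created via `feast init` / `feast apply`

    features = store.get_online_features(
        features=["driver_hourly_stats:conv_rate"],   # placeholder feature reference
        entity_rows=[{"driver_id": 1001}],            # placeholder entity key
    ).to_dict()

    print(features)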

Industry-strength Anomaly Detection

  • adtk - A Python toolkit for rule-based/unsupervised anomaly detection in time series.
  • Alibi Detect - alibi-detect is a Python package focused on outlier, adversarial and concept drift detection.
  • Darts - Darts is a library for user-friendly forecasting and anomaly detection on time series.
  • Deequ - A library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
  • Deep Anomaly Detection with Outlier Exposure - Outlier Exposure (OE) is a method for improving anomaly detection performance in deep learning models. Paper
  • PyOD - A Python Toolbox for Scalable Outlier Detection (Anomaly Detection).
  • SUOD - SUOD (Scalable Unsupervised Outlier Detection) is an acceleration system for large-scale anomaly/outlier detection.
  • TFDV - TFDV (Tensorflow Data Validation) is a library for exploring and validating machine learning data.
  • TODS - TODS is a full-stack automated machine learning system for outlier detection on multivariate time-series data.
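
As a minimal example of the scikit-learn-style API shared by several of the detectors above, here is a PyOD Isolation Forest sketch on synthetic data:

    import numpy as np
    from pyod.models.iforest import IForest

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 3))
    X_test = np.vstack([rng.normal(size=(95, 3)),
                        rng.normal(loc=6.0, size=(5, 3))])   # 5 obvious outliers

    clf = IForest(contamination=0.05)
    clf.fit(X_train)

    print(clf.predict(X_test))             # 0 = inlier, 1 = outlier
    print(clf.decision_function(X_test))   # raw anomaly scores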

Industry-strength Computer Vision

  • Deep Lake - Deep Lake is a data infrastructure optimized for computer vision.
  • Detectron2 - Detectron2 is Facebook AI Research's next generation library that provides state-of-the-art detection and segmentation algorithms.
  • iGibson - iGibson is a simulation environment providing fast visual rendering and physics simulation based on Bullet.
  • JDiffusion - JDiffusion is a diffusion model library for generating images or videos based on Diffusers and Jittor.
  • KerasCV - KerasCV is a library of modular computer vision oriented Keras components.
  • LAVIS - LAVIS is a deep learning library for LAnguage-and-VISion intelligence research and applications.
  • libcom - libcom is an image composition toolbox.
  • MMDetection - MMDetection is an open source object detection toolbox based on PyTorch.
  • SCEPTER - SCEPTER is an open-source code repository dedicated to generative training, fine-tuning, and inference, encompassing a suite of downstream tasks such as image generation, transfer, editing.
  • SuperGradients - SuperGradients is an open-source library for training PyTorch-based computer vision models.
  • supervision - Supervision is a Python library designed for efficient computer vision pipeline management, providing tools for annotation, visualization, and monitoring of models.
  • VideoSys - VideoSys supports many diffusion models with our various acceleration techniques, enabling these models to run faster and consume less memory.
  • VISSL - VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images.
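
To illustrate how these libraries are typically consumed, here is a hedged Detectron2 inference sketch using a model-zoo config (the image path is a placeholder; weights are downloaded on first use and the config name follows the Detectron2 model zoo):

    import cv2
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

    predictor = DefaultPredictor(cfg)
    outputs = predictor(cv2.imread("street.jpg"))   # placeholder local image
    print(outputs["instances"].pred_classes)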

Industry-strength Natural Language Processing

  • aisuite - aisuite is a simple, unified interface to multiple generative AI providers.
  • Align-Anything - Align-Anything aims to align any modality large models (any-to-any models), including LLMs, VLMs, and others, with human intentions and values
  • Blackstone - Blackstone is a spaCy model and library for processing long-form, unstructured legal text. Blackstone is an experimental research project from the Incorporated Council of Law Reporting for England and Wales' research lab, ICLR&D.
  • BERTopic - BERTopic is a topic modeling technique that leverages transformers and c-TF-IDF to create dense clusters allowing for easily interpretable topics whilst keeping important words in the topic descriptions.
  • Burr - Burr helps you develop applications that make decisions (chatbot, agent, simulation). It comes with production-ready features (telemetry, persistence, deployment, etc.) and the open-source, free, and local-first Burr UI.
  • Coqui STT - Coqui STT is a fast, open-source, multi-platform, deep-learning toolkit for training and deploying speech-to-text models.
  • CodeTF - CodeTF is a one-stop Python transformer-based library for code large language models (Code LLMs) and code intelligence, provides a seamless interface for training and inferencing on code intelligence tasks like code summarization, translation, code generation and so on.
  • CTRL - A Conditional Transformer Language Model for Controllable Generation released by SalesForce.
  • dspy - A framework for programming with foundation models.
  • Dust - Dust assists in the design and deployment of large language model apps.
  • ESPnet - ESPnet is an end-to-end speech processing toolkit.
  • Facebook's XLM - PyTorch original implementation of Cross-lingual Language Model Pretraining which includes BERT, XLM, NMT, XNLI, PKM, etc..
  • FastChat - FastChat is an open platform for training, serving, and evaluating large language model based chatbots.
  • Flair - Simple framework for state-of-the-art NLP developed by Zalando which builds directly on PyTorch.
  • FlexGen - FlexGen is a high-throughput generation engine for running large language models with limited GPU memory.
  • Gensim - Gensim is a Python library for topic modelling, document indexing and similarity retrieval with large corpora.
  • GluonNLP - GluonNLP is a toolkit that enables easy text preprocessing, datasets loading and neural models building to help you speed up your Natural Language Processing (NLP) research.
  • Grover - Grover is a model for Neural Fake News -- both generation and detection. However, it probably can also be used for other generation tasks.
  • h2oGPT - h2oGPT is an open source generative AI, gives organizations like yours the power to own large language models while preserving your data ownership.
  • Haystack - Haystack is an open source NLP framework to interact with your data using Transformer models and LLMs (GPT-3 and alike). Haystack offers production-ready tools to quickly build ChatGPT-like question answering, semantic search, text generation, and more.
  • Interactive Composition Explorer - ICE is a Python library and trace visualizer for language model programs.
  • Kashgari - Kashgari is a simple and powerful NLP transfer learning framework that lets you build a state-of-the-art model in 5 minutes for named entity recognition (NER), part-of-speech tagging (PoS), and text classification tasks.
  • Lamini - Lamini is an LLM engine for rapidly customizing models.
  • LangChain - LangChain assists in building applications with LLMs through composability.
  • LlamaIndex - LlamaIndex (GPT Index) is a data framework for your LLM application.
  • LLaMA - LLaMA is intended as a minimal, hackable and readable example to load LLaMA (arXiv) models and run inference.
  • LLaMA2-Accessory - LLaMA2-Accessory is an open-source toolkit for pretraining, finetuning and deployment of Large Language Models (LLMs) and multimodal LLMs.
  • LMFlow - LMFlow is an extensible, convenient, and efficient toolbox for finetuning large machine learning models.
  • Megatron-LM - Megatron-LM is a highly optimized and efficient library for training large language models.
  • MLC LLM - MLC LLM is a universal solution that allows any language models to be deployed natively on a diverse set of hardware backends and native applications, plus a productive framework for everyone to further optimize model performance for their own use cases.
  • Ollama - Get up and running with large language models, locally.
  • PaddleNLP - PaddleNLP is a Large Language Model (LLM) development suite based on the PaddlePaddle deep learning framework, supporting efficient large model training, lossless compression, and high-performance inference on various hardware devices.
  • Semantic Kernel - Semantic Kernel is an SDK that integrates Large Language Models (LLMs) like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages like C#, Python, and Java. Semantic Kernel achieves this by allowing you to define plugins that can be chained together in just a few lines of code.
  • sense2vec - A Pytorch library that allows for training and using sense2vec models, which leverage the same approach as word2vec, but also leverage part-of-speech attributes for each token, which allows them to be "meaning-aware".
  • Sentence Transformers - Sentence Transformers provides an easy method to compute dense vector representations for sentences, paragraphs, and images.
  • SpaCy - spaCy is a library for advanced Natural Language Processing in Python and Cython.
  • SWIFT - SWIFT is a scalable lightweight infrastructure for deep learning model fine-tuning.
  • Tensorflow Lingvo - A framework for building neural networks in Tensorflow, particularly sequence models.
  • Tensorflow Text - TensorFlow Text provides a collection of text related classes and ops ready to use with TensorFlow 2.0.
  • Transformers - Huggingface's library of state-of-the-art pretrained models for Natural Language Processing (NLP).
  • trlX - trlX is a distributed training framework designed from the ground up to focus on fine-tuning large language models with reinforcement learning using either a provided reward function or a reward-labeled dataset.
  • YouTokenToMe - YouTokenToMe is an unsupervised text tokenizer focused on computational efficiency. It currently implements fast Byte Pair Encoding (BPE).
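
As a compact example of the embedding-based workflows many of the libraries above enable, here is a Sentence Transformers sketch for semantic similarity (the model name is the common MiniLM checkpoint, downloaded on first use):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    docs = ["How do I deploy a model?",
            "Serving ML models in production",
            "A recipe for banana bread"]

    query_emb = model.encode("model deployment", convert_to_tensor=True)
    doc_embs = model.encode(docs, convert_to_tensor=True)

    print(util.cos_sim(query_emb, doc_embs))   # cosine similarity of the query to each doc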

Industry-strength Recommender System

  • EasyRec - EasyRec is a framework for large scale recommendation algorithms.
  • Gorse - Gorse aims to be a universal open-source recommender system that can be quickly introduced into a wide variety of online services.
  • Implicit - Implicit provides fast Python implementations of several different popular recommendation algorithms for implicit feedback datasets
  • LightFM - LightFM is a Python implementation of a number of popular recommendation algorithms for both implicit and explicit feedback
  • NVTabular - NVTabular is a feature engineering and preprocessing library for tabular data that is designed to easily manipulate terabyte scale datasets and train deep learning (DL) based recommender systems.
  • Merlin - NVIDIA Merlin is an open source library providing end-to-end GPU-accelerated recommender systems, from feature engineering and preprocessing to training deep learning models and running inference in production.
  • Recommenders - Recommenders contains benchmark and best practices for building recommendation systems, provided as Jupyter notebooks.
  • Surprise - Surprise is a Python scikit for building and analyzing recommender systems that deal with explicit rating data.
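
As a small example of the experimentation loop these recommender libraries support, here is a Surprise sketch that cross-validates a matrix-factorisation model on MovieLens 100k (the dataset is downloaded on first use):

    from surprise import SVD, Dataset
    from surprise.model_selection import cross_validate

    data = Dataset.load_builtin("ml-100k")   # prompts to download MovieLens 100k on first run
    algo = SVD()

    cross_validate(algo, data, measures=["RMSE", "MAE"], cv=5, verbose=True)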

Industry-strength Reinforcement Learning

  • Acme - Acme is a library of reinforcement learning (RL) building blocks that strives to expose simple, efficient, and readable agents.
  • AI-Optimizer - AI-Optimizer is a next-generation deep reinforcement learning suite, providing rich algorithm libraries ranging from model-free to model-based RL algorithms, and from single-agent to multi-agent algorithms. Moreover, AI-Optimizer contains a flexible and easy-to-use distributed training framework for efficient policy training.
  • ALF - ALF is a reinforcement learning framework emphasizing the flexibility and ease of implementing complex algorithms involving many different components.
  • AlpacaFarm - AlpacaFarm is a simulation framework for methods that learn from human feedback.
  • CityLearn - CityLearn is an open source OpenAI Gym environment for the implementation of Multi-Agent Reinforcement Learning (RL) for building energy coordination and demand response in cities.
  • CleanRL - CleanRL is a Deep Reinforcement Learning library that provides high-quality single-file implementation with research-friendly features. The implementation is clean and simple, yet we can scale it to run thousands of experiments using AWS Batch.
  • CompilerGym - CompilerGym is a library of easy to use and performant reinforcement learning environments for compiler tasks.
  • d3rlpy - d3rlpy is an offline deep reinforcement learning library for practitioners and researchers.
  • D4RL - D4RL is an open-source benchmark for offline reinforcement learning.
  • DIAMBRA - DIAMBRA Arena is a software package featuring a collection of high-quality environments for Reinforcement Learning research and experimentation.
  • Dopamine - Dopamine is a research framework for fast prototyping of reinforcement learning algorithms. It aims to fill the need for a small, easily grokked codebase in which users can freely experiment with wild ideas (speculative research).
  • EvoTorch - EvoTorch is an open source evolutionary computation library developed at NNAISENSE, built on top of PyTorch.
  • FinRL - FinRL is an open-source framework for financial reinforcement learning.
  • garage - garage is a toolkit for developing and evaluating reinforcement learning algorithms, and an accompanying library of state-of-the-art implementations built using that toolkit.
  • Gymnasium - Gymnasium is an open source Python library for developing and comparing reinforcement learning algorithms by providing a standard API to communicate between learning algorithms and environments, as well as a standard set of environments compliant with that API.
  • Gymnasium-Robotics - Gymnasium-Robotics contains a collection of Reinforcement Learning robotic environments that use the Gymnasium API. The environments run with the MuJoCo physics engine and the maintained mujoco python bindings.
  • Jumanji - Jumanji is a suite of Reinforcement Learning (RL) environments written in JAX providing clean, hardware-accelerated environments for industry-driven research.
  • MALib - MALib is a parallel framework of population-based learning nested with reinforcement learning methods. MALib provides higher-level abstractions of MARL training paradigms, which enables efficient code reuse and flexible deployments on different distributed computing paradigms.
  • MARLlib - MARLlib is a comprehensive Multi-Agent Reinforcement Learning algorithm library based on RLlib. It provides the MARL research community with a unified platform for building, training, and evaluating MARL algorithms.
  • Mava - Mava is a framework for distributed multi-agent reinforcement learning in JAX.
  • Melting Pot - Melting Pot is a suite of test scenarios for multi-agent reinforcement learning.
  • MetaDrive - MetaDrive is a driving simulator that composes diverse driving scenarios for generalizable RL.
  • Minigrid - The Minigrid library contains a collection of discrete grid-world environments to conduct research on Reinforcement Learning. The environments follow the Gymnasium standard API and they are designed to be lightweight, fast, and easily customizable.
  • MiniHack - MiniHack is a sandbox framework for easily designing rich and diverse environments for Reinforcement Learning
  • MiniWorld - MiniWorld is a minimalistic 3D interior environment simulator for reinforcement learning & robotics research.
  • ML-Agents - ML-Agents is an open-source project that enables games and simulations to serve as environments for training reinforcement learning intelligent agents.
  • MushroomRL - MushroomRL is a Python reinforcement learning (RL) library whose modularity makes it easy to use well-known Python libraries for tensor computation (e.g. PyTorch, TensorFlow) and RL benchmarks (e.g. OpenAI Gym, PyBullet, Deepmind Control Suite).
  • OmniSafe - OmniSafe is an infrastructural framework designed to accelerate safe reinforcement learning (RL) research.
  • Overcooked-AI - Overcooked-AI is a benchmark environment for fully cooperative human-AI task performance, based on the wildly popular video game Overcooked.
  • PARL - PARL is a flexible and highly efficient reinforcement learning framework.
  • PettingZoo - PettingZoo is a Python library for conducting research in multi-agent reinforcement learning, akin to a multi-agent version of Gymnasium.
  • RLeXplore - RLeXplore provides stable baselines of exploration methods in reinforcement learning.
  • RLMeta - RLMeta is a flexible lightweight research framework for Distributed Reinforcement Learning based on PyTorch and moolib.
  • Safety-Gymnasium - Safety-Gymnasium is a highly scalable and customizable safe reinforcement learning environment library.
  • skrl - skrl is an open-source modular library for Reinforcement Learning written in Python (using PyTorch) and designed with a focus on readability, simplicity, and transparency of algorithm implementation.
  • Stable Baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms.
  • SuperSuit - SuperSuit introduces a collection of small functions which can wrap reinforcement learning environments to do preprocessing ('microwrappers').
  • TF-Agents - A reliable, scalable and easy to use TensorFlow library for contextual bandits and reinforcement learning.
  • TRL - Train transformer language models with reinforcement learning.
  • veRL - veRL (HybridFlow) is a flexible, efficient and industrial-level RL(HF) training framework designed for LLMs.
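
Most of the RL frameworks above either implement or consume the Gymnasium environment API. The sketch below shows that basic interaction loop with a random policy on CartPole; it is only an outline of the API, not a training recipe.

```python
# Minimal sketch of the Gymnasium environment loop with a random policy.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=42)
for _ in range(200):
    action = env.action_space.sample()          # replace with a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```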

Industry Strength Visualisation

  • Apache ECharts - Apache ECharts is a powerful, interactive charting and data visualization library for the browser.
  • Apache Superset - A modern, enterprise-ready business intelligence web application.
  • Bokeh - Bokeh is an interactive visualization library for Python that enables beautiful and meaningful visual presentation of data in modern web browsers.
  • Geoplotlib - geoplotlib is a python toolbox for visualizing geographical data and making maps.
  • ggplot2 - An implementation of the grammar of graphics for R.
  • gradio - Quickly create and share demos of models - by only writing Python. Debug models interactively in your browser, get feedback from collaborators, and generate public links without deploying anything.
  • Kangas - Kangas is a tool for exploring, analyzing, and visualizing large-scale multimedia data. It provides a straightforward Python API for logging large tables of data, along with an intuitive visual interface for performing complex queries against your dataset.
  • matplotlib - A Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.
  • Missingno - missingno provides a small toolset of flexible and easy-to-use missing data visualizations and utilities that allows you to get a quick visual summary of the completeness (or lack thereof) of your dataset.
  • Netron - Netron is a viewer for neural network, deep learning and machine learning models.
  • PDPBox - This repository is inspired by ICEbox. The goal is to visualize the impact of certain features towards model prediction for any supervised learning algorithm.
  • Perspective - Streaming pivot visualization via WebAssembly.
  • Pixiedust - PixieDust is a productivity tool for Python or Scala notebooks, which lets a developer encapsulate business logic into something easy for your customers to consume.
  • Plotly - An interactive, open source, and browser-based graphing library for Python.
  • PyCEbox - Python Individual Conditional Expectation Plot Toolbox.
  • pygal - pygal is a dynamic SVG charting library written in Python.
  • Redash - Redash is an open source visualisation framework built to allow easy access to big datasets, leveraging multiple backends.
  • seaborn - Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.
  • Spotlight - Spotlight helps you to identify critical data segments and model failure modes. It enables you to build and maintain reliable machine learning models by curating high-quality datasets.
  • Streamlit - Streamlit lets you create apps for your machine learning projects with deceptively simple Python scripts. It supports hot-reloading, so your app updates live as you edit and save your file.
  • tensorboardX - Write TensorBoard events with a simple function call.
  • TensorBoard - TensorBoard is a visualization toolkit for machine learning experimentation that makes it easy to host, track, and share ML experiments.
  • Transformer Explainer - Transformer Explainer is an interactive visualization tool designed to help anyone learn how Transformer-based models like GPT work.
  • Vega-Altair - Vega-Altair is a declarative statistical visualization library for Python.
  • ydata-profiling - ydata-profiling provides a one-line Exploratory Data Analysis (EDA) experience in a consistent and fast solution.
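
As a small illustration of the plotting end of this section, the sketch below uses seaborn from the list above on one of its bundled example datasets; the column names are specific to that sample dataset.

```python
# Minimal sketch with seaborn: a quick statistical scatter plot from a bundled
# example dataset (downloaded on first use).
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="time")
plt.savefig("tips.png")   # or plt.show() in an interactive session
```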

Metadata Management

  • Amundsen - Amundsen is a metadata driven application for improving the productivity of data analysts, data scientists and engineers when interacting with data.
  • Apache Atlas - Apache Atlas is an extensible set of core foundational governance services that enables enterprises to effectively and efficiently meet their compliance requirements within Hadoop, and allows integration with the whole enterprise data ecosystem.
  • DataHub - DataHub is LinkedIn's generalized metadata search & discovery tool.
  • Marquez - Marquez is an open source metadata service for the collection, aggregation, and visualization of a data ecosystem's metadata.
  • Metacat - Metacat is a unified metadata exploration API service. Metacat focuses on solving these problems: 1) federated views of metadata systems; 2) arbitrary metadata storage about data sets; 3) metadata discovery.
  • ML Metadata - a library for recording and retrieving metadata associated with ML developer and data scientist workflows.
  • Model Card Toolkit - Model Card Toolkit is a toolkit that streamlines and automates the generation of model cards.
  • TensorFlow Metadata - TensorFlow Metadata provides standard representations for metadata that are useful when training machine learning models with TensorFlow.
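
To make the role of these metadata services more concrete, here is a hedged sketch of recording a dataset artifact with ML Metadata from the list above; it follows MLMD's documented Python API, but check the current documentation for exact names and connection options.

```python
# Hedged sketch: recording a dataset artifact with ML Metadata (MLMD).
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

config = metadata_store_pb2.ConnectionConfig()
config.sqlite.filename_uri = "mlmd.sqlite"   # local SQLite-backed store
config.sqlite.connection_mode = 3            # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(config)

# Register an artifact type once, then record artifacts of that type.
dataset_type = metadata_store_pb2.ArtifactType()
dataset_type.name = "DataSet"
dataset_type.properties["split"] = metadata_store_pb2.STRING
dataset_type_id = store.put_artifact_type(dataset_type)

artifact = metadata_store_pb2.Artifact()
artifact.type_id = dataset_type_id
artifact.uri = "s3://example-bucket/train.parquet"   # hypothetical URI
artifact.properties["split"].string_value = "train"
[artifact_id] = store.put_artifacts([artifact])
print("recorded artifact", artifact_id)
```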

Model, Data and Experiment Tracking

  • AI2 Tango - AI2 Tango replaces messy directories and spreadsheets full of file versions by organizing experiments into discrete steps that can be cached and reused throughout the lifetime of a research project.
  • Aim - A super-easy way to record, search and compare AI experiments.
  • Catalyst - High-level utils for PyTorch DL & RL research. It was developed with a focus on reproducibility, fast experimentation, and reusing code and ideas.
  • ClearML - Auto-Magical Experiment Manager & Version Control for AI (previously Trains).
  • CodaLab - CodaLab Worksheets is a collaborative platform for reproducible research that allows researchers to run, manage, and share their experiments in the cloud. It helps researchers ensure that their runs are reproducible and consistent.
  • Deepkit - An open-source platform and cross-platform desktop application to execute, track, and debug modern machine learning experiments.
  • Dolt - Dolt is a SQL database that you can fork, clone, branch, merge, push and pull just like a git repository.
  • DVC - DVC (Data Version Control) is an open-source tool that works alongside Git to version data, models, and ML pipelines.
  • Flor - Easy to use logger and automatic version controller made for data scientists who write ML code.
  • Guild AI - Open source toolkit that automates and optimizes machine learning experiments.
  • Hangar - Version control for tensor data, git-like semantics on numerical data with high speed and efficiency.
  • Keepsake - Version control for machine learning.
  • lakeFS - Repeatable, atomic and versioned data lake on top of object storage.
  • MLflow - Open source platform to manage the ML lifecycle, including experimentation, reproducibility and deployment.
  • ModelDB - An open-source system to version machine learning models, including their ingredients (code, data, config, and environment), and to track ML metadata across the model lifecycle.
  • ModelStore - An open-source Python library that allows you to version, export, and save a machine learning model to your cloud storage provider.
  • Neptune - Neptune is a scalable experiment tracker for teams that train foundation models.
  • ormb - Docker for Your ML/DL Models Based on OCI Artifacts.
  • Polyaxon - A platform for reproducible and scalable machine learning and deep learning on kubernetes - (Video).
  • Quilt - Versioning, reproducibility and deployment of data and models.
  • Sacred - Tool to help you configure, organize, log and reproduce machine learning experiments.
  • Studio - Model management framework which minimizes the overhead involved with scheduling, running, monitoring and managing artifacts of your machine learning experiments.
  • TerminusDB - A graph database management system that stores data like git.
  • Weights & Biases - Weights & Biases is a platform for machine learning experiment tracking, dataset versioning, hyperparameter search, visualization, and collaboration.
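
Most experiment trackers in this section follow a similar logging pattern. The sketch below uses MLflow from the list above to record parameters and a metric for a single run (stored under ./mlruns by default); the parameter names and values are placeholders.

```python
# Minimal sketch with MLflow: log parameters and a metric for one training run.
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)
    for epoch, acc in enumerate([0.81, 0.88, 0.93]):   # stand-in metric values
        mlflow.log_metric("val_accuracy", acc, step=epoch)
```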

Model Storage Optimisation

  • AutoAWQ - AutoAWQ is an easy-to-use package for 4-bit quantized models.
  • AutoGPTQ - An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
  • AWQ - Activation-aware Weight Quantization for LLM Compression and Acceleration.
  • GGML - GGML is a high-performance tensor library for machine learning that enables efficient inference on CPUs, particularly optimized for large language models.
  • GPTQ - Accurate Post-training Quantization of Generative Pretrained Transformers.
  • MMdnn - MMdnn is a comprehensive cross-framework tool from Microsoft that facilitates model conversion, visualization, and deployment across various deep learning frameworks.
  • neural-compressor - Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks.
  • NNEF - Neural Network Exchange Format (NNEF) is an open standard for representing neural network models to enable interoperability and portability across different machine learning frameworks and platforms.
  • ONNX - ONNX (Open Neural Network Exchange) is an open-source format designed to facilitate interoperability and portability of machine learning models across different frameworks and platforms.
  • PFA - PFA (Portable Format for Analytics) is a standard for representing and exchanging predictive models and analytics workflows in a portable, JSON-based format.
  • PMML - PMML (Predictive Model Markup Language) is an XML-based standard for representing and sharing predictive models between different applications.
  • Quanto - Quanto aims to simplify quantizing deep learning models.
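
A common entry point into the interchange formats above is exporting a trained model to ONNX. The sketch below exports a toy PyTorch model; the model itself and the input/output names are arbitrary examples.

```python
# Minimal sketch: exporting a toy PyTorch model to the ONNX format.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
model.eval()
dummy_input = torch.randn(1, 4)   # example input used to trace the graph
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["features"], output_names=["logits"],
)
```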

Neural Search and Retrieval

  • Annoy - Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point.
  • AutoRAG - AutoRAG is a RAG AutoML tool that automatically finds an optimal RAG pipeline for your data.
  • BeyondLLM - Beyond LLM offers an all-in-one toolkit for experimentation, evaluation, and deployment of RAG systems. It simplifies the process with automated integration, customizable evaluation metrics, and support for various LLMs tailored to specific needs, aiming to reduce hallucination risks and enhance reliability.
  • CLIP-as-service - CLIP-as-service is a low-latency high-scalability service for embedding images and text. It can be easily integrated as a microservice into neural search solutions.
  • Cognita - Cognita is a RAG framework for building modular and production-ready applications.
  • DocArray - DocArray is a library for nested, unstructured, multimodal data in transit, including text, image, audio, video, 3D mesh, etc. It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer multimodal data with a Pythonic API.
  • Faiss - Faiss is a library for efficient similarity search and clustering of dense vectors.
  • fastRAG - fastRAG is a research framework for efficient and optimized retrieval augmented generative pipelines, incorporating state-of-the-art LLMs and Information Retrieval.
  • Finetuner - Finetuner provides an effective way to improve performance on neural search tasks.
  • GraphRAG - GraphRAG is a data pipeline and transformation suite that is designed to extract meaningful, structured data from unstructured text using the power of LLMs.
  • HippoRAG - HippoRAG is a novel retrieval augmented generation (RAG) framework inspired by the neurobiology of human long-term memory that enables LLMs to continuously integrate knowledge across external documents.
  • LightRAG - A simple and fast retrieval-augmented generation framework.
  • llmware - llmware provides a unified framework for building LLM-based applications (e.g., RAG, Agents), using small, specialized models that can be deployed privately, integrated with enterprise knowledge sources safely and securely, and cost-effectively tuned and adapted for any business process.
  • Mem0 - Mem0 enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions.
  • MindSQL - MindSQL is a Python RAG library to streamline the interaction between users and their databases using just a few lines of code.
  • NGT - NGT provides commands and a library for performing high-speed approximate nearest neighbor searches against a large volume of data in high dimensional vector data space.
  • NMSLIB - Non-Metric Space Library (NMSLIB): An efficient similarity search library and a toolkit for evaluation of k-NN methods for generic non-metric spaces.
  • Qdrant - An open source vector similarity search engine with extended filtering support.
  • R2R - R2R (RAG to Riches) is a comprehensive platform for building, deploying, and scaling RAG applications with hybrid search, multimodal support, and advanced observability.
  • RAGFlow - RAGFlow is a RAG engine based on deep document understanding.
  • RAGxplorer - RAGxplorer is a tool to build RAG visualisations.
  • Rule-based Retrieval - Rule-based Retrieval enables users to create and manage RAG applications with advanced filtering capabilities.
  • Vanna - Vanna is a RAG framework for SQL generation and related functionality.
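
Many of the retrieval stacks above sit on top of a vector index. As a minimal sketch, here is exact nearest-neighbour search with Faiss from the list above over random vectors; real deployments would typically use approximate indexes (IVF, HNSW) and real embeddings.

```python
# Minimal sketch with Faiss: exact L2 nearest-neighbour search over random vectors.
import faiss
import numpy as np

d = 128
xb = np.random.random((10_000, d)).astype("float32")   # database vectors
xq = np.random.random((5, d)).astype("float32")        # query vectors

index = faiss.IndexFlatL2(d)
index.add(xb)
distances, ids = index.search(xq, 4)                   # top-4 neighbours per query
print(ids)
```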

Optimized Computation

  • Adapters - Adapters is a unified library for parameter-efficient and modular transfer learning.
  • AutoTrain Advanced - AutoTrain Advanced is a no-code solution that allows you to train machine learning models in just a few clicks.
  • BindsNET - BindsNET is a spiking neural network simulation library geared towards the development of biologically inspired algorithms for machine learning.
  • BitBLAS - BitBLAS is a library to support mixed-precision BLAS operations on GPUs.
  • bitsandbytes - Bitsandbytes library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 & 4-bit quantization functions.
  • BrainCog - BrainCog (Brain-inspired Cognitive Intelligence Engine) is a brain-inspired spiking neural network based platform for Brain-inspired Artificial Intelligence and simulating brains at multiple scales.
  • Composer - Composer is a PyTorch library that enables you to train neural networks faster, at lower cost, and to higher accuracy.
  • CuDF - Built based on the Apache Arrow columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data.
  • CuML - cuML is a suite of libraries that implement machine learning algorithms and mathematical primitive functions that share compatible APIs with other RAPIDS projects.
  • CuPy - An implementation of NumPy-compatible multi-dimensional array on CUDA. CuPy consists of the core multi-dimensional array class, cupy.ndarray, and many functions on it.
  • Flax - A neural network library and ecosystem for JAX designed for flexibility.
  • H2O-3 - Fast scalable Machine Learning platform for smarter applications: Deep Learning, Gradient Boosting & XGBoost, Random Forest, Generalized Linear Modeling (Logistic Regression, Elastic Net), K-Means, PCA, Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
  • Jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more.
  • Kompute - Blazing fast, lightweight and mobile phone-enabled Vulkan compute framework optimized for advanced GPU data processing use cases.
  • MLX - MLX is an array framework for machine learning on Apple silicon.
  • Modin - Speed up your Pandas workflows by changing a single line of code.
  • Nevergrad - Nevergrad is a gradient-free optimisation platform.
  • Norse - Norse aims to exploit the advantages of bio-inspired neural components, which are sparse and event-driven - a fundamental difference from artificial neural networks.
  • Numba - A compiler for Python array and numerical functions.
  • NumpyGroupies - Optimised tools for group-indexing operations: aggregated sum and more.
  • OpenFlamingo - OpenFlamingo is an open-source framework for training large multimodal models.
  • Optimum - Optimum is an extension of Transformers and Diffusers, providing a set of optimization tools enabling maximum efficiency to train and run models on targeted hardware while keeping things easy to use.
  • PEFT - Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters.
  • PyTorch - PyTorch is a library to develop and train neural network based deep learning models.
  • scikit-learn - Scikit-learn is a powerful machine learning library that provides a wide variety of modules for data access, data preparation and statistical model building.
  • SetFit - SetFit is an efficient and prompt-free framework for few-shot fine-tuning of Sentence Transformers.
  • snnTorch - snnTorch is a deep and online learning library with spiking neural networks.
  • Sonnet - Sonnet is a library built on top of TensorFlow 2 designed to provide simple, composable abstractions for machine learning research.
  • Tensor2Tensor - Tensor2Tensor is a library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
  • TensorFlow - TensorFlow is a leading library designed for developing and deploying state-of-the-art machine learning applications.
  • ThunderKittens - ThunderKittens is a framework that makes it easy to write fast deep learning kernels in CUDA.
  • torchkeras - torchkeras is a simple tool for training neural networks in PyTorch in a Keras-like style.
  • TorchOpt - TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
  • Vaex - Vaex is a high performance Python library for lazy out-of-core DataFrames (similar to Pandas), to visualize and explore big tabular datasets. Vaex uses memory mapping, a zero memory copy policy and lazy computations for best performance (no memory wasted).
  • Vowpal Wabbit - Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning.
  • Weld - High-performance runtime for data analytics applications. Here is an interview with Weld’s main contributor.
  • XGBoost - XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable.
  • yellowbrick - yellowbrick provides matplotlib-based model evaluation plots for scikit-learn and other machine learning libraries.
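
To illustrate the style of acceleration several of these libraries provide, the sketch below uses JAX from the list above to JIT-compile a tiny loss function and take its gradient; the loss itself is a throwaway example.

```python
# Minimal sketch with JAX: JIT-compile a function and take its gradient.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

grad_loss = jax.jit(jax.grad(loss))   # gradient w.r.t. the first argument, compiled
w = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
print(grad_loss(w, x, y))             # gradient of shape (3,)
```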

Privacy and Security

  • BastionLab - BastionLab is a framework for confidential data science collaboration. It uses Confidential Computing, access control, and Differential Privacy to enable data scientists to remotely perform data exploration, statistics, and training on confidential data while ensuring maximal privacy for data owners.
  • Concrete-ML - Concrete-ML is a Privacy-Preserving Machine Learning (PPML) open-source set of tools built on top of The Concrete Framework by Zama. It aims to simplify the use of fully homomorphic encryption (FHE) for data scientists to help them automatically turn machine learning models into their homomorphic equivalent.
  • Fedlearner - Fedlearner is a collaborative machine learning framework that enables joint modeling of data distributed between institutions.
  • FATE - FATE (Federated AI Technology Enabler) is the world's first industrial-grade open source federated learning framework, enabling enterprises and institutions to collaborate on data while protecting data security and privacy.
  • FedML - FedML provides a research and production integrated edge-cloud platform for Federated/Distributed Machine Learning anywhere, at any scale.
  • Flower - Flower is a Federated Learning Framework with a unified approach. It enables the federation of any ML workload, with any ML framework, and any programming language.
  • Google's Differential Privacy - This is a C++ library of ε-differentially private algorithms, which can be used to produce aggregate statistics over numeric data sets containing private or sensitive information.
  • Guardrails - Guardrails is a package that lets a user add structure, type and quality guarantees to the outputs of large language models.
  • Intel Homomorphic Encryption Backend - The Intel HE transformer for nGraph is a Homomorphic Encryption (HE) backend to the Intel nGraph Compiler, Intel's graph compiler for Artificial Neural Networks.
  • Microsoft SEAL - Microsoft SEAL is an easy-to-use open-source (MIT licensed) homomorphic encryption library developed by the Cryptography Research group at Microsoft.
  • OpenFL - OpenFL is a Python framework for Federated Learning. OpenFL is designed to be a flexible, extensible and easily learnable tool for data scientists. OpenFL is developed by Intel Internet of Things Group (IOTG) and Intel Labs.
  • PySyft - A Python library for secure, private Deep Learning. PySyft decouples private data from model training, using Multi-Party Computation (MPC) within PyTorch.
  • Rosetta - A privacy-preserving framework based on TensorFlow with customized backend Operations using Multi-Party Computation (MPC). Rosetta reuses the APIs of TensorFlow and allows original TensorFlow code to be made privacy-preserving with minimal changes.
  • Substra - Substra is an open-source framework for privacy-preserving, traceable and collaborative Machine Learning.
  • Tensorflow Privacy - A Python library that includes implementations of TensorFlow optimizers for training machine learning models with differential privacy.
  • TF Encrypted - A Framework for Confidential Machine Learning on Encrypted Data in TensorFlow.
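
As a hedged outline of how federated learning frameworks in this section are wired up, the sketch below shows a Flower NumPyClient with stand-in weights; entry points have changed between Flower releases, so treat the commented-out client launch as indicative rather than canonical.

```python
# Hedged sketch of a Flower federated-learning client. The NumPyClient interface
# follows Flower's documented quickstarts; exact launch functions vary by release.
import flwr as fl
import numpy as np

class DummyClient(fl.client.NumPyClient):
    def __init__(self):
        self.weights = [np.zeros(10)]        # stand-in for real model weights

    def get_parameters(self, config):
        return self.weights

    def fit(self, parameters, config):
        self.weights = parameters            # a real client would train locally here
        return self.weights, 1, {}

    def evaluate(self, parameters, config):
        return 0.0, 1, {"accuracy": 1.0}     # loss, num_examples, metrics

# Launch against a running Flower server (API name may differ across versions):
# fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=DummyClient())
```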

Training Orchestration

  • Accelerate - Accelerate abstracts exactly and only the boilerplate code related to multi-GPU/TPU/mixed-precision and leaves the rest of your code unchanged.
  • Axolotl - Axolotl is a tool designed to streamline the fine-tuning of various AI models, offering support for multiple configurations and architectures.
  • CML - Continuous Machine Learning (CML) is an open-source library for implementing continuous integration & delivery (CI/CD) in machine learning projects.
  • CoreNet - CoreNet is a deep neural network toolkit that allows researchers and engineers to train standard and novel small and large-scale models for a variety of tasks, including foundation models (e.g., CLIP and LLM), object classification, object detection, and semantic segmentation.
  • Determined - Deep learning training platform with integrated support for distributed training, hyperparameter tuning, and model management (supports Tensorflow and Pytorch).
  • envd - Machine learning development environment for data science and AI/ML engineering teams.
  • Fabrik - Fabrik is an online collaborative platform to build, visualize and train deep learning models via a simple drag-and-drop interface.
  • Hopsworks - Hopsworks is a data-intensive platform for the design and operation of machine learning pipelines that includes a Feature Store - (Video).
  • Ludwig - Ludwig is a low-code framework for building custom AI models like LLMs and other deep neural networks.
  • Kubeflow - A cloud-native platform for machine learning based on Google’s internal machine learning pipelines.
  • MFTCoder - MFTCoder is an open-source project of CodeFuse for accurate and efficient Multi-task Fine-tuning (MFT) on Large Language Models (LLMs), especially Code LLMs (large language models for code tasks).
  • MLeap - Standardisation of pipeline and model serialization for Spark, Tensorflow and sklearn.
  • Nanotron - Nanotron provides distributed primitives to train a variety of models efficiently using 3D parallelism.
  • NeMo - NVIDIA NeMo is a scalable and cloud-native generative AI framework built for researchers and PyTorch developers working on Large Language Models (LLMs), Multimodal Models (MMs), Automatic Speech Recognition (ASR), Text to Speech (TTS), and Computer Vision (CV) domains. It is designed to help you efficiently create, customize, and deploy new generative AI models by leveraging existing code and pre-trained model checkpoints.
  • Nos - Nos is an open-source platform to efficiently run AI workloads on Kubernetes, increasing GPU utilization and reducing infrastructure and operational costs.
  • NVIDIA TensorRT - TensorRT is a C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators.
  • Open Platform for AI - Platform that provides complete AI model training and resource management capabilities.
  • PyCaret - Low-code library for training and deploying models (scikit-learn, XGBoost, LightGBM, spaCy).
  • Sematic - Platform to build resource-intensive pipelines with simple Python.
  • Skaffold - Skaffold is a command line tool that facilitates continuous development for Kubernetes applications. You can iterate on your application source code locally then deploy to local or remote Kubernetes clusters.
  • Streaming - A Data Streaming Library for Efficient Neural Network Training.
  • TFX - Tensorflow Extended (TFX) is a production oriented configuration framework for ML based on TensorFlow, incl. monitoring and model version management.
  • torchdistill - torchdistill offers various state-of-the-art knowledge distillation methods and enables you to design (new) experiments simply by editing a declarative yaml config file instead of Python code.
  • veScale - veScale is a PyTorch native LLM training framework.
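
To show how little code some of these orchestration tools require, the sketch below uses Accelerate from the list above to adapt a plain PyTorch loop for CPU, single-GPU, or multi-GPU execution (launched with `accelerate launch train.py`); the model and data are toy placeholders.

```python
# Minimal sketch with Accelerate: the few lines that make a plain PyTorch loop
# device- and distribution-agnostic.
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = torch.utils.data.DataLoader(dataset, batch_size=8)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
for inputs, labels in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)   # replaces loss.backward()
    optimizer.step()
```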