Detect and redact PII locally with SOTA performance
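The repository's own approach isn't shown on this page, but as a rough illustration of what local PII redaction means, here is a minimal regex-based sketch. The `PATTERNS` table and `redact_pii` helper are hypothetical stand-ins; a project claiming SOTA performance would presumably use a trained model rather than hand-written patterns.

```python
import re

# Hypothetical, purely illustrative local PII redaction; a real detector
# would use a trained model, not these simple regex patterns.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace every detected PII span with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Mail jane.doe@example.com or call +1 (555) 123-4567."))
# -> Mail [EMAIL] or call [PHONE].
```

Everything runs on the local machine; no text leaves the process, which is the point shared by most of the projects listed below.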
Extract structured data from local or remote LLMs
A Chrome extension for querying a local LLM using llama-cpp-python; includes a pip package for running the server ('pip install local-llama' to install). A llama-cpp-python usage sketch appears after this list.
An entirely open-source, locally running version of Recall (originally revealed by Microsoft for Copilot+ PCs)
Bell inequalities and local models via Frank-Wolfe algorithms
A 💅 stylish 💅 local multi-model AI assistant and API.
Main code chunks used for models in the publication "Exploring the Potential of Adaptive, Local Machine Learning (ML) in Comparison to the Prediction Performance of Global Models: A Case Study from Bayer's Caco-2 Permeability Database"
Extracting complete webpage articles from a screen recording using local models
A vision-based avatar that reads Google News and extracts news on its own using only local models
A streamlined interface for interacting with local Large Language Models (LLMs) using Streamlit. Features interactive chat, configurable model parameters, and more.
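Several of the projects above build on llama-cpp-python, which the Chrome-extension entry names explicitly. As a point of reference, querying a local GGUF model with that library looks roughly like this; the model path is a placeholder, not a file any of these repositories ship:

```python
from llama_cpp import Llama

# Load a local GGUF model from disk; the path is a placeholder.
llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)

# Run a single completion entirely on the local machine.
output = llm(
    "Q: Name three uses for a locally hosted LLM. A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

The same library also ships an OpenAI-compatible server (started with `python -m llama_cpp.server --model <path>` after installing the server extra), which is presumably the kind of endpoint the 'local-llama' package wraps.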
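For the Streamlit-based chat interface listed above, a minimal local chat loop might look like the following sketch. The `respond` helper is a hypothetical stand-in for whatever local backend the project actually calls; the rest uses Streamlit's standard chat primitives.

```python
import streamlit as st

st.title("Local LLM Chat")

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the chat history on each rerun.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

def respond(prompt: str) -> str:
    # Hypothetical stand-in; a real app would call a local model here.
    return f"(local model reply to: {prompt})"

if prompt := st.chat_input("Ask the local model..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    reply = respond(prompt)
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```

Saved as `app.py`, this runs with `streamlit run app.py`; swapping `respond` for a call into a local model (for example, the `Llama` object from the previous sketch) gives the configurable-parameters chat the description refers to.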