Core functionality for Osam. (Python, updated Jun 30, 2024)
Official implementation of CVPR'24 paper 'Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts'.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
The official evaluation suite and dynamic data release for MixEval.
A Survey on Vision-Language Geo-Foundation Models (VLGFMs)
The codebase for the book "AI-Powered Search" (Manning Publications, 2024)
Overview of Japanese LLMs (日本語LLMまとめ)
Deploy Audiocraft Musicgen on Amazon SageMaker using SageMaker Endpoints for Async Inference.
YOLO World for Osam.
EfficientSAM for Osam.
Get up and running with SAM, Efficient-SAM, and other segment-anything models locally.
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
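Evaluators of this kind typically reduce pairwise judge verdicts (candidate vs. baseline) to a single win rate. The following is a minimal sketch of that aggregation in plain Python; the half-credit-for-ties convention is an assumption for illustration, not AlpacaEval's exact formula.

```python
from collections import Counter

def win_rate(verdicts):
    """Aggregate pairwise judge verdicts into a win rate.

    Each verdict is 'win', 'loss', or 'tie' for the candidate model
    against a baseline. Ties count as half a win here — a common
    convention, assumed for this sketch.
    """
    if not verdicts:
        raise ValueError("no verdicts to aggregate")
    counts = Counter(verdicts)
    score = counts["win"] + 0.5 * counts["tie"]
    return score / len(verdicts)

# Example: 6 wins, 2 ties, 2 losses over 10 comparisons -> 0.7
print(win_rate(["win"] * 6 + ["tie"] * 2 + ["loss"] * 2))
```

In practice the verdicts come from an LLM judge rather than humans, which is what makes such evaluators cheap and fast while remaining human-validated at the protocol level.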
Making large AI models cheaper, faster and more accessible
[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding, plus many more supported LMs such as miniGPT4, StableLM, and MOSS.
Foundation model benchmarking tool. Run any model on any AWS platform and benchmark for performance across instance type and serving stack options.
ONNX models of YOLO-World (an open-vocabulary object detector).
Video Foundation Models & Data for Multimodal Understanding
Evaluation framework for oncology foundation models (FMs)
Information and materials for the Turing's Foundation Models reading group.
Must-read Papers on Knowledge Editing for Large Language Models.