
Xtreme1 logo


Intro

Xtreme1 is the world's first open-source platform for multisensory training data.

Xtreme1 provides deep insight into data annotation, data curation, and ontology management to solve 2D image and 3D point cloud dataset ML challenges. The built-in AI-assisted tools take your annotation efforts to the next level of efficiency for your 2D/3D Object Detection, 3D Instance Segmentation, and LiDAR-Camera Fusion projects.

It is now hosted in the LF AI & Data Foundation as a sandbox project.

Join community

Website | Slack | Twitter | Medium | Issues

Join the Xtreme1 community on Slack to share your suggestions, advice, and questions with us.

👉 Join us on Slack today!

Key features

Image Annotation (B-box, Segmentation) - YOLOR & RITM
Lidar-camera Fusion (Frame series) Annotation - OpenPCDet & AB3DMOT

1️⃣ Supports data labeling for images, 3D LiDAR and 2D/3D Sensor Fusion datasets

2️⃣ Built-in pre-labeling and interactive models support 2D/3D object detection, segmentation and classification

3️⃣ Configurable Ontology Center for general classes (with hierarchies) and attributes for use in your model training

4️⃣ Data management and quality monitoring

5️⃣ Find labeling errors and fix them

6️⃣ Model results visualization to help you evaluate your model

Image Data Curation (Visualizing & Debugging) - MobileNetV3 & openTSNE
Lidar-camera Fusion Data Curation (Filter by Class Name x Cross Dataset)

Quick start

Download package

Download the latest release package and unzip it.

wget https://github.com/xtreme1-io/xtreme1/releases/download/v0.5.5/xtreme1-v0.5.5.zip
unzip -d xtreme1-v0.5.5 xtreme1-v0.5.5.zip
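
Before starting the services, change into the unzipped directory so that docker compose can find the Compose file shipped with the release. This is a minimal sketch: the directory name matches the unzip target above, and the presence of docker-compose.yml at the archive root is an assumption.

# Run the following steps from inside the release directory created by `unzip -d` above
cd xtreme1-v0.5.5
ls    # the deployment files (e.g. docker-compose.yml) are expected here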

Start all services

docker compose up

Visit http://localhost:8190 in the browser (Google Chrome is recommended) to try out Xtreme1!
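
If you prefer to keep the stack running in the background, it can also be started in detached mode and checked afterwards. This is a minimal sketch using standard Docker Compose commands; the check against port 8190 assumes the default configuration from the release package.

docker compose up -d           # start all services in the background
docker compose ps              # each service should be listed as running
docker compose logs -f         # follow the logs of all services (Ctrl+C to stop)
curl -I http://localhost:8190  # the web UI should answer on this port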

⚠️ Install built-in models

You need to explicitly specify a model profile to enable model services.

docker compose --profile model up
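
The model images are comparatively large, so it can help to pull them first and then start the full stack, model services included, in the background. A minimal sketch; docker compose applies the --profile flag to pull as well as up.

docker compose --profile model pull   # pre-download the model images
docker compose --profile model up -d  # start everything, including model services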

Enable model services

Make sure you have installed the NVIDIA Driver and the NVIDIA Container Toolkit. You do not need to install the CUDA Toolkit, as it is already contained in the model image.

# You need to set "default-runtime" to "nvidia" in /etc/docker/daemon.json and restart Docker to enable the NVIDIA Container Toolkit
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
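
After editing /etc/docker/daemon.json, restart Docker and confirm that the NVIDIA runtime is active. A minimal sketch: the CUDA image tag is only illustrative, and any CUDA base image that ships nvidia-smi will do.

sudo systemctl restart docker
docker info --format '{{.DefaultRuntime}}'   # should print: nvidia
# With "default-runtime" set to "nvidia", GPUs should be visible inside containers:
docker run --rm nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi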

For more on installation, development, and deployment, check out the Xtreme1 Docs.

License

This software is licensed under the Apache 2.0 LICENSE. Xtreme1 is a trademark of LF AI Projects.

If Xtreme1 is part of your development process / project / publication, please cite us ❤️ :

@misc{Xtreme1,
  title  = {Xtreme1 - The Next GEN Platform For Multisensory Training Data},
  year   = {2022},
  note   = {Software available from https://github.com/xtreme1-io/xtreme1/},
  url    = {https://xtreme1.io/},
  author = {LF AI Projects},
}
