# Quick Start: Single Task Learning for Semantic Segmentation

Welcome to Ianvs! Ianvs is a benchmarking platform designed to evaluate the performance of distributed synergy AI solutions in accordance with recognized standards. This quick start guide will help you test your **Single Task Learning (STL)** algorithm on Ianvs. By following these streamlined steps, you can efficiently develop and benchmark your solution within minutes.

### **Prerequisites**
Before using Ianvs, ensure that your system meets the following requirements:
- A single machine (a laptop or a virtual machine is sufficient; no cluster is needed)
- At least 2 CPUs
- 4GB+ of free memory (depending on the algorithm and simulation settings)
- 10GB+ of free disk space
- An internet connection for accessing GitHub, pip, etc.
- Python 3.6+ installed

This guide assumes you are using **Linux** with Python 3.8. If you’re on Windows, most steps will apply, but some commands and package requirements may differ.
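If you want to verify these prerequisites first, the following standard Linux commands (an optional check, not part of the official guide) report the relevant figures:

```shell
python3 --version   # expect 3.6 or newer (this guide assumes 3.8)
nproc               # number of available CPUs (2+ required)
free -h             # free memory (4GB+ recommended)
df -h .             # free disk space (10GB+ recommended)
```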

---

## Step 1. Ianvs Installation

### Clone Ianvs
First, set up a workspace and clone Ianvs:
```shell
mkdir /ianvs   # creating a directory at the filesystem root may require sudo; any writable location also works
cd /ianvs

mkdir project
cd project
git clone https://github.com/kubeedge/ianvs.git
```

### Install Dependencies
Next, install the required third-party dependencies:
```shell
sudo apt-get update
sudo apt-get install libgl1-mesa-glx -y
python -m pip install --upgrade pip

cd ianvs
python -m pip install ./examples/resources/third_party/*
python -m pip install -r requirements.txt
```

### Install Ianvs
Finally, install Ianvs:
```shell
python setup.py install
```
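To confirm the installation succeeded (an optional sanity check), verify that the `ianvs` command-line entry point is available:

```shell
which ianvs   # should print the path of the installed entry point
ianvs -h      # should print the command-line usage
```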

---

## Step 2. Dataset Preparation

### Cloud-Robotics Dataset Summary

The **Cloud-Robotics Dataset** features **annotated real-world images** with **dense semantic and instance segmentation** across **30 classes** in 7 groups (e.g., vehicles, humans, nature, objects). It includes polygonal annotations, diverse daytime scenes, dynamic objects, and varying layouts. Annotations are provided in JSON format, making the dataset well suited to pixel-level semantic labeling and to benchmarking vision models for robotics.

Organize the dataset for STL as shown below:

```plaintext
Dataset/
├── 1280x760
│   ├── gtFine
│   │   ├── train
│   │   ├── test
│   │   └── val
│   ├── rgb
│   │   ├── train
│   │   ├── test
│   │   └── val
│   └── viz
│       ├── train
│       ├── test
│       └── val
├── 2048x1024
│   ├── gtFine
│   │   ├── train
│   │   ├── test
│   │   └── val
│   ├── rgb
│   │   ├── train
│   │   ├── test
│   │   └── val
│   └── viz
│       ├── train
│       ├── test
│       └── val
└── 640x480
    ├── gtFine
    │   ├── train
    │   ├── test
    │   └── val
    ├── json
    │   ├── train
    │   ├── test
    │   └── val
    ├── rgb
    │   ├── train
    │   ├── test
    │   └── val
    └── viz
        ├── train
        ├── test
        └── val
```

### Dataset Preparation Command
```shell
mkdir dataset
cd dataset
# place the downloaded dataset archive (dataset.zip) here before unzipping
unzip dataset.zip
```
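After extraction, you can confirm that the directory layout matches the tree above (an optional sanity check, not part of the official guide):

```shell
# run from the directory where you extracted the archive
find . -maxdepth 3 -type d | sort
```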

Update the dataset's **URL address** in the `testenv.yaml` configuration file. More details can be found in the [testenv.yaml guide](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation).
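For illustration, a minimal `testenv.yaml` generally points the test environment at the dataset index files and at a metric script. The paths below are placeholders, not the exact values shipped with this example:

```yaml
testenv:
  # dataset configuration; replace the placeholder paths with your own
  dataset:
    train_url: "/ianvs/project/dataset/train_data/index.txt"
    test_url: "/ianvs/project/dataset/test_data/index.txt"
  # metrics used to evaluate each test case
  metrics:
    - name: "accuracy"
      url: "./examples/cloud-robotics/single_task_learning/testenv/accuracy.py"
```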

---

## Step 3. Algorithm Configuration

Update the algorithm's **URL address** in the `algorithm.yaml` file, as shown in the snippet below; a complete example `algorithm.yaml` is reproduced at the end of this page. Refer to the [algorithm.yaml guide](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for detailed instructions.
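The field to update is the `url` of the `basemodel` module; a minimal sketch (paths taken from the example configuration at the end of this page):

```yaml
modules:
  - type: "basemodel"
    name: "RFNet"
    url: "./examples/cloud-robotics/single_task_learning/testalgorithm/rfnet/basemodel.py"
```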

---

## Step 4. Ianvs Execution and Results

Run Ianvs for benchmarking:
```shell
cd /ianvs/project
ianvs -f examples/cloud-robotics/single_task_learning/semantic-segmentation/benchmarkingjob.yaml
```
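When the job completes, Ianvs prints the leaderboard as a table (the `print_table` method configured below) and saves the ranking results under the job workspace defined in `benchmarkingjob.yaml`.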

---
## Example Configuration: `benchmarkingjob.yaml`

```yaml
benchmarkingjob:
  # job name of benchmarking; string type;
  name: "benchmarkingjob"
  # the url address of the job workspace that will hold the output of tests; string type;
  workspace: "/ianvs/single_task_learning/cloud"

  # the url address of the test environment configuration file; string type;
  # the file format supports yaml/yml;
  testenv: "./examples/cloud-robotics/single_task_learning/testenv/testenv.yaml"

  # the configuration of the test object
  test_object:
    # test type; string type;
    # currently the only supported value is "algorithms"; others will be added in succession.
    type: "algorithms"
    # test algorithm configuration files; list type;
    algorithms:
      # algorithm name; string type;
      - name: "rfnet_singletask_learning"
        # the url address of the test algorithm configuration file; string type;
        # the file format supports yaml/yml;
        url: "./examples/cloud-robotics/single_task_learning/testalgorithm/rfnet/algorithm.yaml"

  # the configuration of the ranking leaderboard
  rank:
    # rank the leaderboard by the metrics of each test case's evaluation, in the given order; list type;
    # the sorting priority follows the sequence of metrics in the list, from front to back;
    sort_by: [ { "accuracy": "descend" }, { "BWT": "descend" } ]

    # visualization configuration
    visualization:
      # mode of visualization in the leaderboard; string type;
      # There are quite a few possible dataitems in the leaderboard. Not all of them can be shown simultaneously on the screen.
      # The "selected_only" mode lets the user configure what is shown and what is not.
      mode: "selected_only"
      # method of visualization for selected dataitems; string type;
      # currently the options are as follows:
      # 1> "print_table": print the selected dataitems;
      method: "print_table"

    # selected dataitem configuration
    # The user can add the dataitems of interest in terms of "paradigms", "modules", "hyperparameters" and "metrics",
    # so that the selected columns will be shown.
    selected_dataitem:
      # currently the options are as follows:
      # 1> "all": select all paradigms in the leaderboard;
      # 2> paradigms in the leaderboard, e.g., "singletasklearning"
      paradigms: [ "all" ]
      # currently the options are as follows:
      # 1> "all": select all modules in the leaderboard;
      # 2> modules in the leaderboard, e.g., "basemodel"
      modules: [ "all" ]
      # currently the options are as follows:
      # 1> "all": select all hyperparameters in the leaderboard;
      # 2> hyperparameters in the leaderboard, e.g., "momentum"
      hyperparameters: [ "all" ]
      # currently the options are as follows:
      # 1> "all": select all metrics in the leaderboard;
      # 2> metrics in the leaderboard, e.g., "F1_SCORE"
      metrics: [ "accuracy", "task_avg_acc", "BWT", "FWT" ]

    # mode of saving selected and all dataitems in the workspace directory `./rank`; string type;
    # currently the options are as follows:
    # 1> "selected_and_all": save selected and all dataitems;
    # 2> "selected_only": save selected dataitems;
    save_mode: "selected_and_all"
```
---

## Example Configuration: `algorithm.yaml`

```yaml
algorithm:
  # paradigm type; string type; "singletasklearning" selects the STL paradigm
  paradigm_type: "singletasklearning"
  # the url address of the initial model; string type;
  initial_model_url: "./models/model.pth"

  modules:
    # the base model under test; its url points to the model implementation
    - type: "basemodel"
      name: "RFNet"
      url: "./examples/cloud-robotics/single_task_learning/testalgorithm/rfnet/basemodel.py"

      # hyperparameters to search over; list type;
      hyperparameters:
        - momentum:
            values:
              - 0.95
              - 0.5
        - learning_rate:
            values:
              - 0.1
        - epochs:
            values:
              - 2
```
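Note that Ianvs expands the hyperparameter value lists into separate test cases; with the values above, that yields two cases (momentum 0.95 or 0.5, each with learning_rate 0.1 and epochs 2), which are then ranked on the leaderboard by the metrics configured in `benchmarkingjob.yaml`.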