diff --git a/.gitignore b/.gitignore
index 6ec6124..f2cd3fc 100644
--- a/.gitignore
+++ b/.gitignore
@@ -27,4 +27,4 @@
 Tracking/AlphaTracker/train_yolo/darknet/backup/
 Tracking/AlphaTracker/train_yolo/darknet/darknet53.conv.74
 Tracking/AlphaTracker/train_yolo/darknet/train.sh
-main_ui
+/main_ui
diff --git a/Manual/BehavioralClustering.md b/Manual/BehavioralClustering.md
index 5720976..8ea0213 100644
--- a/Manual/BehavioralClustering.md
+++ b/Manual/BehavioralClustering.md
@@ -13,8 +13,16 @@ The main process of hierarchical clustering list below can be found in `./fft_m
-## Run clustering algorithm
+## Run by GUI (recommended for non-CS users)
+
+![AlphaTracker GUI behavior clustering page](media/main_ui/behavior.png)
+Please visit our video tutorial for behavior clustering at YouTube or BiliBili.
+
+## Or run by command line
 ### Step 1. Configuration
 Set the Behavioral Clustering folder as the current directory.
diff --git a/Manual/Installation.md b/Manual/Installation.md
index 15dd1af..c3c86b3 100644
--- a/Manual/Installation.md
+++ b/Manual/Installation.md
@@ -4,22 +4,30 @@
 Download the AlphaTracker repository and rename the main folder from `AlphaTracker-main` to `Alphatracker`. Or you can use `git clone` to clone AlphaTracker repository.
-## Install Conda
+## Install Anaconda
-This project is tested in conda env in linux, and thus that is the recommended environment. To install conda, please follow the instructions from the [conda website](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html) With conda installed, please set up the environment with the following steps.
+This project is tested in a conda environment on Linux, which is therefore the recommended setup. To install conda, please follow the instructions on the [conda website](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html). With conda installed, please set up the environment with the following steps. **Please install Anaconda (not Miniconda) if you need to use the AlphaTracker GUI.**
-### NVIDIA driver
+## NVIDIA driver
-Please makse sure that your NVIDIA driver version >= 450.
+Please make sure that your NVIDIA driver version is >= 450.
 You can download Nvidia driver for your computer at [nvidia website](https://www.nvidia.com/Download/index.aspx).
-### Install AlphaTracker
+## Install AlphaTracker
+### By GUI (recommended for non-CS users)
+![AlphaTracker GUI installation page](media/main_ui/install2.png)
+Please visit our video tutorial for installation at YouTube or BiliBili.
+
+### Or by command line
 In your command window, locate the terminal prompt. Open this application. Then, find the folder that contains the `AlphaTracker` repository that you just downloaded. Then inside the terminal window, change the directory as follows: `cd /path/to/AlphaTracker`. Then run the following command:
 ```bash
-bash install.sh
+bash scripts/install.sh
 ```
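+
+Once the script finishes, a quick sanity check can confirm the environment works (this sketch assumes the environment created by `install.sh` is named `alphatracker`, as in the bundled scripts, and that a CUDA-capable GPU is visible):
+```bash
+conda activate alphatracker
+python -c "import torch; print(torch.cuda.is_available())"
+```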
diff --git a/Manual/Tracking.md b/Manual/Tracking.md
index a710b38..ce6c184 100644
--- a/Manual/Tracking.md
+++ b/Manual/Tracking.md
@@ -1,9 +1,57 @@
-# 01 Tracking
+# Tracking
+## By GUI (recommended for non-CS users)
+![AlphaTracker GUI tracking page](media/main_ui/track.png)
+Please visit our video tutorial for tracking at YouTube or BiliBili.
+
+
+## Or by command line
+### Step 1. Configuration
+
+Before tracking, you need to change the parameters in [Tracking/AlphaTracker/setting.py](../Tracking/AlphaTracker/setting.py) (blue block in Figure 2). The meaning of
+the parameters can be found in the comments.
+
+We will use a trained weight to track a demo video by default.
+
+### Step 2. Running the code
+
+Change directory to the [alphatracker folder](../Tracking/AlphaTracker/) and run the following command line to do tracking:
+```bash
+# if your current virtual environment is not alphatracker,
+# run this command first: conda activate alphatracker
+python track.py
+```
+
+### General Notes about the Parameters:
+1. Remember not to include any spaces or parentheses in your file names. Also, file names are case-sensitive.
+2. For training, the parameter `num_mouse` must include the same number of items as the number of json files
+that have annotated data. For example, if you have one json file with annotated data for 3 animals, then
+```num_mouse=[3]```; if you have two json files with annotated data for 3 animals, then ```num_mouse=[3,3]``` (see the example after this list).
+3. ```sppe_lr``` is the learning rate for the SPPE network. If your network is not performing well, you can lower this
+number and try retraining.
+4. ```sppe_epoch``` is the number of training epochs that the SPPE network runs. More epochs will take longer but
+can potentially lead to better performance.
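+
+For example, with two json files that each annotate 3 animals, the relevant lines in `setting.py` might look like the sketch below (the values are illustrative, not recommendations):
+```python
+num_mouse = [3, 3]   # one entry per json file with annotated data
+sppe_lr = 1e-4       # lower this if the pose estimator underperforms
+sppe_epoch = 10      # more epochs train longer but can improve accuracy
+```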
-## Training (Optional)
+
+# Training (Optional)
 We have provided pretrained models. However, if you want to train your own models on your custom dataset, you can refer to the following steps.
+## By GUI (recommended for non-CS users)
+![AlphaTracker GUI training page](media/main_ui/train.png)
+Please visit our video tutorial for training at YouTube or BiliBili.
+
+## Or by command line
 ### Step 1. Data Preparation
 
 Labeled data is required to train the model. The code would read RGB images and json files of
@@ -47,37 +95,4 @@
 https://drive.google.com/file/d/1TYIXYYIkDDQQ6KRPqforrup_rtS0YetR/view?usp=sharing
 There is a demo video in [Tracking/Alphatracker/data](../Tracking/Alphatracker/data) that you can use for tracking. If you want to use the trained network we provide to track this video set `exp_name=demo` in the [Tracking/AlphaTracker/setting.py](../Tracking/AlphaTracker/setting.py)
-## Tracking
-
-### Step 1. Configuration
-
-Before tracking, you need to change the parameters in [Tracking/AlphaTracker/setting.py](../Tracking/AlphaTracker/setting.py) (blue block in Figure 2). The meaning of
-the parameters can be found in the comments.
-
-We will use a trained weight to track a demo video by default.
-
-### Step 2. Running the code
-
-Change directory to the [alphatracker folder](../Tracking/AlphaTracker/) and run the following command line to do tracking:
-```bash
-# if your current virtual environment is not alphatracker
-# run this command first: conda activate alphatracker
-python track.py
-```
-
-
-
-### General Notes about the Parameters:
-1. Remember not to include any spaces or parentheses in your file names. Also, file names are case-sensitive.
-2. For training the parameter num_mouse must include the same number of items as the number of json files
-that have annotated data. For example if you have one json file with annotated data for 3 animals then
-```num_mouse=[3]``` if you have two json files with annoted data for 3 animals then ```num_mouse=[3,3]```.
-3. ```sppe_lr``` is the learning rate for the SAPE network. If your network is not performing well you can lower this
-number and try retraining
-4. ```sppe_epoch``` is the number of training epochs that the SAPE network does. More epochs will take longer but
-can potentially lead to better performance.
-
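+
+Note that `track.py` now reads `exp_name_track` (rather than `exp_name`) to decide which trained weights to load, so the training and tracking experiment names can differ. A sketch of the relevant `setting.py` lines (values are illustrative):
+```python
+exp_name = "demo"        # the name of the training experiment
+exp_name_track = "demo"  # the trained experiment whose weights track.py uses
+```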
diff --git a/Manual/UI.md b/Manual/UI.md
index 8d0b48b..fb91457 100644
--- a/Manual/UI.md
+++ b/Manual/UI.md
@@ -4,17 +4,27 @@
 This interface is browser-based. We recommend using `Google Chrome` as the browser.
 Pre-installed Python3 is required since this package includes Python scripts.
-### Running
+## Running
+### By GUI (recommended for non-CS users)
+![AlphaTracker GUI open WebUI page](media/main_ui/vis_results.png)
-Change your working directory to [UI/](../UI) by running `cd ./UI`. Then run `python server.py` in command window in the unzipped folder. A window should appear in the user's browser. Then click `html/`. From there, select a program you want to run. `cluster.html` is the Cluster UI and `curate.html` is the Tracking UI.
-
-
+### Or by command line
+Change your working directory to [UI/](../UI) by running `cd ./UI`. Then run `python server.py` in a command window in the unzipped folder.
 ## Tracking UI
+A window should appear in the user's browser. Then click `html/`. From there, select a program you want to run. `cluster.html` is the Cluster UI and `curate.html` is the Tracking UI.
+
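+If no browser window opens automatically, the pages can be reached directly; the port number comes from `UI/server.py`:
+```bash
+cd ./UI
+python server.py
+# then browse to http://127.0.0.1:8000/html/curate.html (Tracking UI)
+# or to http://127.0.0.1:8000/html/cluster.html (Cluster UI)
+```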
+ + ### Import data diff --git a/Manual/media/main_ui/behavior.png b/Manual/media/main_ui/behavior.png new file mode 100644 index 0000000..5140ecf Binary files /dev/null and b/Manual/media/main_ui/behavior.png differ diff --git a/Manual/media/main_ui/install2.png b/Manual/media/main_ui/install2.png new file mode 100644 index 0000000..6d4649f Binary files /dev/null and b/Manual/media/main_ui/install2.png differ diff --git a/Manual/media/main_ui/main.png b/Manual/media/main_ui/main.png new file mode 100644 index 0000000..6d2a19a Binary files /dev/null and b/Manual/media/main_ui/main.png differ diff --git a/Manual/media/main_ui/main_behavior.png b/Manual/media/main_ui/main_behavior.png new file mode 100644 index 0000000..9b84e6f Binary files /dev/null and b/Manual/media/main_ui/main_behavior.png differ diff --git a/Manual/media/main_ui/main_install.png b/Manual/media/main_ui/main_install.png new file mode 100644 index 0000000..2d15bc5 Binary files /dev/null and b/Manual/media/main_ui/main_install.png differ diff --git a/Manual/media/main_ui/main_result.png b/Manual/media/main_ui/main_result.png new file mode 100644 index 0000000..955f546 Binary files /dev/null and b/Manual/media/main_ui/main_result.png differ diff --git a/Manual/media/main_ui/main_track.png b/Manual/media/main_ui/main_track.png new file mode 100644 index 0000000..ab77645 Binary files /dev/null and b/Manual/media/main_ui/main_track.png differ diff --git a/Manual/media/main_ui/main_train.png b/Manual/media/main_ui/main_train.png new file mode 100644 index 0000000..2f120e1 Binary files /dev/null and b/Manual/media/main_ui/main_train.png differ diff --git a/Manual/media/main_ui/main_ui.gif b/Manual/media/main_ui/main_ui.gif new file mode 100644 index 0000000..db55b60 Binary files /dev/null and b/Manual/media/main_ui/main_ui.gif differ diff --git a/Manual/media/main_ui/track.png b/Manual/media/main_ui/track.png new file mode 100644 index 0000000..18cb22b Binary files /dev/null and b/Manual/media/main_ui/track.png differ diff --git a/Manual/media/main_ui/train.png b/Manual/media/main_ui/train.png new file mode 100644 index 0000000..de59ac1 Binary files /dev/null and b/Manual/media/main_ui/train.png differ diff --git a/Manual/media/main_ui/vis_results.png b/Manual/media/main_ui/vis_results.png new file mode 100644 index 0000000..38dbb39 Binary files /dev/null and b/Manual/media/main_ui/vis_results.png differ diff --git a/README.md b/README.md index 91b0829..68d74d9 100644 --- a/README.md +++ b/README.md @@ -3,12 +3,17 @@

-[AlphaTracker](https://github.com/MVIG-SJTU/AlphaTracker) is a multi-animal tracking and behavioral analysis tool which incorporates **multi-animal tracking**, **pose estimation** and **unsupervised behavioral clustering** to empower system neuroscience research. Alphatracker achieves the state-of-art accuracy of multi-animal tracking which lays the foundation for stringent biological studies. Moreover, the minimum requirement for hardware (regular webcams) and efficient training procedure allows readily adoption by most neuroscience labs.
+[AlphaTracker](https://github.com/MVIG-SJTU/AlphaTracker) is a multi-animal tracking and behavioral analysis tool which incorporates **multi-animal tracking**, **pose estimation** and **unsupervised behavioral clustering** to empower systems neuroscience research. AlphaTracker achieves state-of-the-art accuracy in multi-animal tracking, which lays the foundation for stringent biological studies. Moreover, its minimal hardware requirements (regular webcams) and efficient training procedure allow ready adoption by most neuroscience labs. **We also provide a simple GUI for most procedures in AlphaTracker, so as to facilitate research for non-CS labmates and students.**

Architecture and Pipeline of AlphaTracker
+
+![Illustration of AlphaTracker main GUI](Manual/media/main_ui/main_ui.gif)
+
## Instructions diff --git a/Tracking/AlphaTracker/PoseFlow/tracker-general-fixNum-newSelect-noOrb.py b/Tracking/AlphaTracker/PoseFlow/tracker-general-fixNum-newSelect-noOrb.py index 0416b8e..bba6d43 100644 --- a/Tracking/AlphaTracker/PoseFlow/tracker-general-fixNum-newSelect-noOrb.py +++ b/Tracking/AlphaTracker/PoseFlow/tracker-general-fixNum-newSelect-noOrb.py @@ -36,8 +36,6 @@ from functools import cmp_to_key import time -# from ..setting import pose_pair - def display_pose_cv2(imgdir, visdir, tracked, cmap, args): diff --git a/Tracking/AlphaTracker/setting.py b/Tracking/AlphaTracker/setting.py index dd78663..d362404 100644 --- a/Tracking/AlphaTracker/setting.py +++ b/Tracking/AlphaTracker/setting.py @@ -16,7 +16,8 @@ "./data/sample_annotated_data/demo/train9.json" ] # list of paths to the json files that contain labels of the images for training num_mouse = [2] # the number of mouse in the images in each image folder path -exp_name = "demo" # the name of the experiment +exp_name = "demo" # the name of the training experiment +exp_name_track = "demo" # the exp name of the tracking experiment, denoting which trained results to use num_pose = 4 # number of the pose that is labeled, remember to change self.nJoints in train_sppe/src/utils/dataset/coco.py pose_pair = [[0, 1], [0, 2], [0, 3]] train_val_split = ( @@ -70,71 +71,3 @@ AlphaTracker_root = os.path.abspath(AlphaTracker_root) result_folder = os.path.abspath(result_folder) - -with open('train.cfg', 'r') as f: - dat = f.read() - if not dat: - print(f'error, train.cfg is empty') - try: - dict_state = eval(dat) - except Exception as e: - print(f'load train.cfg Exception: {e}') - print(dict_state) - -gpu_id = int(dict_state['gpu_id']) # the id of gpu that will be used - -# data related settings -image_root_list = [dict_state['image_root_list']] # list of image folder paths to the RGB images for training -json_file_list = [dict_state['json_file_list']] # list of paths to the json files that contain labels of the images for training -num_mouse = [int(dict_state['num_mouse'])] # the number of mouse in the images in each image folder path -exp_name = dict_state['exp_name'] # the name of the experiment -num_pose = int(dict_state['num_pose']) # number of the pose that is labeled, remember to change self.nJoints in train_sppe/src/utils/dataset/coco.py - -pose_pair = np.array([[float(j) for j in i.split('-')] for i in dict_state['pose_pair'].split(',')]) -print('pose pair is:',pose_pair) -train_val_split = float(dict_state['train_val_split']) # ratio of data that used to train model, the rest will be used for validation -image_suffix = dict_state['image_suffix'] # suffix of the image, png or jpg - - -# training hyperparameter setting -# Protip: if your training does not give good enough tracking you can lower lr and increase epoch number -# but lowering the lr too much can be bad for tracking quality as well. 
-sppe_lr = float(dict_state['sppe_lr']) -sppe_epoch = int(dict_state['sppe_epoch']) -sppe_pretrain = dict_state['sppe_pretrain'] -sppe_batchSize = int(dict_state['sppe_batchSize']) -yolo_lr = float(dict_state['yolo_lr']) -yolo_iter = int(dict_state['yolo_iter']) ## if use pretrained model please make sure yolo_iter to be large enough to guarantee finetune is done -yolo_pretrain = dict_state['yolo_pretrain'] # './train_yolo/darknet/darknet53.conv.74' -yolo_batchSize = int(dict_state['yolo_batchSize']) - - -with open('track.cfg', 'r') as f: - dat = f.read() - if not dat: - print(f'error, track.cfg is empty') - try: - dict_state2 = eval(dat) - except Exception as e: - print(f'load track.cfg Exception: {e}') - print(dict_state2) - - -# demo video setting -# note video_full_path is for track.py, video_paths is for track_batch.py -# video_full_path is the path to the video that will be tracked -video_full_path = dict_state2['video_full_path'] -video_paths = [ - dict_state2['video_full_path'], - ] # make sure video names are different from each other -start_frame = int(dict_state2['start_frame']) # id of the start frame of the video -end_frame = int(dict_state2['end_frame']) # id of the last frame of the video -max_pid_id_setting = int(dict_state2['max_pid_id_setting']) # number of mice in the video -result_folder = dict_state2['result_folder'] # path to the folder used to save the result -remove_oriFrame = int(dict_state2['remove_oriFrame']) # whether to remove the original frame that generated from video -vis_track_result = int(dict_state2['vis_track_result']) - -# weights and match are parameter of tracking algorithm -# following setting should work fine, no need to change -weights = dict_state2['weights'] -match = int(dict_state2['match']) diff --git a/Tracking/AlphaTracker/setting_ui.py b/Tracking/AlphaTracker/setting_ui.py new file mode 100644 index 0000000..27e30f9 --- /dev/null +++ b/Tracking/AlphaTracker/setting_ui.py @@ -0,0 +1,86 @@ +import os +import numpy as np + +# code path setting +AlphaTracker_root = "./" + +with open('train.cfg', 'r') as f: + dat = f.read() + if not dat: + print(f'error, train.cfg is empty') + try: + dict_state = eval(dat) + except Exception as e: + print(f'load train.cfg Exception: {e}') + print(dict_state) + +gpu_id = int(dict_state['gpu_id']) # the id of gpu that will be used + +# data related settings +image_root_list = [dict_state['image_root_list']] # list of image folder paths to the RGB images for training +json_file_list = [dict_state['json_file_list']] # list of paths to the json files that contain labels of the images for training +num_mouse = [int(dict_state['num_mouse'])] # the number of mouse in the images in each image folder path +exp_name = dict_state['exp_name'] # the name of the experiment +num_pose = int(dict_state['num_pose']) # number of the pose that is labeled, remember to change self.nJoints in train_sppe/src/utils/dataset/coco.py + +pose_pair = np.array([[float(j) for j in i.split('-')] for i in dict_state['pose_pair'].split(',')]) +print('pose pair is:',pose_pair) +train_val_split = float(dict_state['train_val_split']) # ratio of data that used to train model, the rest will be used for validation +image_suffix = dict_state['image_suffix'] # suffix of the image, png or jpg + + +# training hyperparameter setting +# Protip: if your training does not give good enough tracking you can lower lr and increase epoch number +# but lowering the lr too much can be bad for tracking quality as well. 
+sppe_lr = float(dict_state['sppe_lr']) +sppe_epoch = int(dict_state['sppe_epoch']) +sppe_pretrain = dict_state['sppe_pretrain'] +sppe_batchSize = int(dict_state['sppe_batchSize']) +yolo_lr = float(dict_state['yolo_lr']) +yolo_iter = int(dict_state['yolo_iter']) ## if use pretrained model please make sure yolo_iter to be large enough to guarantee finetune is done +yolo_pretrain = dict_state['yolo_pretrain'] # './train_yolo/darknet/darknet53.conv.74' +yolo_batchSize = int(dict_state['yolo_batchSize']) + + +with open('track.cfg', 'r') as f: + dat = f.read() + if not dat: + print(f'error, track.cfg is empty') + try: + dict_state2 = eval(dat) + except Exception as e: + print(f'load track.cfg Exception: {e}') + print(dict_state2) + + +# demo video setting +# note video_full_path is for track.py, video_paths is for track_batch.py +# video_full_path is the path to the video that will be tracked +video_full_path = dict_state2['video_full_path'] +video_paths = [ + dict_state2['video_full_path'], + ] # make sure video names are different from each other +start_frame = int(dict_state2['start_frame']) # id of the start frame of the video +end_frame = int(dict_state2['end_frame']) # id of the last frame of the video +max_pid_id_setting = int(dict_state2['max_pid_id_setting']) # number of mice in the video +result_folder = dict_state2['result_folder'] # path to the folder used to save the result +remove_oriFrame = int(dict_state2['remove_oriFrame']) # whether to remove the original frame that generated from video +vis_track_result = int(dict_state2['vis_track_result']) + +# weights and match are parameter of tracking algorithm +# following setting should work fine, no need to change +weights = dict_state2['weights'] +match = int(dict_state2['match']) + +exp_name_track = dict_state2['exp_name_track'] + +# the following code is for self-check and reformat +assert len(image_root_list) == len( + json_file_list +), "the length of image_root_list and json_file_list should be the same" +for i in range(len(image_root_list)): + image_root_list[i] = os.path.abspath(image_root_list[i]) + json_file_list[i] = os.path.abspath(json_file_list[i]) + +AlphaTracker_root = os.path.abspath(AlphaTracker_root) +result_folder = os.path.abspath(result_folder) diff --git a/Tracking/AlphaTracker/track.cfg b/Tracking/AlphaTracker/track.cfg index ecb81f4..77f4dbb 100644 --- a/Tracking/AlphaTracker/track.cfg +++ b/Tracking/AlphaTracker/track.cfg @@ -1 +1 @@ -{'video_full_path': '/home/flexiv/AlphaTracker/Tracking/AlphaTracker/data/demo.mp4', 'start_frame': '0', 'end_frame': '300', 'max_pid_id_setting': '2', 'result_folder': './track_result/', 'remove_oriFrame': '0', 'vis_track_result': '1', 'weights': '0 6 0 0 0 0 ', 'match': '0'} \ No newline at end of file +{'video_full_path': '/home/flexiv/AlphaTracker/Tracking/AlphaTracker/data/demo.mp4', 'start_frame': '0', 'end_frame': '300', 'max_pid_id_setting': '2', 'result_folder': './track_result/', 'remove_oriFrame': '0', 'vis_track_result': '1', 'weights': '0 6 0 0 0 0 ', 'match': '0', 'exp_name_track': 'demo'} \ No newline at end of file diff --git a/Tracking/AlphaTracker/track.py b/Tracking/AlphaTracker/track.py index 16f8915..c1efdd6 100644 --- a/Tracking/AlphaTracker/track.py +++ b/Tracking/AlphaTracker/track.py @@ -1,26 +1,43 @@ import cv2 import os import time - +import sys from tqdm import tqdm -from setting import ( - AlphaTracker_root, - exp_name, - num_pose, - gpu_id, - sppe_epoch, - video_full_path, - start_frame, - end_frame, - max_pid_id_setting, - result_folder, - weights, 
- match, - remove_oriFrame, - vis_track_result, -) - +if len(sys.argv)==1: + from setting import ( + AlphaTracker_root, + exp_name_track, + num_pose, + gpu_id, + sppe_epoch, + video_full_path, + start_frame, + end_frame, + max_pid_id_setting, + result_folder, + weights, + match, + remove_oriFrame, + vis_track_result, + ) +elif len(sys.argv)==2 and sys.argv[1]=='ui': + from setting_ui import ( + AlphaTracker_root, + exp_name_track, + num_pose, + gpu_id, + sppe_epoch, + video_full_path, + start_frame, + end_frame, + max_pid_id_setting, + result_folder, + weights, + match, + remove_oriFrame, + vis_track_result, + ) # demo video setting video_image_save_path_base = result_folder + "/oriFrameFromVideo/" @@ -34,20 +51,20 @@ # automatic setting # general data setting -ln_image_dir = AlphaTracker_root + "/data/" + exp_name + "/color_image/" +ln_image_dir = AlphaTracker_root + "/data/" + exp_name_track + "/color_image/" # sppe data setting -train_h5_file = sppe_root + "/data/" + exp_name + "/data_newLabeled_01_train.h5" -val_h5_file = sppe_root + "/data/" + exp_name + "/data_newLabeled_01_val.h5" +train_h5_file = sppe_root + "/data/" + exp_name_track + "/data_newLabeled_01_train.h5" +val_h5_file = sppe_root + "/data/" + exp_name_track + "/data_newLabeled_01_val.h5" # yolo data setting -color_img_prefix = "data/" + exp_name + "/color/" -file_list_root = "data/" + exp_name + "/" +color_img_prefix = "data/" + exp_name_track + "/color/" +file_list_root = "data/" + exp_name_track + "/" yolo_image_annot_root = darknet_root + "/" + color_img_prefix train_list_file = darknet_root + "/" + file_list_root + "/" + "train.txt" val_list_file = darknet_root + "/" + file_list_root + "/" + "valid.txt" -valid_image_root = darknet_root + "/data/" + exp_name + " /valid_image/" +valid_image_root = darknet_root + "/data/" + exp_name_track + " /valid_image/" if not os.path.exists(result_folder): os.makedirs(result_folder) @@ -104,10 +121,10 @@ video_image_save_path, result_folder, darknet_root, - exp_name, + exp_name_track, darknet_root, sppe_root, - exp_name, + exp_name_track, sppe_epoch, ) print(demo_cmd) @@ -136,7 +153,7 @@ match, weights, result_folder, - exp_name, + exp_name_track, max_pid_id_setting, match, weights.replace(" ", ""), diff --git a/Tracking/AlphaTracker/train.py b/Tracking/AlphaTracker/train.py index 0dff2f1..ad5cc86 100644 --- a/Tracking/AlphaTracker/train.py +++ b/Tracking/AlphaTracker/train.py @@ -5,26 +5,47 @@ from importlib import reload import data_utils -from setting import ( - AlphaTracker_root, - image_root_list, - json_file_list, - num_mouse, - exp_name, - num_pose, - train_val_split, - image_suffix, - gpu_id, - sppe_lr, - sppe_epoch, - yolo_lr, - yolo_iter, - sppe_pretrain, - yolo_pretrain, - yolo_batchSize, - sppe_batchSize, -) +if len(sys.argv)==1: + from setting import ( + AlphaTracker_root, + image_root_list, + json_file_list, + num_mouse, + exp_name, + num_pose, + train_val_split, + image_suffix, + gpu_id, + sppe_lr, + sppe_epoch, + yolo_lr, + yolo_iter, + sppe_pretrain, + yolo_pretrain, + yolo_batchSize, + sppe_batchSize, + ) +elif len(sys.argv)==2 and sys.argv[1]=='ui': + from setting_ui import ( + AlphaTracker_root, + image_root_list, + json_file_list, + num_mouse, + exp_name, + num_pose, + train_val_split, + image_suffix, + gpu_id, + sppe_lr, + sppe_epoch, + yolo_lr, + yolo_iter, + sppe_pretrain, + yolo_pretrain, + yolo_batchSize, + sppe_batchSize, + ) class cd: """Context manager for changing the current working directory""" diff --git a/UI/html/cluster.html b/UI/html/cluster.html 
index 720a030..67ce559 100644 --- a/UI/html/cluster.html +++ b/UI/html/cluster.html @@ -2,7 +2,7 @@ - AlphaMice-ClusterView + AlphaTracker-ClusterView @@ -186,7 +186,7 @@ padding:9px 0 0 0; z-index: 50; float: left; - ">AlphaMice + ">AlphaTracker
Tracking
diff --git a/UI/html/curate.html b/UI/html/curate.html index 99f08c2..00eefaf 100644 --- a/UI/html/curate.html +++ b/UI/html/curate.html @@ -2,7 +2,7 @@ - AlphaMice-Curate + AlphaTracker-Curate @@ -124,7 +124,7 @@ padding:9px 0 0 0; z-index: 50; float: left; - ">AlphaMice + ">AlphaTracker
Tracking
diff --git a/UI/server.py b/UI/server.py
index 088b55c..520ad75 100644
--- a/UI/server.py
+++ b/UI/server.py
@@ -18,7 +18,11 @@ def end_headers(self):
 if __name__ == "__main__":
     port = 8000
-    with socketserver.TCPServer(("", port), Handler) as httpd:
+    # create without binding so allow_reuse_address takes effect,
+    # then bind and activate manually
+    with socketserver.TCPServer(("", port), Handler, False) as httpd:
+        httpd.allow_reuse_address = True
+        httpd.server_bind()
+        httpd.server_activate()
         print("Serving at: http://127.0.0.1:{}".format(port))
         webbrowser.open_new("http://127.0.0.1:{}".format(port))
         httpd.serve_forever()
diff --git a/res/4_120x74.png b/res/4_120x74.png
index f73a2ca..69f7e68 100644
Binary files a/res/4_120x74.png and b/res/4_120x74.png differ
diff --git a/res/5_120x74.png b/res/5_120x74.png
index 9024d41..f73a2ca 100644
Binary files a/res/5_120x74.png and b/res/5_120x74.png differ
diff --git a/res/6_120x74.png b/res/6_120x74.png
new file mode 100644
index 0000000..9024d41
Binary files /dev/null and b/res/6_120x74.png differ
diff --git a/scripts/behavior.sh b/scripts/behavior.sh
index 479200f..982d954 100644
--- a/scripts/behavior.sh
+++ b/scripts/behavior.sh
@@ -15,6 +15,5 @@
 bash run_all.sh
 python fft_main_sep_twoMiceInteract.py
-
 echo behavior over
diff --git a/scripts/track.sh b/scripts/track.sh
index ca158c7..beda431 100644
--- a/scripts/track.sh
+++ b/scripts/track.sh
@@ -11,7 +11,7 @@
 conda activate alphatracker
 cd ./Tracking/AlphaTracker/
-python track.py
+python track.py ui
 echo track over
diff --git a/scripts/train.sh b/scripts/train.sh
index d558d47..7af1ba4 100644
--- a/scripts/train.sh
+++ b/scripts/train.sh
@@ -11,7 +11,7 @@
 conda activate alphatracker
 cd ./Tracking/AlphaTracker/
-python train.py
+python train.py ui
 echo train over
diff --git a/scripts/vis_results.sh b/scripts/vis_results.sh
new file mode 100644
index 0000000..83e49ac
--- /dev/null
+++ b/scripts/vis_results.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+# if the result button is clicked, this script will be called
+
+echo vis_results start
+
+
+. ~/anaconda3/etc/profile.d/conda.sh
+
+conda activate alphatracker
+
+cd ./UI
+
+python server.py
+
+
+echo vis_results over
+
diff --git a/state.txt b/state.txt
index fd9ba2d..d949225 100644
--- a/state.txt
+++ b/state.txt
@@ -1 +1 @@
-{'btn0': 1, 'btn1': 1, 'btn2': 1, 'btn3': 1, 'btn4': 0}
\ No newline at end of file
+{'btn0': 0, 'btn1': 0, 'btn2': 0, 'btn3': 0, 'btn4': 0}
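
Usage note for the `ui` entry points added above (paths taken from the scripts in this patch): `train.py` and `track.py` keep reading the hand-edited `setting.py` by default, and switch to the GUI-written configs, via `setting_ui.py`, only when passed the `ui` argument.

```bash
cd ./Tracking/AlphaTracker/
python train.py      # configuration from setting.py
python train.py ui   # configuration from train.cfg, parsed by setting_ui.py
python track.py ui   # configuration from track.cfg, parsed by setting_ui.py
```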