The idea of this project is to train arbitrary networks and convert them to run on the NCS stick. An example pedestrian detector is included that runs at 20 fps on a Raspberry Pi with one NCS. This model has, of course, far fewer parameters than the tiny-yolo model.
This part of the repo is based on https://github.com/rodrigo2019/keras-yolo2. Follow his readme, it's very clear. I haven't changed much of the project because I didn't have time to investigate further. The project is great and it works, but you need some patience to get a network working.
To see how models behave, I downloaded some YouTube videos similar to my target data and simply ran the network on them. I found this a much quicker way to tell whether the model is learning correctly or not, and I then adjust the config based on the results.
To retrain from scratch I've used Adam with a learning rate of 1e-3. Using 1e-2, as the paper suggests, makes the optimization diverge early on. For datasets such as OpenImages I found that no_object_scale=5 worked well and produced fewer false positives. I've done some experiments with different combinations of these parameters and they change the results drastically. Concretely, this was just running one or two epochs with debug=true to see how the network reacted to different parameters. But this depends on the application, so I don't think there is any general advice. Please feel free to open issues to share your experiments.
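As a rough illustration, these knobs live in the "train" section of config.json (key names follow the keras-yolo2 config format; treat the values as a starting point for your own experiments, not a recommendation):

```json
"train": {
    "learning_rate":   1e-3,
    "no_object_scale": 5.0,
    "debug":           true
}
```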
Please read rodrigo's readme before trying to retrain. The NCS conversion is based on this repo: https://github.com/bastiaanv/Yolov2-tiny-tf-NCS. I've made some changes but the general idea is the same: define the network in TensorFlow, assign the weights, and then compile for the NCS.
Create a conda environment using the environment.yml file.
python gen_anchors.py -c config.json
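For context, the anchor generation step follows the YOLOv2 recipe: k-means over the training boxes' widths and heights, with 1 - IoU as the distance, to pick anchor shapes that match your data. A stripped-down sketch of the idea (function names are mine, not the script's; read gen_anchors.py for the real implementation):

```python
import numpy as np

def iou_wh(boxes, anchors):
    # IoU between (N, 2) box w/h pairs and (k, 2) anchor w/h pairs,
    # as if all boxes shared the same top-left corner.
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] \
            + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, iters=100, seed=0):
    rng = np.random.RandomState(seed)
    # Initialize anchors with k random boxes.
    anchors = boxes[rng.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each box to the anchor with highest IoU (lowest 1 - IoU).
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        # Move each anchor to the mean w/h of its assigned boxes.
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors
```

The point of the IoU distance (rather than Euclidean) is that large and small boxes are clustered by shape, not by absolute size difference.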
python train.py -c config.json
python predict.py -c config.json -w ncs_darknetreference/openimages_best4.h5 -i videos/uba.mp4
Each model that is converted to the NCS has its own folder; see how ncs_darknetreference/ is structured, for example.
python ncs_darknetreference/save_keras_graph.py
This script loads the weights from ncs_darknetreference/config.json into the DarknetReferenceNet class. Read the class source code for the details, but this is what it does: the weights are extracted as numpy arrays and then assigned to the TensorFlow graph. Batch norm is "fused" into the convolution weights, which seems to work fine. The process is quite awkward, but it's how I got it working. For some reason I couldn't simplify it, so feel free to make changes here.
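The batch-norm fusion amounts to folding BN's per-channel affine transform into the preceding convolution's weights and bias. A minimal numpy sketch of the arithmetic (function name and epsilon are mine, not from the repo):

```python
import numpy as np

def fuse_batchnorm(W, b, gamma, beta, mean, var, eps=1e-3):
    """Fold a BatchNorm layer into the preceding conv.

    W: conv kernel with the output channel on the last axis, shape (..., c_out)
    b: conv bias of shape (c_out,); pass zeros if the conv has no bias
    gamma, beta, mean, var: BN parameters, each of shape (c_out,)
    """
    scale = gamma / np.sqrt(var + eps)          # per-output-channel multiplier
    W_fused = W * scale                         # broadcasts over the last axis
    b_fused = (b - mean) * scale + beta
    return W_fused, b_fused
```

Because convolution is linear in its weights, running the fused conv gives the same output as conv followed by BN, so the BN layers can be dropped entirely from the NCS graph.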
python ncs_darknetreference/predict_converted.py
Before freezing the graph, delete any previously generated models and checkpoint files in the folder.
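For example (assuming the default NN.* names produced by save_keras_graph.py; adjust the paths if yours differ):

```shell
rm -f ncs_darknetreference/NN.pb \
      ncs_darknetreference/NN.ckpt* \
      ncs_darknetreference/frozen.pb
```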
freeze_graph --input_graph=ncs_darknetreference/NN.pb --input_binary=true --input_checkpoint=ncs_darknetreference/NN.ckpt --output_graph=ncs_darknetreference/frozen.pb --output_node_names=Output
docker run -v /mnt/yolo_retrain:/mnt/yolo_retrain -i -t ncsdk
mvNCCompile -s 12 ncs_darknetreference/frozen.pb -in=Input -on=Output -o ncs_darknetreference/graph -is 256 256
I've installed the API in a virtualenv; this setup is only because I couldn't make everything work in Docker. This step also works on a Raspberry Pi if you have all the required dependencies.
source /opt/movidius/virtualenv-python/bin/activate
python ncs_smallmdl/ncs_predict.py
The environment pins tensorflow=1.9.0 and keras-gpu=2.1.5.