Modified from DenseDepth and Android Object Detection
- Ported the source from TensorFlow 1.x to TensorFlow 2.2
- Converted the pre-trained Keras model (.h5) to TensorFlow 2.2 (.pb) and TensorFlow Lite (.tflite)
- Android app running the TensorFlow Lite quantized model
Difficulties in TFLite conversion:
- The Keras model can't be loaded with TFLiteConverter.from_keras_model() because custom_objects parsing isn't supported
- The Keras model can't be loaded with TFLiteConverter.from_saved_model() because the dynamic input shape [None, None, None, None] is not supported
Solution steps:
- After loading the .pb, convert its input shape to the fixed shape [1, 480, 640, 3] in the session graph.
- Resave the .pb from the session graph. (Very important! Converting and inferring directly without resaving won't succeed.)
- Load the .pb again for inference.
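The fixed-shape resave trick above can be sketched in TF2 as follows. This is a minimal illustration, not the repo's exact convert.py: the tiny Conv2D model stands in for the real nyu model, and the directory name "fixed_model" is arbitrary.

```python
import tensorflow as tf

# Stand-in model with a dynamic input shape, mirroring the pre-trained
# nyu model whose SavedModel signature is [None, None, None, None].
model = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(8, 3, padding="same", input_shape=(None, None, 3))]
)

# Re-trace the graph with the fixed input shape [1, 480, 640, 3]
# instead of the dynamic one.
@tf.function(input_signature=[tf.TensorSpec([1, 480, 640, 3], tf.float32)])
def serving(x):
    return model(x)

# Resave a .pb SavedModel carrying the fixed-shape signature, then reload:
# from_saved_model now sees a static input shape and can convert.
tf.saved_model.save(model, "fixed_model",
                    signatures=serving.get_concrete_function())
converter = tf.lite.TFLiteConverter.from_saved_model("fixed_model")
tflite_bytes = converter.convert()
```

After this resave step, both TFLite conversion and inference succeed on the reloaded model.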
- Install TensorFlow 2.2 and Python 3.6
- Download cudart64_101.dll and put it under your CUDA bin directory (ex: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin)
- Download the Keras model pre-trained on NYU Depth V2 and put it under the project
Models pre-trained on NYU Depth V2:
- TensorFlow 2.2 (162 MB)
- TensorFlow Lite (162 MB)
- TensorFlow Lite quantized (41 MB)
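The roughly 4x size drop from 162 MB to 41 MB is consistent with post-training dynamic-range quantization, where float32 weights are stored as int8. A minimal sketch of that option (assuming convert.py uses the default optimization; the small two-layer model here stands in for nyu):

```python
import tensorflow as tf

# Stand-in model; the real project converts the much larger nyu model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding="same", input_shape=(480, 640, 3)),
    tf.keras.layers.Conv2D(64, 3, padding="same"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_bytes = converter.convert()            # float32 weights

converter.optimizations = [tf.lite.Optimize.DEFAULT]
quant_bytes = converter.convert()            # int8 weights, roughly 4x smaller
```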
After downloading and putting the pre-trained Keras model (nyu.h5) in the project:
- Run

python convert.py

to generate the TensorFlow 2.2 and Lite models with their corresponding depth maps.
- Rename nyuQuan.tflite to nyu.tflite and put it in Android/app/src/main/assets
- Build and run the Android app with Android Studio
You can also download and install the DenseDepth APK with the TensorFlow Lite quantized model
Thanks to the authors. If you use this code, please cite their paper:
@article{Alhashim2018,
  author  = {Ibraheem Alhashim and Peter Wonka},
  title   = {High Quality Monocular Depth Estimation via Transfer Learning},
  journal = {arXiv e-prints},
  volume  = {abs/1812.11941},
  year    = {2018},
  url     = {https://arxiv.org/abs/1812.11941},
  eid     = {arXiv:1812.11941},
  eprint  = {1812.11941}
}