maphdev/Deep_Visualization_Neural_Networks_Web_app

Deep Visualization Neural Networks

A programming project carried out as part of the Master's degree in Computer Science at the University of Bordeaux.

The main purpose is to offer an easy way to visualize convolutional neural networks, through two visualization types described in the paper by G. Strezoski et al.:

  • Reason uses the Grad-CAM technique to display which parts of the source image are most responsible for the classification decision.

  • MaxOut displays the maximum activation input for a specific target in any layer of a given model.
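The core of the Grad-CAM computation behind Reason can be sketched in a few lines of NumPy: given the feature maps of a convolutional layer and the gradients of the class score with respect to those maps (both assumed here to come from the framework's autodiff), the heatmap is a ReLU-ed, gradient-weighted sum of the maps. This is an illustrative sketch, not the application's actual code:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap.

    activations: (H, W, K) feature maps of the chosen conv layer.
    gradients:   (H, W, K) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Global-average-pool the gradients to get one importance weight per channel.
    weights = gradients.mean(axis=(0, 1))                      # shape (K,)
    # Weighted sum of the feature maps over channels, then ReLU.
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)
    # Normalize for display as a heatmap overlay.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example with random tensors standing in for real network outputs.
rng = np.random.default_rng(0)
heatmap = grad_cam(rng.normal(size=(7, 7, 64)), rng.normal(size=(7, 7, 64)))
print(heatmap.shape)  # (7, 7)
```

In the application itself, Keras-vis performs this computation (and the backpropagation it needs) directly on the loaded Keras model.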

Visualizations are implemented with Keras, a high-level neural networks API written in Python and capable of running on top of TensorFlow, and Keras-vis, a high-level toolkit for visualizing and debugging trained Keras neural network models.

These visualizations are provided in the form of a web application developed with Flask.
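A Flask application of this kind boils down to routes that render pages and endpoints that run the visualizations. The sketch below is hypothetical (route names and responses are illustrative, not the project's actual code):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # In the real app this would render the models management page.
    return "Deep Visualization Neural Networks"

@app.route("/api/models")
def list_models():
    # Hypothetical endpoint listing the models available for visualization.
    return jsonify(["VGG16", "ResNet50", "NASNetLarge"])

if __name__ == "__main__":
    # The project serves the development server on port 8080.
    app.run(port=8080)
```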

Preview

models management page


Installation

Install Python 3

sudo apt-get update
sudo apt-get install python3.6

Create virtual environment

virtualenv -p python3 PDPDEEPVISENV

Activate virtual environment

source PDPDEEPVISENV/bin/activate

Install dependencies in virtual environment

pip install -r requirements.txt

Start development server

python run.py

The application is available at http://localhost:8080/.

Load deep learning models with pre-trained weights

If you want to experiment quickly or don't have a trained model available, you can easily load a model with weights pre-trained on ImageNet:

  python ./models/generate_models.py <name_1> <name_2> ...

Three models are available with this command:

  • "VGG16": a 22-layer network trained on ImageNet, with a default input size of 224x224.
  • "ResNet50": a 175-layer network trained on ImageNet, with a default input size of 224x224.
  • "NASNetLarge": a 1021-layer network trained on ImageNet, with a default input size of 331x331.

Models created with this command are saved in the "models" directory.
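A script like generate_models.py likely relies on the pre-trained architectures shipped with Keras Applications. The following is a hypothetical sketch under that assumption (the name validation and file layout are illustrative; the Keras import is deferred so the heavy dependency is only loaded when a model is actually built):

```python
# Hypothetical sketch of a generate_models.py-style script.
SUPPORTED = ("VGG16", "ResNet50", "NASNetLarge")

def generate(name):
    """Download a pre-trained model and save it under ./models/<name>.h5."""
    if name not in SUPPORTED:
        raise ValueError(f"unknown model: {name}")
    # Deferred import: only needed once the name is validated.
    from keras.applications import VGG16, ResNet50, NASNetLarge
    constructors = {"VGG16": VGG16, "ResNet50": ResNet50, "NASNetLarge": NASNetLarge}
    model = constructors[name](weights="imagenet")  # downloads ImageNet weights
    model.save(f"./models/{name}.h5")
```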