asd_challenge

Airbus Ship Detection Challenge


About Project

This is my implementation for the Airbus Ship Detection Challenge. The dataset comes from the Kaggle challenge; you can download it yourself if you have a Kaggle account. The model is based on the TensorFlow/Keras framework and uses a simple U-Net architecture for semantic image segmentation. The project also uses the NumPy, Pandas and Matplotlib libraries.

The project contains the following files:

  • dataset_analysis.ipynb -- exploration of the data from the challenge.
  • cfg/config.yaml -- stores the configuration variables for the model.
  • model_debugger.py -- the core script that calls all functions and classes for preprocessing the data and for building, training and using the model.
  • model_builder.py -- function for building the model with the Keras functional API, plus the custom metric and loss classes.
  • training_process.py -- function for training the model.
  • inference_process.py -- function for testing the model.
  • data_handler.py -- class for building batches of images and masks.
  • utils.py -- all supporting functions.
  • requirements.txt -- lists all libraries required for this project.

Model Architecture

The model architecture has:

  • an Input layer with shape=(160, 160)
  • a data_augmentation layer (data is augmented only during training)

followed by the standard U-Net architecture from the code example in the Keras documentation (Image segmentation with a U-Net-like architecture).

My model-build code for this project is in model_builder.py; a minimal sketch of a similar build is shown below.
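
The sketch is only an approximation, assuming a 3-channel RGB input, a horizontal-flip augmentation layer and placeholder filter counts; the real build in model_builder.py follows the Keras example more closely.

from tensorflow import keras
from tensorflow.keras import layers

# Assumed augmentation; active only during training, matching the data_augmentation layer above.
data_augmentation = keras.Sequential([layers.RandomFlip("horizontal")])

inputs = keras.Input(shape=(160, 160, 3))  # assuming 3-channel RGB images
x = data_augmentation(inputs)

# Encoder: two downsampling blocks (filter counts are assumptions).
c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
p1 = layers.MaxPooling2D()(c1)
c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
p2 = layers.MaxPooling2D()(c2)

# Bottleneck.
b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)

# Decoder: upsample and concatenate the skip connections.
u2 = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(b)
u2 = layers.concatenate([u2, c2])
u1 = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(u2)
u1 = layers.concatenate([u1, c1])

# One output channel with sigmoid for the binary ship mask.
outputs = layers.Conv2D(1, 1, activation="sigmoid")(u1)
model = keras.Model(inputs, outputs)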


Downloading and testing model

First, clone the repository:

git clone https://github.com/lytmercy/asd_challenge.git

Next, step into the repo directory:

cd asd_challenge

Then create a new Python environment (I use Python 3.10):

python -m venv venv

Activate the new environment:

./venv/Scripts/activate

And install all requirements:

pip install -r requirements.txt

Finally, run model_debugger.py in the console:

python src/model_debugger.py

If you want to check the training process yourself, move the weight files out of models/trained/weights/:

mv models/trained/weights/* your/path/for/my/weights/

and then run model_debugger.py again.


Demonstrating the result

In order not to wait a long time for the training result, I used only 30% of the images that have a ground-truth mask.
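
For illustration, here is a minimal sketch of drawing such a subset, assuming the Kaggle CSV train_ship_segmentations_v2.csv with ImageId and EncodedPixels columns; the project's own selection logic may differ.

import numpy as np
import pandas as pd

# Assumed file and column names from the Kaggle dataset.
masks = pd.read_csv("train_ship_segmentations_v2.csv")

# Keep only images that have at least one ground-truth ship mask.
ship_image_ids = masks.dropna(subset=["EncodedPixels"])["ImageId"].unique()

# Draw a reproducible 30% subset of those image ids.
rng = np.random.default_rng(17)
subset_ids = rng.choice(ship_image_ids, size=int(0.3 * len(ship_image_ids)), replace=False)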

Dice Loss

For the Dice coefficient, I use the formula below and subtract it from 1 to get the Dice loss, following this science paper (page 6):

(Dice coefficient formula from the paper)
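
The formula image is not reproduced here; as a sketch, a common soft-Dice formulation in TensorFlow (with a small smoothing term, which may differ slightly from the paper's exact form and from the project's custom loss class) looks like this:

import tensorflow as tf

def dice_score(y_true, y_pred, smooth=1e-6):
    # Soft Dice coefficient: 2*|X ∩ Y| / (|X| + |Y|), computed on flattened masks.
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    # Dice loss = 1 - Dice coefficient, as described above.
    return 1.0 - dice_score(y_true, y_pred)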

Hyperparameters

For training, I used the following hyperparameters (see the sketch after this list for how they plug into Keras):

  • learning_rate = 0.001
  • number of epochs = 4
  • batch size = 22 (because my GPU can't process more)
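
A hypothetical compile/fit sketch with these values; model, dice_loss and dice_score refer to the earlier sketches, x_train/y_train/x_val/y_val are placeholder arrays, and the real training loop lives in training_process.py with values read from cfg/config.yaml.

from tensorflow import keras

# Hypothetical wiring of the hyperparameters above; not the project's actual training code.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss=dice_loss,        # custom Dice loss sketched above
    metrics=[dice_score],  # custom Dice metric sketched above
)

model.fit(
    x_train, y_train,                  # placeholder arrays; the project feeds batches from data_handler.py
    validation_data=(x_val, y_val),
    epochs=4,
    batch_size=22,
)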

Results

As a result of training, I got:

  • dice_loss = 0.1890
  • dice_score = 0.8150
  • val_dice_loss = 0.3031
  • val_dice_score = 0.7109

(training log screenshot)

And got these curves of the loss and Dice score:

Dice Loss Function

(dice loss curve plot)

Dice Score Metric

(dice score curve plot)

As a result of evaluation, I got:

  • dice_loss = 0.3924
  • dice_score = 0.6135

(evaluation log screenshot)

Predictions from the model

Taking predictions from the model, I got the following:

Good results with one or more ships:

(good prediction examples: good_0, good_1, good_2)

And bad results, where the model is given an image containing terrain or other objects and recognizes them as ships, or predicts ships on empty-sea images:

(bad prediction examples: bad_0, bad_1, bad_2)
