Fall Detection Processing & Modelling

This was developed for my Undergraduate Thesis in AI & Computer Science at the University of Edinburgh. Please find the link to my report attached to this repository. Overall, ResNet152 proved to be the best-performing model: trained on a standardised and shuffled dataset with a 2s window size, it achieved 92.8% AUC, 87.28% sensitivity, and 98.33% specificity.


Example of Live ECG Data During a Fall

(Figure: live ECG data recorded during a fall; the section between the green and red bars represents the fall.)


Preprocessing

  1. Compile JSON data-recording chunks into their corresponding recording objects
  2. Parse these recording objects into sliding windows (see the sketch after this list)
  3. Standardise the dataset
  4. Randomly shuffle the dataset samples
  5. Split the dataset into train/validation/test sets
  6. Label each window by applying the sigmoid function to the average of its per-sample labels (where 1 represents a fall and 0 no fall)
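
As a rough illustration of steps 2 through 6, a minimal sketch is shown below. It assumes each recording has already been parsed into a NumPy array of shape (timesteps, channels) with a per-timestep binary fall label; the sampling rate, stride, and split ratios are assumptions, and the exact pipeline in the repo may differ.

```python
import numpy as np

HZ = 50                      # assumed sampling rate
WINDOW, STRIDE = 2 * HZ, HZ  # 2 s windows with a 1 s stride (stride is illustrative)

def make_windows(signal, labels):
    """Slice one recording (timesteps x channels) into overlapping windows (step 2)."""
    xs, ys = [], []
    for start in range(0, len(signal) - WINDOW + 1, STRIDE):
        xs.append(signal[start:start + WINDOW])
        # step 6: average the per-timestep labels (1 = fall, 0 = no fall);
        # the repo additionally passes this average through a sigmoid
        ys.append(labels[start:start + WINDOW].mean())
    return np.stack(xs), np.array(ys)

def standardise(x):
    """Zero-mean / unit-variance per channel (step 3)."""
    return (x - x.mean(axis=(0, 1))) / (x.std(axis=(0, 1)) + 1e-8)

def shuffle_split(x, y, seed=0):
    """Shuffle (step 4) and split into train/validation/test (step 5); 70/15/15 is illustrative."""
    idx = np.random.default_rng(seed).permutation(len(x))
    x, y = x[idx], y[idx]
    a, b = int(0.7 * len(x)), int(0.85 * len(x))
    return (x[:a], y[:a]), (x[a:b], y[a:b]), (x[b:], y[b:])
```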

Modelling

I first trained and tested some baseline models (such as K-Nearest Neighbours, Gaussian Naive Bayes, and simple neural networks) to get an insight into how well my data performed on simple models. They all averaged out at around 70% AUC; the best-performing baseline was K-Nearest Neighbours with k = 3, which achieved a test AUC of 72.17%.
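
As a rough illustration of the baseline setup, a K-Nearest Neighbours classifier with k = 3 scored by AUC could look like the sketch below. scikit-learn and the synthetic stand-in data are assumptions; in practice the features would be the flattened, standardised windows with their binary labels.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

# synthetic stand-ins: flattened window features and binary fall labels
rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(500, 300)), rng.integers(0, 2, 500)
x_test, y_test = rng.normal(size=(100, 300)), rng.integers(0, 2, 100)

knn = KNeighborsClassifier(n_neighbors=3)   # k = 3, as in the best baseline
knn.fit(x_train, y_train)

# AUC is computed from the predicted probability of the positive (fall) class
probs = knn.predict_proba(x_test)[:, 1]
print("test AUC:", roc_auc_score(y_test, probs))
```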

After this I trained LSTM and ResNet deep learning models on my dataset, varying the window size and tuning hyperparameters. Overall, ResNet152 proved to be the best-performing model: trained on a standardised and shuffled dataset with a 2s window size, it achieved 92.8% AUC, 87.28% sensitivity, and 98.33% specificity.

(Figures: ResNet performance graph, plus ResNet, LSTM, and baseline performance summaries.)
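
To make the deep-learning setup concrete, below is a minimal PyTorch sketch of an LSTM classifier over the windowed data. The layer sizes, channel count, and single-logit output are assumptions for illustration, not the thesis architecture (the best model was a ResNet152).

```python
import torch
import torch.nn as nn

class FallLSTM(nn.Module):
    """Binary fall classifier over windows shaped (batch, timesteps, channels)."""
    def __init__(self, n_channels, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)              # final hidden state: (1, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)   # raw logit; pair with BCEWithLogitsLoss

model = FallLSTM(n_channels=3)                  # e.g. 3 sensor channels (assumption)
loss_fn = nn.BCEWithLogitsLoss()
logits = model(torch.randn(8, 2 * 50, 3))       # 8 windows of 2 s at an assumed 50 Hz
```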

Exporting

I exported my PyTorch model to `.tflite` using the following conversion chain: PyTorch -> ONNX -> TensorFlow -> TFLite.

This conversion allows the ML model to run locally, in the background, on a user's mobile phone.
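
A sketch of that chain is below, assuming the `onnx`, `onnx-tf`, and `tensorflow` packages and a trained PyTorch `model` (such as the LSTM sketch above); the file names and dummy input shape are illustrative, and exact package versions matter in practice.

```python
import torch
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

# 1. PyTorch -> ONNX (the dummy input fixes the window shape: batch x timesteps x channels)
dummy = torch.randn(1, 100, 3)
torch.onnx.export(model, dummy, "fall_model.onnx")

# 2. ONNX -> TensorFlow SavedModel
tf_rep = prepare(onnx.load("fall_model.onnx"))
tf_rep.export_graph("fall_model_tf")

# 3. TensorFlow SavedModel -> TFLite
converter = tf.lite.TFLiteConverter.from_saved_model("fall_model_tf")
with open("fall_model.tflite", "wb") as f:
    f.write(converter.convert())
```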
