NOTE: if you are simply interested in emergency vehicle classification using images, that is exactly what this repository contains!
Our sibling repository https://github.com/UVA-MLSys/MLCommons-AV shows how LiDAR and point-cloud objects work with our emergency vehicle detection model; feel free to take a look!
We used the Emergency Vehicles Identification Kaggle dataset, linked here: https://www.kaggle.com/datasets/abhisheksinghblr/emergency-vehicles-identification. Please download this dataset before attempting to run the codebase.
- Make sure your dataset is present on the same path, with a structure similar to this:
As you can see, the dataset is stored in emergency-vehicle-classification and not hidden away in any other folder. This is because the Emergency_Vehicles archive needs to be unzipped by EV_identification_updated.ipynb.
Before running the code, make sure you have the following prerequisites installed:
- Python 3.x
- TensorFlow
- Keras
- EfficientNet
- NumPy
- OpenCV
- Matplotlib
- Pandas
- tqdm
Follow these steps to set up the project:
- The EfficientNet model is used for training; you can install it with:
!pip install -U efficientnet
- NumPy version 1.21 is also required; you can install it with:
!pip install numpy==1.21
Both of these installation commands are already present in the code; this is just a reminder in case the code is modified or errors occur.
Training & testing should be performed on a server with dedicated GPUs; running on your own device is not recommended, as it will take a considerable amount of time!
Training and testing were done on UVA's Rivanna servers.
- The training data comes from the train folder in Emergency_Vehicles, containing 1150 images for training.
- The EfficientNet-B7 model pre-trained on ImageNet is loaded using the efficientnet.keras library.
- Data augmentation is applied to the training dataset to improve model generalization.
- The top layers of the EfficientNet-B7 model are replaced with custom layers for binary classification.
- The model is compiled using the Adam optimizer with a learning rate of 0.0001 and binary cross-entropy loss.
- Training is performed with early stopping to prevent overfitting.
- To train the model, execute the code provided in the notebook. The trained model will be saved as "custom_model.keras".
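The training steps above can be sketched roughly as follows. This is a minimal sketch, not the notebook's exact code: it uses `tf.keras.applications.EfficientNetB7` with `weights=None` to stay download-free (the notebook loads ImageNet weights via the `efficientnet` package), and the input size, dropout rate, and early-stopping patience are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3)):
    # Backbone: EfficientNet-B7 without its top layers. The notebook loads
    # ImageNet weights via the `efficientnet` package; weights=None here
    # keeps this sketch download-free.
    base = tf.keras.applications.EfficientNetB7(
        include_top=False, weights=None, input_shape=input_shape
    )
    # Custom top layers for binary classification
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.3)(x)  # dropout rate is an assumption
    output = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inputs=base.input, outputs=output)
    # Adam optimizer with learning rate 0.0001 and binary cross-entropy loss
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Early stopping to prevent overfitting (patience value is an assumption)
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
# model.fit(train_data, validation_data=val_data, callbacks=[early_stop])
```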
The results from the training & testing (graphs):
The results (predictions vs. accuracy) of the EV_identification_updated.ipynb file can be observed in the submission_effnet.csv file, which is created after the program runs to completion. These predictions can be compared against the half-accurately-labeled test(accurate).csv file to check whether they match the actual labels.
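A hedged sketch of how that comparison could be done with pandas; the column names `image_names` and `emergency_or_not` are assumptions and should be adjusted to the actual CSV headers.

```python
import pandas as pd

def compare_predictions(pred_csv, truth_csv,
                        key="image_names", label="emergency_or_not"):
    # Merge predictions with the (partially) labeled ground truth on the
    # image-name column and return the fraction of matching labels.
    # Column names are assumptions; adjust to the actual CSV headers.
    preds = pd.read_csv(pred_csv)
    truth = pd.read_csv(truth_csv)
    merged = preds.merge(truth, on=key, suffixes=("_pred", "_true"))
    return (merged[f"{label}_pred"] == merged[f"{label}_true"]).mean()
```

For example, `compare_predictions("submission_effnet.csv", "test(accurate).csv")` would return the fraction of predictions that agree with the labeled rows.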
Example:
For the EV_research.ipynb file, which is connected to the LiDAR and point-cloud objects repository, the model is saved at the end of the run. The saved model is then loaded and used to classify the cropped images produced by the point-cloud detection algorithm.
You can also deploy the trained model to classify images of emergency vehicles. To classify an image, use the classify_image function provided in the EV_research notebook. Pass the path to the image you want to classify, and it will return 1 for emergency vehicles and 0 for non-emergency vehicles.
The EV_identification_updated notebook can save the model the same way EV_research does, using the model.save() function.
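A minimal save/load round-trip with a toy model is sketched below; the same `model.save()` and `load_model()` calls apply to the trained EfficientNet model in the notebooks.

```python
import tensorflow as tf

# Toy model standing in for the trained classifier
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Save in the native Keras format, then reload it
model.save("custom_model.keras")
reloaded = tf.keras.models.load_model("custom_model.keras")
```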
Farhan Khan, Max Titov, Tanush Siotia, Xin Sun
Any issues can be reported in the Issues section on GitHub (https://github.com/UVA-MLSys/emergency_vehicle_classification/issues).