# Face-Expression-Detection-using-CNN-and-OpenCV
## Project Aim
The main objective of this project is to build a deep learning model that accurately recognizes and classifies human emotions from facial expressions in real time.
## Project Overview
This project develops a Convolutional Neural Network (CNN) model that processes facial images captured from a camera and predicts the individual's emotion.
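As a rough illustration of the pipeline, the sketch below loads a trained model and classifies a single face image. The model path `emotion_model.h5`, the label order, and the 48x48 grayscale preprocessing are assumptions based on the FER2013 format rather than the project's exact code.

```python
# Minimal single-image inference sketch (assumed file names and label order).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def predict_emotion(image_path, model_path="emotion_model.h5"):
    model = load_model(model_path)
    # Read as grayscale and resize to the 48x48 input size used by FER2013.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    face = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
    face = face.reshape(1, 48, 48, 1)           # add batch and channel axes
    probs = model.predict(face, verbose=0)[0]   # one probability per class
    return EMOTIONS[int(np.argmax(probs))]

if __name__ == "__main__":
    print(predict_emotion("sample_face.jpg"))
```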
## Technologies Used
The project is implemented with TensorFlow, Keras, OpenCV, and Fast.AI.
## Dataset and Model Training
The model is trained on the FER2013 dataset, which contains 48x48 grayscale facial images labeled with seven emotion categories (angry, disgust, fear, happy, sad, surprise, and neutral). The goal is high accuracy in emotion classification.
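A possible training setup is sketched below. It assumes the FER2013 images have been unpacked into `fer2013/train` and `fer2013/test` folders with one subfolder per emotion; the folder layout, architecture, and hyperparameters are illustrative, and the project's actual configuration may differ.

```python
# Sketch of a CNN and training loop for FER2013 (assumed directory layout).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes=7):
    return models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "fer2013/train", color_mode="grayscale", image_size=(48, 48), batch_size=64)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "fer2013/test", color_mode="grayscale", image_size=(48, 48), batch_size=64)

# Scale pixel values to [0, 1] to match the inference sketch above.
normalize = lambda x, y: (x / 255.0, y)
train_ds = train_ds.map(normalize)
val_ds = val_ds.map(normalize)

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=30)
model.save("emotion_model.h5")
```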
## GUI Development
The project also includes an interactive GUI that lets users capture their facial expressions through a camera and receive an emotion prediction in real time.
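The real-time camera loop could look roughly like the following, using OpenCV's bundled Haar cascade for face detection and the hypothetical `emotion_model.h5` from the training sketch; the project's actual GUI may be built differently.

```python
# Possible real-time webcam loop: detect faces, classify, and overlay labels.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = load_model("emotion_model.h5")
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                       # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
        probs = model.predict(roi.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Facial Emotion Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```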
## Experimentation and Optimization
The project involved extensive experimentation with different model architectures, activation functions, and regularization techniques to optimize the model's performance.
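The snippet below shows the kind of variation such experiments typically cover, e.g. swapping the activation function and adding L2 weight decay, batch normalization, and dropout to a convolutional block. These particular settings are assumptions, not the project's final choices.

```python
# Example of a regularized convolutional block used to compare variants
# (activation, weight decay, and dropout values are illustrative).
from tensorflow.keras import layers, regularizers

def conv_block(filters, activation="elu", weight_decay=1e-4):
    return [
        layers.Conv2D(filters, 3, padding="same",
                      kernel_regularizer=regularizers.l2(weight_decay)),
        layers.BatchNormalization(),
        layers.Activation(activation),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
    ]
```

A block like this can replace a plain `Conv2D`/`MaxPooling2D` pair in the model above to compare regularized and unregularized variants.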
## Performance
The model achieved an overall accuracy of around 93%, with precision and recall around 60%.
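For reference, metrics like these can be recomputed on the held-out set with scikit-learn, assuming the `model` and `val_ds` objects from the training sketch above; the macro averaging mode is an assumption.

```python
# Evaluate accuracy, precision, and recall on the validation set.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true, y_pred = [], []
for images, labels in val_ds:
    probs = model.predict(images, verbose=0)
    y_true.extend(labels.numpy())
    y_pred.extend(np.argmax(probs, axis=1))

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
```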
## Final Thoughts and Future Scope
The Facial Emotion Classification project applies CNNs and supporting machine learning techniques to recognize and classify human emotions. Its potential applications are broad, and the model provides a solid basis for further improvement in future work.