
Emotion detection using transfer learning model. #38

Open · keshav340 opened this issue Jun 11, 2021 · 15 comments
@keshav340

I want to train different transfer learning models on emotion detection so that we get better accuracy and lower loss.

@keshav340 (Author)

If yes, then please assign it to me.

@melikeey

Did you find a method for better accuracy? @keshav340

@keshav340 (Author)

Yes, I hyperparameter-tuned his model architecture and got better results.

@melikeey

How do you hyperparameter-tune a model? Can you share an example? :-)

@Alexander-K-byte

> How do you hyperparameter-tune a model? Can you share an example? :-)

I ran the original code and was only in the mid-to-high 80s for accuracy; I altered the code slightly and now have the model at 95.87% accuracy.

What results do you currently have, @melikeey? You can change some of the layers where the image is processed. In the original code the conv layers go 32/64/128, followed by another 128-filter layer; simply changing that final layer to 256 while keeping the kernel size the same should get you a small boost in accuracy.
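For a more systematic answer to the "how is hyperparameter tuning done" question, here is a minimal sketch using the keras-tuner package; the search ranges, trial count, and the train_gen/val_gen data pipelines are illustrative assumptions, not anything actually run in this thread:

# Sketch: automated search over filter counts, dense width, and dropout
# rate. All ranges are illustrative; train_gen/val_gen stand for whatever
# training/validation pipeline you use.
import keras_tuner as kt
from tensorflow.keras import layers, models

def build_model(hp):
    model = models.Sequential()
    model.add(layers.Conv2D(hp.Choice('filters_1', [32, 64]), (3, 3),
                            activation='relu', input_shape=(48, 48, 1)))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(hp.Choice('filters_2', [128, 256]), (3, 3),
                            activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(hp.Int('dense_units', 256, 1024, step=256),
                           activation='relu'))
    model.add(layers.Dropout(hp.Float('dropout', 0.25, 0.5, step=0.25)))
    model.add(layers.Dense(7, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

tuner = kt.RandomSearch(build_model, objective='val_accuracy', max_trials=10)
tuner.search(train_gen, validation_data=val_gen, epochs=10)
best_model = tuner.get_best_models(1)[0]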

@swamivisal

What about the validation/test accuracy? Did it overfit?

@Alexander-K-byte

> What about the validation/test accuracy? Did it overfit?

Oh yeah, there was a lot of overfitting. My point was more that the OP has submitted their finished version, but we don't know whether it is the optimal version unless we try changing some of the settings. I have been trying slight variations on his model to maximise accuracy and validation, but it's a spare-time kind of thing, so I try once or twice per week in Google Colab to see if I can raise the accuracy from 89% to 95%+ whilst keeping the validation accuracy in a similar range.
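One standard lever against that kind of overfitting is on-the-fly data augmentation. A minimal sketch, assuming a FER-2013-style directory layout under data/train and data/test (the paths and parameter values are illustrative assumptions, not the thread's actual settings):

# Sketch: augment the training images to curb overfitting.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalise pixels to [0, 1]
    rotation_range=10,       # small random rotations
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True)    # faces are roughly symmetric
val_datagen = ImageDataGenerator(rescale=1.0 / 255)  # never augment validation data

train_gen = train_datagen.flow_from_directory(
    'data/train', target_size=(48, 48), color_mode='grayscale',
    batch_size=64, class_mode='categorical')
val_gen = val_datagen.flow_from_directory(
    'data/test', target_size=(48, 48), color_mode='grayscale',
    batch_size=64, class_mode='categorical')

Augmentation only touches the training side; validation data is left unmodified so the reported val accuracy stays an honest estimate.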

@Alexander-K-byte

> What about the validation/test accuracy? Did it overfit?

Try running this for the layers:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()

# Block 1: two 64-filter conv layers on the 48x48 grayscale input
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', input_shape=(48, 48, 1)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Block 2: 128 filters, then downsample and regularise
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# Block 3: 256 filters (the bump from 128 discussed above)
model.add(Conv2D(256, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# Classifier head: 7 emotion classes
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))
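To actually train the stack above it still needs a loss and optimizer plus a data pipeline; a minimal sketch (the optimizer, learning rate, and epoch count are assumptions, and train_gen/val_gen are generators like the ones sketched earlier in the thread):

# Sketch: compile and train the model defined above.
from tensorflow.keras.optimizers import Adam

model.compile(loss='categorical_crossentropy',   # 7 mutually exclusive classes
              optimizer=Adam(learning_rate=1e-4),
              metrics=['accuracy'])

history = model.fit(train_gen,                   # 48x48 grayscale batches
                    epochs=50,
                    validation_data=val_gen)

print('best val accuracy:', max(history.history['val_accuracy']))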

@melikeey commented Apr 13, 2022

Which should be considered, the test (val) accuracy or the training accuracy? As you said, while the training accuracy was 85% for the original code, the test (val) accuracy did not exceed 63%. Which of the two reflects the true accuracy? There is some confusion between these two for me.

I trained this code 5 times. I added more layers to my model, and I changed the ratio of test to training data. I even removed the max pooling layers (training then took too long), and again I couldn't raise the test (val) accuracy above 63%.

Now I have started to train the model you gave. @Alexander-K-byte

@melikeey commented Apr 13, 2022

> What about the validation/test accuracy? Did it overfit?
> Try running this for the layers. (model code quoted above)


Can you also share this model's results table and the accuracy and loss graphs from when you execute:

python emotions.py --mode train
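For anyone who wants to produce those graphs themselves, a minimal generic sketch of plotting a Keras History object with matplotlib (this is not necessarily how emotions.py draws its figures; history is assumed to come from a model.fit(...) call):

# Sketch: plot training vs validation accuracy and loss after model.fit().
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(history.history['accuracy'], label='train')
ax1.plot(history.history['val_accuracy'], label='validation')
ax1.set_xlabel('epoch'); ax1.set_ylabel('accuracy'); ax1.legend()

ax2.plot(history.history['loss'], label='train')
ax2.plot(history.history['val_loss'], label='validation')
ax2.set_xlabel('epoch'); ax2.set_ylabel('loss'); ax2.legend()

plt.tight_layout()
plt.savefig('training_curves.png')  # a widening train/val gap signals overfitting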

@swamivisal commented Apr 13, 2022 via email

Validation accuracy should be noted down, because the training accuracy will just keep rising toward its ceiling given enough epochs. The model will get overfitted after a certain number of epochs, which is indicated by the validation accuracy, not the training accuracy.
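That advice maps directly onto an early-stopping callback; a minimal sketch (the patience value is an arbitrary choice, and train_gen/val_gen are generators like the ones sketched earlier):

# Sketch: stop training when validation accuracy stops improving, instead
# of letting training accuracy keep climbing while the model overfits.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_accuracy',
                           patience=5,                 # epochs with no improvement
                           restore_best_weights=True)  # roll back to the best epoch

model.fit(train_gen, epochs=100,
          validation_data=val_gen,
          callbacks=[early_stop])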

@melikeey

> Validation accuracy should be noted down [...]

Did you achieve a high validation accuracy with this code? I changed the model 5 times but I couldn't; my results are around 63%, 42%, 50%, 62%, 57%. I need to solve this :/

@melikeey

I changed the relu activations to sigmoid. It got 42% validation accuracy.

@melikeey

60-70% is the range for a poor model, and 70-80% for a good one. I tried to create at least a good model :D

@Alexander-K-byte

> 60-70% is the range for a poor model, and 70-80% for a good one. I tried to create at least a good model :D

The problem with the dataset is that disgust and anger look so alike.

Ideas that spring to mind are removing disgust from the equation and placing those files into anger.

Ensure the camera you are using for live video capture is better quality than a simple built-in laptop webcam, and make sure your face is well lit. Using a Pi Cam v1.3 for a few OpenCV projects, the built-in haar cascade had trouble picking out my face while I was wearing glasses if it was poorly illuminated: it detected me only every 4th or 5th frame with glasses on, and once I took them off it detected my face a lot more easily (see the sketch below).
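For reference, a minimal sketch of the kind of haar-cascade face detection described above (the cascade file ships with opencv-python; the scaleFactor and minNeighbors values are illustrative and are the usual knobs to tweak for glasses or poor lighting):

# Sketch: haar-cascade face detection on a video stream with OpenCV.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)  # webcam; a video file path also works
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()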

Try taking some pictures of the different emotions, turn them into a video montage with each image displayed for 5 to 10 seconds, and use that as a video stream through OpenCV so that the frames it looks at are not constantly changing.

I would also suggest looking into transfer learning, but I am guessing you are more time-constrained.
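Since transfer learning is what this issue originally asked about, here is a minimal sketch of what it could look like, assuming a MobileNetV2 backbone on FER-style 48x48 grayscale input (the backbone choice, input handling, and head size are all assumptions, not this repo's code):

# Sketch: transfer learning for 7-class emotion recognition.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights='imagenet')
base.trainable = False  # freeze the pretrained features first

inputs = layers.Input(shape=(48, 48, 1))             # FER-style grayscale
x = layers.Concatenate()([inputs, inputs, inputs])   # fake 3 channels
x = layers.Resizing(96, 96)(x)                       # match the backbone input
x = layers.Rescaling(2.0, offset=-1.0)(x)            # [0, 1] -> [-1, 1]
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(7, activation='softmax')(x)

model = models.Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

Train the classifier head first; if accuracy plateaus, unfreeze the top few layers of the base and fine-tune with a much lower learning rate.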
