# EfficientNetV2_Quantization_CKplus

EfficientNetV2 (EfficientNetV2-B2) with int8 and fp32 quantization (QAT and PTQ) on the CK+ dataset, including fine-tuning, augmentation, and handling of class imbalance.

Real-time facial emotion recognition using an EfficientNetV2 CNN and quantization on the CK+ dataset.
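As a hedged illustration of the class-imbalance handling mentioned above, the sketch below computes per-class weights inversely proportional to class frequency. The emotion labels and counts are illustrative assumptions, not the repository's actual CK+ distribution.

```python
# Sketch: inverse-frequency class weights for an imbalanced dataset.
# The emotion labels and counts below are illustrative assumptions,
# not the actual CK+ class distribution used in this repository.
from collections import Counter

def class_weights(labels):
    """Weight each class by total / (n_classes * count), so that
    rare classes contribute more to the training loss."""
    counts = Counter(labels)
    total = sum(counts.values())
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Hypothetical imbalanced label list for illustration.
labels = ["happy"] * 60 + ["surprise"] * 30 + ["fear"] * 10
weights = class_weights(labels)
# Rarer classes receive larger weights, e.g. "fear" > "surprise" > "happy".
```

A dictionary like this can be passed as `class_weight` to Keras `model.fit`, which is one common way to counteract class imbalance during fine-tuning.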
This code includes:

1. Data loading (downloading and splitting the dataset)
2. Preprocessing of the CK+ dataset (normalization, resizing, augmentation, and handling the class-imbalance problem)
3. Fine-tuning (using weights pre-trained on the ImageNet dataset as the initialization for training)
4. Quantization to int8 and fp32: Quantization-Aware Training (QAT, int8) and Post-Training Quantization (PTQ, fp32)

Note that integer computation is much faster than floating-point computation, especially on ARM architectures. Also, a float32 value is four times larger than an int8 value, so int8 quantization reduces both model size and inference time. However, it can lower accuracy (PTQ). To compensate for this loss, a quantization-aware training approach is needed: the model is fine-tuned with quantization in the loop to recover the lost accuracy. Finally, we compared the int8 QAT and fp32 PTQ models in terms of accuracy, model size, and inference time. An important caveat: the inference times measured here are not representative, because x86 and ARM architectures differ and our model targets ARM, not x86. On x86, the int8 TFLite model can therefore run slower than the fp32 TFLite model, or even slower than the plain TF model without quantization.
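To make the int8-versus-float32 trade-off above concrete, here is a hedged sketch of the affine (scale and zero-point) quantization scheme that TFLite-style int8 quantization is based on. It is an illustrative implementation, not the repository's code: it shows the 4x storage reduction (1 byte per int8 value versus 4 bytes per float32 value) and the rounding error that causes the PTQ accuracy drop which QAT then compensates for.

```python
# Sketch of affine int8 quantization: q = round(x / scale) + zero_point.
# Illustrative only; TFLite applies this scheme per tensor (or per
# channel) with its own calibration logic.

def quant_params(x_min, x_max, qmin=-128, qmax=127):
    """Choose scale and zero_point mapping [x_min, x_max] onto int8."""
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = round(qmin - x_min / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point, qmin=-128, qmax=127):
    return [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in xs]

def dequantize(qs, scale, zero_point):
    return [(q - zero_point) * scale for q in qs]

xs = [-1.0, -0.3, 0.0, 0.4, 1.0]
scale, zp = quant_params(min(xs), max(xs))
qs = quantize(xs, scale, zp)
recovered = dequantize(qs, scale, zp)
# Each int8 value takes 1 byte vs 4 bytes for float32: a 4x size cut.
# The round-trip error |x - dequantize(quantize(x))| is at most scale / 2
# per element; this rounding error is the source of the PTQ accuracy
# loss that quantization-aware training recovers.
```

In QAT, this quantize/dequantize round trip is simulated inside the forward pass during fine-tuning, so the weights learn to be robust to the rounding before the model is actually converted to int8.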
In fact, based on our observations on a Samsung Galaxy A54 smartphone, the int8 TFLite model is roughly two times faster than the fp32 TFLite model.