Feat/batch predict age and gender #1396
Conversation
Bump
I don't support having another `predicts` function. Instead, you can add that logic under `predict`:

1. `predict` accepts both a single image and a list of images as `img: Union[np.ndarray, List[np.ndarray]]`.
2. In the `predict` function, check the type of `img` and redirect it to your logic if it is a list:

   ```python
   if isinstance(img, np.ndarray):
       # put old predict logic here
   elif isinstance(img, list):
       # put your batch processing logic here
   ```

3. This new logic is worth having its own unit tests; possibly you can add some unit tests here.
4. The return type of `predict` should be `Union[np.float64, np.ndarray]`.
5. You should also update the interface in `DeepFace.py`.
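For illustration, here is a minimal, self-contained sketch of the dispatch described above; `StubDemography` and its `_predict_single` method are hypothetical stand-ins for the real model, not the PR's actual code:

```python
from typing import List, Union

import numpy as np


class StubDemography:
    """Hypothetical stand-in for a Demography model with a dispatching predict."""

    def _predict_single(self, img: np.ndarray) -> np.float64:
        # stand-in for the old single-image predict logic
        return np.float64(img.mean())

    def predict(
        self, img: Union[np.ndarray, List[np.ndarray]]
    ) -> Union[np.float64, np.ndarray]:
        if isinstance(img, np.ndarray):
            # old predict logic: one image in, one scalar out
            return self._predict_single(img)
        if isinstance(img, list):
            # batch path: run each image and stack the results
            return np.array([self._predict_single(i) for i in img])
        raise TypeError(f"unsupported input type: {type(img)}")


model = StubDemography()
single_result = model.predict(np.ones((224, 224, 3)))
batch_result = model.predict([np.zeros((224, 224, 3)), np.ones((224, 224, 3))])
print(single_result)        # a scalar np.float64
print(batch_result.shape)   # one prediction per input image
```

The same `predict` name then covers both call styles, so callers in `DeepFace.py` only need the widened type hints.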
…or Age and Gender models
Feat/add multi face test
[update] make Race and Emotion batch prediction
[update] merge predicting funcs
Actions failed because of linting - link
```python
imgs = np.expand_dims(imgs, axis=0)

# Batch prediction
age_predictions = self.model.predict_on_batch(imgs)
```
`model.predict` causes a memory issue when it is called in a for loop; that is why we call it as `self.model(img, training=False).numpy()[0, :]`.
In your design, if this is called in a for loop, it will still cause a memory problem.
IMO, if it is a single image, we should call `self.model(img, training=False).numpy()[0, :]`; if it is many faces, then call `self.model.predict_on_batch`.
Thank you for sharing your perspective on this matter.
The issue you mentioned is also discussed in tensorflow/tensorflow#44711, and we believe it is being resolved.
Furthermore, if we can utilize the batch prediction method provided in this PR, we can avoid repeatedly calling the predict function within a loop over unrolled batch images, which is the root cause of the memory issue you described.
We recommend retaining our batch prediction method.
hey, even though this is sorted in newer tf versions, many users on old tf versions raise tickets about this problem, so we should consider the people using older tf versions. That is why I recommend using `self.model(img, training=False).numpy()[0, :]` for single images and `self.model.predict_on_batch` for batches.
Hi! 👋
Please take a look at our prediction function, which uses the legacy single prediction method you suggested, and also provides batch prediction if a batch of images is provided.
Please let us know if there’s anything else we can improve. Any advice you have is greatly appreciated.
deepface/deepface/models/Demography.py
Line 24 in a23893a
…ch-predict-age-and-gender
```python
img = "dataset/img4.jpg"
# Copy and combine the same image to create multiple faces
img = cv2.imread(img)
img = cv2.hconcat([img, img])
```
`hconcat` makes a single image:

- input image shape before hconcat is (1728, 2500, 3)
- input image shape after hconcat is (1728, 5000, 3)

To have a numpy array with (2, 1728, 2500, 3) shape, you should do something like:

`img = np.stack((img, img), axis=0)`
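To illustrate the difference, here is a small numpy sketch; `np.hstack` is used as a stand-in for `cv2.hconcat`, which behaves the same way on a single array pair:

```python
import numpy as np

# Stand-in for the loaded image (same shape as in the review comment)
img = np.zeros((1728, 2500, 3), dtype=np.uint8)

side_by_side = np.hstack((img, img))  # widths add up: still one image
batch = np.stack((img, img), axis=0)  # adds a new leading batch dimension

print(side_by_side.shape)  # (1728, 5000, 3)
print(batch.shape)         # (2, 1728, 2500, 3)
```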
Also please check that img now has a (2, x, x, x) shape.
Finally, unit tests failed for that input. The case you tested did not exercise what you implemented; it is still a single image.
Hi @serengil 👋
We have implemented batched images support in DeepFace::analysis, and the test cases have been modified as per your request. Please help us check if this matches your requirements.
Due to the complexity of designing a more efficient flow for analysis, we will prioritize extending the functionality of models that can accept batched images for now.
We will discuss enhancing the performance of the analysis function in separate threads or through pull requests. We would invite you to participate in these discussions once we are ready.
Please help us merge this PR if all the requirements are met.
```python
"""
image_batch = np.array(img)
# Remove batch dimension in advance if exists
image_batch = image_batch.squeeze()
```
I was initially confused about why we squeeze first and expand dimensions second.
Would you please add a comment here, something like:
"we perform squeeze and expand_dims sequentially to have the same behaviour for (224, 224, 3), (1, 224, 224, 3) and (n, 224, 224, 3) inputs"
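A small numpy sketch of why the squeeze-then-expand sequence normalizes all three shapes the same way; `normalize_batch` is a hypothetical helper written for illustration, not the PR's actual code:

```python
import numpy as np


def normalize_batch(img: np.ndarray) -> np.ndarray:
    """Squeeze, then re-expand, so every input ends up with a batch dimension."""
    # (1, 224, 224, 3) -> (224, 224, 3); (n, 224, 224, 3) is unchanged for n > 1
    img = img.squeeze()
    if img.ndim == 3:
        # (224, 224, 3) -> (1, 224, 224, 3)
        img = np.expand_dims(img, axis=0)
    return img


for shape in [(224, 224, 3), (1, 224, 224, 3), (5, 224, 224, 3)]:
    print(shape, "->", normalize_batch(np.zeros(shape)).shape)
```

All three inputs come out with a leading batch dimension, which is what makes the two-step dance look redundant at first glance.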
We took a look at the processing flow and discovered that the squeeze operation is unnecessary. By the time it reaches this code, every single-image input already carries an expanded batch dimension of (1, 224, 224, 3), so there's no need to squeeze that dimension away first.
The redundant squeeze step has been removed.
…nd `race` tests
…m/NatLee/deepface into feat/batch-predict-age-and-gender
I am not available to review it until early Feb.

Tests failed.

The tests seem to have passed. If you're okay, it's ready to merge.
```python
        - 'white': Confidence score for White ethnicity.
    """

    if isinstance(img_path, np.ndarray) and len(img_path.shape) == 4:
```
This control should not be done in DeepFace.py; as you can see, we store no logic in that file.
Alright, I've moved the control logic into the analyze function.
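As an illustration of what moving that check behind an `analyze`-style function can look like, here is a minimal hypothetical sketch; the function body and return values are made up for demonstration and are not the PR's actual code:

```python
import numpy as np


def analyze(img_path):
    # Hypothetical sketch: a 4-D ndarray is treated as a batch of face images
    if isinstance(img_path, np.ndarray) and img_path.ndim == 4:
        return [f"face {i}: analyzed" for i in range(img_path.shape[0])]
    # everything else falls through to the original single-image path
    return "single image: analyzed"


print(analyze(np.zeros((2, 224, 224, 3))))  # one result per face in the batch
print(analyze(np.zeros((224, 224, 3))))     # single-image path
```

Keeping the type check inside `analyze` leaves `DeepFace.py` as a thin interface layer, which matches the reviewer's request.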
LGTM, thank you for your contribution

Thank you so much @serengil!

@serengil Many thanks for your reviews!
Tickets
#441
#678
#1069
#1101
What has been done
This PR introduces a new `predict` function designed to support batch predictions.
How to test
The `batch_analyze` function allows users to load a model and perform batch predictions efficiently. In this case, the goal is to implement a batch method capable of handling a single image containing multiple faces.
My benchmark script is shown below:
Summary:
The results indicate that batch processing improves efficiency, especially for single faces and larger datasets. While the speedup varies, peaking at 1.19x for a single face, the overall trend suggests that batch processing is beneficial for optimizing processing time. This improvement is promising for scaling tasks efficiently.
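The kind of loop-versus-batch comparison summarized above can be sketched as follows; the stub predict functions are hypothetical stand-ins for the real model calls, so the absolute timings will not match the PR's benchmark numbers:

```python
import time

import numpy as np


def predict_one(img: np.ndarray) -> float:
    # stub standing in for a real single-image model call
    return float(img.mean())


def predict_batch(imgs: np.ndarray) -> np.ndarray:
    # stub standing in for model.predict_on_batch
    return imgs.mean(axis=(1, 2, 3))


faces = np.random.rand(32, 224, 224, 3).astype(np.float32)

t0 = time.perf_counter()
looped = np.array([predict_one(f) for f in faces])
loop_time = time.perf_counter() - t0

t0 = time.perf_counter()
batched = predict_batch(faces)
batch_time = time.perf_counter() - t0

# both paths must agree before any timing comparison is meaningful
print(f"loop: {loop_time:.5f}s  batch: {batch_time:.5f}s")
```

With real models the batch path additionally saves the per-call framework overhead, which is where the reported speedup comes from.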