Hello, I have a memory issue when using `replace_all_objects`. When using this function with a large number of documents (5 million), I pass an iterator to keep memory consumption low.

I expect memory usage to stay flat for the duration of the operation; however, it keeps increasing (see the attached memory-usage graph).
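For context, the call pattern looks roughly like this (a minimal sketch against the v2/v3 `algoliasearch` client; the credentials, index name, and `iter_objects` generator are placeholders):

```python
from algoliasearch.search_client import SearchClient

# Placeholder credentials and index name.
client = SearchClient.create("YourApplicationID", "YourAdminAPIKey")
index = client.init_index("products")

def iter_objects():
    """Yield documents one at a time, e.g. streamed from a database cursor."""
    for i in range(5_000_000):
        yield {"objectID": str(i), "title": f"Product {i}"}

# Passing a generator keeps the *input* side of the pipeline flat in memory;
# the growth described below happens on the *response* side, inside the client.
index.replace_all_objects(iter_objects())
```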
Upon investigation, the growth appears to come from the function `SearchIndex._chunk`, and more specifically from the list `raw_responses`, which stores the response of every request sent (see `algoliasearch/search_index.py`, lines 505 to 528 at commit 3bb9108).

This is a problem because the response of `/1/indexes/{indexName}/batch` contains the list of `objectIDs`. With 5M documents, each with an objectID of ~15 characters, this alone accounts for roughly 300 MB.

Is there a request_option that tells the API not to return the `objectIDs`, or a way for the client not to store them in `raw_responses`? Thank you 🙏
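In the meantime, a possible workaround (a sketch only; it gives up the atomic temp-index swap that `replace_all_objects` performs, and the `batched` helper and document stream are placeholders) is to chunk the iterator manually and call `save_objects` per chunk, keeping no reference to the returned responses:

```python
import itertools

from algoliasearch.search_client import SearchClient

client = SearchClient.create("YourApplicationID", "YourAdminAPIKey")
index = client.init_index("products")

def batched(iterable, size):
    """Yield lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while chunk := list(itertools.islice(it, size)):
        yield chunk

def iter_objects():
    """Placeholder document stream."""
    for i in range(5_000_000):
        yield {"objectID": str(i), "title": f"Product {i}"}

for chunk in batched(iter_objects(), 1000):
    # Each call returns an IndexingResponse wrapping the raw batch response
    # (including its objectIDs list). By not keeping a reference to it,
    # each response becomes garbage-collectable after the call returns.
    index.save_objects(chunk)
```

Because nothing holds on to the per-batch `IndexingResponse` objects, their underlying raw responses can be freed as the loop advances, which should keep memory roughly flat. Note this updates the live index in place rather than building a temporary index and moving it, so it is not a drop-in replacement for `replace_all_objects`.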
Hey there, we completely rewrote the implementation in v4. If by any chance you're still using the client, could you let us know whether this issue still exists? Thanks, and feel free to re-open the issue!