Maxconnection now just limits the concurrent connections to mongodb instead of raising an exception when the maxconnections are reached. #45
As the title of the commit says, I modified ConnectionPool and Cursor so that when maxconnections is reached, the request is queued in the connection pool instead of raising an exception.
Whenever a connection goes back into the cache, the backlog of requests is checked; if it is not empty, the now-free connection is handed to the leftmost (oldest) callback in the backlog.
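Here is a minimal sketch of that queueing idea, not the actual asyncmongo code; the class layout, attribute names, and the `_create_connection` helper are placeholders of my own:

```python
from collections import deque


class ConnectionPool(object):
    """Simplified pool: requests are queued instead of raising
    TooManyConnections when maxconnections is reached."""

    def __init__(self, maxconnections=10):
        self._maxconnections = maxconnections
        self._open = 0            # connections created so far (idle + in use)
        self._idle = []           # cached connections ready for reuse
        self._backlog = deque()   # callbacks waiting for a free connection

    def connection(self, callback):
        """Hand a connection to `callback`, queueing the request when the
        pool is exhausted instead of raising an exception."""
        if self._idle:
            callback(self._idle.pop())
        elif self._open < self._maxconnections:
            self._open += 1
            callback(self._create_connection())
        else:
            # Pool exhausted: remember the request; it will be served
            # as soon as a connection is cached back.
            self._backlog.append(callback)

    def cache(self, connection):
        """Put a connection back into the pool; serve the oldest queued
        request first if the backlog is not empty."""
        if self._backlog:
            callback = self._backlog.popleft()   # leftmost = oldest request
            callback(connection)
        else:
            self._idle.append(connection)

    def _create_connection(self):
        # Placeholder for the real MongoDB connection setup.
        return object()
```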
I did this patch because I ran into a file descriptor limit problem while using asyncmongo.
I have a system wired to RabbitMQ that sometimes receives ~10000 messages in a few seconds, each of which triggers a request to the MongoDB server. Instead of throttling my own system, I thought it would be interesting to let asyncmongo throttle itself, since I doubt I will be the only one in this situation.
I removed the raise of TooManyConnections altogether, but if you prefer, I could add a variable to switch between the two behaviours.
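For example, a hypothetical opt-in flag could restore the old behaviour; the class and argument names below are illustrative only (not existing asyncmongo API) and build on the sketch above:

```python
class TooManyConnections(Exception):
    pass


class SwitchableConnectionPool(ConnectionPool):
    """Same pool as above, with an optional flag to restore the
    old raise-on-exhaustion behaviour."""

    def __init__(self, maxconnections=10, raise_on_max=False):
        super(SwitchableConnectionPool, self).__init__(maxconnections)
        self._raise_on_max = raise_on_max

    def connection(self, callback):
        exhausted = not self._idle and self._open >= self._maxconnections
        if exhausted and self._raise_on_max:
            raise TooManyConnections()           # previous behaviour
        # Otherwise fall back to the queueing behaviour sketched above.
        ConnectionPool.connection(self, callback)
```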