Pipeline support? #14
Hi,
Do you plan to add support for pipelining?
Thx
Comments
You can already do this.
I'm happy to add a more convenient API, but I've not found one that I'm completely happy with yet.
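For reference, a minimal sketch of what "already doing this" looks like, assuming a local redis server (the key names are made up): each command returns a deferrable immediately, so commands issued back-to-back in the same tick go out together.

```ruby
require 'em-hiredis'

EM.run do
  redis = EM::Hiredis.connect

  # None of these block: each call returns a deferrable straight away,
  # so all three commands are written out together.
  redis.set('a', '1')
  redis.incr('counter').callback { |n| puts "counter is now #{n}" }
  redis.get('a').callback do |value|
    puts "a = #{value}"
    EM.stop
  end
end
```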
Yes, that's true. But I was thinking about true pipelining to reduce network latency. In some contexts, I need to send a lot of requests to redis. The multi ... exec commands don't solve that problem.
I'm not sure what you mean. The requests are asynchronous, so you can send requests as rapidly as you like.
Well, sorry, my English is poor. OK, let's try an example. I have an API web app that returns JSON. My web server is Goliath, so I'm using em-hiredis in an em-synchrony way. For one particular API call, I have to make 100 requests to redis, which means 100 network round trips to redis (so it's slow). I would have liked to send just one request to redis via pipelining.
The way EventMachine works is that if you send data multiple times to the same connection during a 'tick' of the event loop, the data will be buffered and sent in one go at the end of the tick (http://rubydoc.info/github/eventmachine/eventmachine/EventMachine/Connection:send_data). Therefore sending 1000 commands to redis will not result in 1000 TCP packets. For example, the following will send all 5 calls to get in the same packet:
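(The original snippet is not preserved in this copy of the thread; the following is a sketch of the idea, with hypothetical keys.)

```ruby
require 'em-hiredis'

EM.run do
  redis = EM::Hiredis.connect

  # All five GETs are issued in the same event loop tick, so EventMachine
  # buffers the writes and flushes them together at the end of the tick.
  5.times do |i|
    redis.get("key#{i}").callback { |value| puts "key#{i} => #{value}" }
  end
end
```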
Having said this, a little bit of packet capturing has revealed a little problem which I'm hoping @tmm1 will be able to cast some light upon. It appears that calls to […]
result in these packets being sent to the dumb server on port 10000: […]
Packet 2: […]
@tmm1 is there a reason for this behaviour? Should […]
https://github.com/eventmachine/eventmachine/blob/master/ext/ed.cpp#L977 (the cause of the batching into groups of 16)
Thanks, interesting. I did not know about this EventMachine behavior. I still have the issue because I'm in an em-synchrony context, so each redis call waits for its answer. But that has more to do with em-synchrony and the way I use it than with em-hiredis. I get your point. I still think that adding a way to buffer pipelined commands would be a nice addition to em-hiredis (as jedis or redis-rb do). Thanks again for your answers.
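For what it's worth, one way to keep pipelining in an em-synchrony context is to issue all the deferrables first and only then sync on them, so the fiber blocks once per batch rather than once per command. A sketch, assuming an already-connected EM::Hiredis client (redis) inside a running reactor and fiber, with hypothetical keys:

```ruby
# Syncing each command as it is issued costs one round trip per command:
#   values = (0...100).map { |i| EM::Synchrony.sync(redis.get("key#{i}")) }

# Issuing all the deferrables first lets them share one tick (one write),
# and only then blocking the fiber on each result:
deferrables = (0...100).map { |i| redis.get("key#{i}") }
values = deferrables.map { |df| EM::Synchrony.sync(df) }
```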
I do think, though, that there is a distinct difference between pipelining and using multi: multi is transactional and pipelining is not.
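To make the transactional side concrete, a sketch of MULTI/EXEC via em-hiredis; this assumes em-hiredis forwards multi and exec like any other command (the key is hypothetical):

```ruby
require 'em-hiredis'

EM.run do
  redis = EM::Hiredis.connect

  # The commands between MULTI and EXEC are queued and then executed
  # atomically; EXEC's reply is an array of all the queued replies.
  redis.multi
  redis.incr('visits')
  redis.get('visits')
  redis.exec.callback do |replies|
    p replies  # e.g. [1, "1"]
    EM.stop
  end
end
```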
Because EM defers all I/O, the buffering is implicit. If you write 100 commands in the same tick, they are automatically pipelined (setting aside the EM buffering artifact that @mloughran describes).
@pietern precisely. I can see a weak argument for adding a pipelining API in order to work around the EM buffering artefact, but as you say pipelining is automatic. My opinion is that the current API is good enough, and that if you want to ensure that all the commands are executed by the server consecutively, you can just use multi-exec (and probably should). Is there a performance penalty on the server for using multi-exec when you don't really need atomicity?
@pietern how does the implicit pipelining work on the response side of the equation? @mloughran There has to be some penalty for using multi-exec if you have multiple clients talking to the same redis server and you are in a high-volume situation.
@talbright however, redis is single-threaded and only deals with a single client's request at any one time; multi-exec doesn't change this. On the response side, if there are many responses waiting on the socket, em-hiredis will process all available responses in the same event loop tick, calling whatever callbacks you have defined on the commands. However, if you want to guarantee that you get all responses at the 'same time' in your code, then again multi-exec is the way to go. Does that make sense?