Concurrency #45
Comments
Wow, that's a really powerful execution flow! But it's almost hard for me to tell: where's the overlap between something like that and graphql-batch? I guess lookup tasks would benefit from batching. That would be tricky with the current implementation, since it uses promise.rb.
@rmosolgo Fair point! This kind of concurrency could definitely be used independently of batching.
Oh, right, I guess the resolve function is called on the main thread, so calls into subtasks could be batched! Interesting ... 🤔
@rmosolgo We could move this discussion to https://github.com/rmosolgo/graphql-ruby or Slack if you'd like to discuss more! Definitely some interesting gains to explore.
If you just want to do some unbatched query, then promise.rb allows you to do that, as mentioned in lgierth/promise.rb#23 (comment).

I haven't prototyped a concurrent executor in graphql-batch yet. Previously I thought that maybe this could be done using a custom executor. It looks like the required methods for the executor include an accessor for a loaders hash.

Let me know if that works for you, or whether there is monkey patching that you still think is needed. If you get something working, then let me know so that I know what needs to be kept stable to support it.
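As a sketch of that unbatched pattern (following the promise.rb discussion linked above; LazyPromise and its block-taking constructor are illustrative names, not part of graphql-batch):

require "promise"

class LazyPromise < Promise
  def initialize(&block)
    super()
    @block = block
  end

  # promise.rb calls #wait when a pending promise is synced; fulfilling here
  # makes the promise compute its value on demand, outside any batch.
  def wait
    fulfill(@block.call) if pending?
  end
end

# In a resolver: LazyPromise.new { SomeThirdPartyApi.fetch(id) }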
@dylanahsmith Yep, sorry, this conversation started concurrently with that one 😉 I need to familiarize myself a bit more with graphql-batch.
I'm doing some rather hacky things to achieve concurrency (which I need because I am accessing third-party APIs). It's less than ideal. I spent an hour or so attempting to make the Executor concurrency-friendly, but could not get it working without breaking a lot of other code. Here's an example:

# ...
field :user, Types::UserType do
  # The lazy half: graphql-ruby calls this with the promise once the
  # value is actually needed.
  lazy_resolve ->(promise, _args, _ctx) do
    # Omitting some other stuff here for brevity, such as
    # ActiveSupport::Dependencies.interlock.permit_concurrent_loads...
    promise.catch { |e| GraphQL::ExecutionError.new(e.message) }.value
  end
  # The eager half: kick off the batch on a background thread right away.
  resolve ->(obj, _args, _ctx) do
    Concurrent::Promise.execute do
      GraphQL::Batch.batch do
        Loaders::UserLoader.for.load(obj.user_id)
      end
    end
  end
end
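For what it's worth, graphql-ruby also has a schema-level lazy_resolve hook, which can replace the per-field .value call above. A sketch using the same define-style API (QueryType assumed to exist; the error translation done in the field above is omitted):

Schema = GraphQL::Schema.define do
  query QueryType
  # Treat Concurrent::Promise as a lazy value: graphql-ruby will call
  # #value on it only when the result is actually needed. Use :value! if
  # you want rejections to raise instead of returning nil.
  lazy_resolve(Concurrent::Promise, :value)
end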
Since it came up in this issue, graphql-batch can now accept a custom executor class. See #68.
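A sketch of the wiring that change enables, assuming the executor_class: keyword argument to GraphQL::Batch.batch (the subclass body and names here are illustrative):

class MyExecutor < GraphQL::Batch::Executor
  # Override hooks here, e.g. to instrument or parallelize loader resolution.
end

GraphQL::Batch.batch(executor_class: MyExecutor) do
  Loaders::UserLoader.for.load(user_id) # user_id assumed in scope
end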
Has there been any resolution on this issue of concurrently issuing batch queries? I am struggling with this exact problem. I can get concurrent futures to run in GraphQL, and I can get synchronous batch queries working as well, but getting the two to play nicely together is not trivial. Is the concurrency issue one that can be resolved in graphql-batch?
While I love that the A+ Promise gives us the ability to compose things in a functional context, I found that the current implementation requires quite a bit of monkey-patching and additional types/interfaces to take advantage of the natural concurrency that futures typically give us. Would the team be open to a PR around a basic future/task/IO type that can take advantage of nonblocking IO?
GraphQL queries are a natural fit for running concurrent fetches at each layer: multiple independent Active Record queries, web requests (e.g., Elasticsearch), etc. The interface could even be the same as Promise (though I'd recommend that #then be sugar for #map and #flat_map, and a few additional methods to make things more predictable/composable). Here's an example for a single resolver:
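(A sketch: Task here is a hypothetical type built on plain Ruby Threads; Task, #map, #flat_map, and Task.all are illustrative names, not an existing library's API.)

# Hypothetical Task type: starts work immediately on a background thread.
class Task
  def initialize(&block)
    @thread = Thread.new(&block)
  end

  # Block for the result; Thread#value re-raises any exception from the
  # block, which gives the fail-fast behaviour described below.
  def await
    @thread.value
  end

  # Transform the eventual value.
  def map(&block)
    Task.new { block.call(await) }
  end

  # Chain onto another Task and flatten.
  def flat_map(&block)
    Task.new { block.call(await).await }
  end

  # Combine tasks into one task yielding all of their values.
  def self.all(tasks)
    Task.new { tasks.map(&:await) }
  end
end

# A createProject-style mutation resolver: fetch the current user and the
# category concurrently, then create the project. Model and argument names
# are made up for the example.
resolve ->(_obj, args, ctx) do
  user_task     = Task.new { User.find(ctx[:current_user_id]) }
  category_task = Task.new { Category.find(args[:category_id]) }

  Task.all([user_task, category_task]).flat_map do |(user, category)|
    Task.new { Project.create!(owner: user, category: category) }
  end.map { |project| { project: project } }.await
end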
This mutation can fire off the Active Record queries for the current user and the category at the same time using Thread. As soon as one fails, it short-circuits by raising an exception and doesn't need to wait for the other query to complete. When both queries succeed, it can immediately flat-map onto another task that creates the project, which maps into the final return value.

A library like GraphQL Batch could further take advantage of concurrency by using Task.all internally at each nested node of the query tree (a small sketch follows below). Fulfilled tasks are memoized, so later queries can be reduced to avoid fetching records that earlier queries have already fetched, and the results combined afterwards.

There's lots of potential here, but a bit of work to get there. I'm curious about community interest, though, seeing as we're beginning to explore these benefits internally.
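To make the Task.all point concrete, one nested level might look like this (same hypothetical Task type as above; model names made up):

comments_task = Task.new { Comment.where(project_id: project.id).to_a }
members_task  = Task.new { Membership.where(project_id: project.id).to_a }

# Both queries run on their own threads; await returns when the slower
# one finishes, or raises as soon as we reach a failed task.
comments, members = Task.all([comments_task, members_task]).await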