feat(queryEngine): add limit argument to updateMany #5110
Conversation
CodSpeed Performance Report: merging #5110 will not alter performance. The report also compares WASM Query Engine file size.
Force-pushed from ee23587 to 0a4637f.
```rust
run_query!(
    &runner,
    r#"{
        findManyTestModel(orderBy: { id: asc }) {
```
ℹ️ Using orderBy here to ensure stability of query output across different DBs. E.g. mongo and postgres returned opposite ordering by default.
I've done the same. Maybe not in this PR, but it would be nice to be able to do an assertion like `assert_has_the_same_elements`, which is not easy to do with the current test API because the results come back as JSON.
That way we would actually be able to cover these cases without using `orderBy`, which is valuable coverage.
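A minimal sketch of such a helper, assuming the result rows can be rendered as comparable strings (the real test API returns JSON, so a production version would compare parsed values instead; the name `has_same_elements` is hypothetical):

```rust
/// Hypothetical helper: order-insensitive comparison of result rows.
/// Sorting both sides before comparing ignores row ordering (which differs
/// between e.g. mongo and postgres by default) but still checks content.
fn has_same_elements(left: &[&str], right: &[&str]) -> bool {
    let mut l: Vec<&str> = left.to_vec();
    let mut r: Vec<&str> = right.to_vec();
    l.sort();
    r.sort();
    l == r
}
```

With this, a test could assert that two engines returned the same rows regardless of their default ordering, without forcing an `orderBy` into the query.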
```rust
use quaint::ast::*;
use query_structure::*;

pub(crate) fn wrap_with_limit_subquery_if_needed<'a>(
```
ℹ️ Using the same logic to create a subquery for updates and deletes, so I centralized that logic here.
🤔 Happy to learn whether we have a better place for such shared logic. :)
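The shared idea is roughly this shape. A hypothetical string-level sketch (the real helper builds quaint AST nodes; this stand-in emits plain SQL text, and the `id` column name is assumed):

```rust
/// Hypothetical sketch: when a limit is set, rewrite the filter so the
/// UPDATE/DELETE only targets the first `n` matching rows via an id
/// subquery, since `UPDATE ... LIMIT` is not portable across databases.
fn wrap_with_limit_subquery_if_needed(table: &str, filter: &str, limit: Option<u64>) -> String {
    match limit {
        // Wrap: "id" IN (SELECT "id" FROM <table> WHERE <filter> LIMIT n)
        Some(n) => format!(r#""id" IN (SELECT "id" FROM "{table}" WHERE {filter} LIMIT {n})"#),
        // No limit: leave the filter untouched.
        None => filter.to_string(),
    }
}
```

Both `updateMany` and `deleteMany` can then share the wrapper and only differ in the outer statement they attach the filter to.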
```diff
@@ -342,3 +358,9 @@ fn can_use_atomic_update(
     true
 }
 
+pub struct UpdateManyRecordNodeOptionals<'a> {
```
ℹ️ The linter was complaining about too many params on the `update_many_record_node` method, so I extracted the most recently introduced optional params into this param object.
🤔 Happy to learn if you have a stylistically better or more idiomatic approach for this. :)
I think it's okay, but a slightly more idiomatic approach would be to refactor `update_many_record_node` into a builder pattern (something like `UpdateManyRecordNodeBuilder::new(graph, query_schema, filter, model, data_map).limit(1).build()`).
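A hypothetical sketch of that builder shape, with the real node types (`graph`, `query_schema`, `Filter`, the data map) stood in by plain strings just to show the chaining:

```rust
/// Hypothetical sketch of the suggested builder refactor. Required inputs go
/// into `new`, optional ones become chainable setters, so adding another
/// optional param later does not grow a positional argument list.
struct UpdateManyRecordNodeBuilder {
    filter: String,       // stand-in for the real Filter type
    limit: Option<usize>, // optional cap on affected rows
}

impl UpdateManyRecordNodeBuilder {
    fn new(filter: &str) -> Self {
        Self { filter: filter.to_string(), limit: None }
    }

    fn limit(mut self, n: usize) -> Self {
        self.limit = Some(n);
        self
    }

    /// Stand-in for constructing the actual query-graph node.
    fn build(self) -> String {
        format!("UpdateMany(filter: {}, limit: {:?})", self.filter, self.limit)
    }
}
```

Compared with a params struct, the builder keeps call sites readable when most optionals are absent, at the cost of a bit more boilerplate in the definition.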
🤔 I'm not 100% sure about the methods in this file, but since it mentions `emulate` and `cascade`, I assume it's not really relevant to leverage a limit param here.
Resolved (outdated) review threads on:
- query-engine/connectors/mongodb-query-connector/src/root_queries/write.rs
- query-engine/connectors/sql-query-connector/src/database/operations/write.rs
```diff
@@ -136,7 +145,7 @@ pub(super) async fn update_many_from_ids_and_filter(
     let updates = {
         let update = build_update_and_set_query(model, args, selected_fields, ctx);
-        let ids: Vec<&SelectionResult> = ids.iter().collect();
+        let ids: Vec<&SelectionResult> = ids.iter().take(limit.unwrap_or(i64::MAX) as usize).collect();
```
`i64::MAX as usize` will overflow on platforms where pointers are less than 8 bytes. I think we compile this to WASM, where this would overflow.
Correct, and I am aware. We do these kinds of typecasts unfortunately all over the place. I briefly discussed this with @aqrln earlier this week in the scope of the `deleteMany` changes. Not sure how we could best fix this universally. Ideas?
I don't know how common an issue this is in the codebase, but in the two particular cases in this PR you can avoid using `i64::MAX`:

```rust
ids.iter().take(limit.map(|i| usize::try_from(i).unwrap()).unwrap_or(ids.len()))
```
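The suggested pattern can be exercised standalone. A minimal sketch with `SelectionResult` replaced by plain integers (the helper name `take_limited` is hypothetical):

```rust
/// Take at most `limit` ids; when no limit is set, fall back to the full
/// length instead of a sentinel like i64::MAX, avoiding any lossy cast.
fn take_limited(ids: &[i32], limit: Option<i64>) -> Vec<i32> {
    ids.iter()
        .copied()
        .take(limit.map(|i| usize::try_from(i).unwrap()).unwrap_or(ids.len()))
        .collect()
}
```

`usize::try_from` makes a limit that cannot fit in `usize` an explicit error rather than a silent truncation (here surfaced via `unwrap`; real code would map it to a user-facing error).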
Also, this will not technically "overflow" in an unsafe way (thanks Rust 👋), but will just de facto cap out at `u32::MAX` should the provided limit be larger.
`as` casts in Rust do overflow (or a better term might be truncate); for example, `4294967296u64 as u32` will give you `0`.
This one is an interesting case though, because casting a bigger MAX to a smaller MAX will work fine: truncation will always produce the max value of the smaller type.
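Both behaviors can be checked directly. A standalone demonstration (not project code; the helper names are illustrative):

```rust
/// `as` casts between integer types truncate (keep the low bits) rather
/// than saturate or panic, even in debug builds.
fn truncate_to_u32(x: u64) -> u32 {
    x as u32
}

/// Checked alternative: fails loudly instead of silently truncating.
fn checked_to_u32(x: u64) -> Option<u32> {
    u32::try_from(x).ok()
}
```

`truncate_to_u32(4294967296)` (that is, 2^64 bits holding 2^32) yields `0`, while truncating `i64::MAX` keeps its low 32 one-bits and lands exactly on `u32::MAX`, which is why the MAX-to-MAX case happens to behave like a cap.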
I'd probably avoid the conversion-through-truncation here, but it's not a big deal; I don't know how others feel.
Updated this with some early sanity checks and user-facing error messages in this commit: d8c2f27
```diff
@@ -160,6 +161,7 @@ pub async fn update_records<'conn>(
     let ids: Vec<Bson> = if let Some(selectors) = record_filter.selectors {
         selectors
             .into_iter()
+            .take(limit.unwrap_or(i64::MAX) as usize)
```
This PR adds an optional `limit` argument to the `updateMany` call, allowing the maximum number of rows affected by the update to be constrained, as requested in prisma/prisma#6957. It uses a subquery approach similar to `deleteMany` from #5105.