Solve and update well solution in network sub iterations #6011
base: master
Conversation
jenkins build this please
jenkins build this failure_report please
The test failures are expected and reasonable.
With this we should also consider changing the default maximum number of iterations (e3c0b72). 3 seems to be a good number: that gives 3 outer iterations, where gaslift and all group constraints are checked carefully, each with 20 sub-iterations (a maximum of 60 network iterations in total).
I think this probably should be merged if it resolves issues in concrete cases, but eventually we should simplify the algorithm away from having two loops for the network balance. (It is currently being worked on.)
const auto& ws = this->wellState().well(well->indexOfWell());
const bool thp_is_limit = ws.production_cmode == Well::ProducerCMode::THP;
if (thp_is_limit) {
    well->prepareWellBeforeAssembling(this->simulator_, dt, this->wellState(), this->groupState(), deferred_logger);
This is the function that solves the wells... We should change the name to something that says that more clearly (not in this PR).
I agree
}
}
}
this->updateAndCommunicateGroupData(episodeIdx, iterationIdx, param_.nupcol_group_rate_tolerance_, deferred_logger);
Before this PR, it is not clear to me if the sub-iterations had any effect at all? The damping of the network update would apply and we would update the network values until they were within tolerance of the true values (true in the sense of "what you get at the currently given rates"). With this PR, well rates will change from one iteration to the next, and the damping actually could matter.
My question is then: what do you see as the role of these inner iterations, compared to the outer iterations in updateWellControlsAndNetwork()? It seems to me that the moment we are close enough here, we will return a flag that terminates the outer iterations. That seems a bit premature, as the group limits could still change in the outer loop. In practice, on difficult cases, what is the typical iteration behaviour you see with this new code?
it is not clear to me if the sub-iterations had any effect at all?
It had the unintentional effect of not applying the damping.
I think it makes sense to have two levels of iterations: an inner iteration where only the THP-controlled wells are updated and damping is applied, and an outer loop where gaslift and the full group/well checks are performed. Combining damping with gaslift optimization is not very efficient. This was my original intention, but it seems I missed the well rate update hidden in prepareWellBeforeAssembling. This PR tries to fix this. But I agree that the outer loop should also be rerun when gaslift and/or group controls have changed. I will make an update and test.
I fixed this in: 647e7f8
That commit was a bit confusing, since it also included changes not related to that point (whose motivation I did not quite understand), but I think it also achieves what you claim. We should change the names of the return arguments to be precise, though: it is no longer right that the first one only says something about the network. Also, the order of the return arguments flips down the call stack, which should be fixed while we are here.
I do not see the commit as part of this or any other PR though?
I have a testing PR that collects different PRs and includes it: #6015
if (it != this->node_pressures_.end()) {
    const auto& ws = this->wellState().well(well->indexOfWell());
    const bool thp_is_limit = ws.production_cmode == Well::ProducerCMode::THP;
    if (thp_is_limit) {
Should we not also check whether ws.thp is equal to the node pressure? If it is, then we do not need to re-solve this well (assuming that ws.thp is set to the limit in the course of the function called below; it may be good to check these states in the debugger). This could happen when there is more than one root of the network tree and only one of them is changing.
Maybe. I am not sure how much effect that will have, since the wells that were not updated should already be converged.