Clean up N_MAX memory #222
Conversation
createSegmentArrayRanges(*modulesInGPU, *rangesInGPU, *mdsInGPU, nLowerModules, nTotalSegments, stream, N_MAX_SEGMENTS_PER_MODULE, N_MAX_PIXEL_SEGMENTS_PER_MODULE);
// cout<<"nTotalSegments: "<<nTotalSegments<<std::endl; // for memory usage

//problem here: didn't distinguish pixel segments and outer tracker segments. so they use the same memory index, which should be different and allocated dynamically
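The comment above points out that pixel segments and outer-tracker segments share one memory index and one capacity. A minimal sketch of what separating them could look like; all names here (`SegmentCapacities`, `computeCapacities`) are illustrative and not the repository's actual API:

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Hypothetical sketch: instead of indexing pixel segments and outer-tracker
// segments through one shared capacity, track the two occupancies separately
// so each buffer can be sized dynamically on its own.
struct SegmentCapacities {
    unsigned nOuterTracker;  // segments built from outer-tracker mini-doublets
    unsigned nPixel;         // segments imported from pixel seeds
};

SegmentCapacities computeCapacities(const std::vector<unsigned>& segsPerModule,
                                    unsigned nPixelSegments) {
    // outer-tracker total is the sum of the per-module occupancies
    unsigned total =
        std::accumulate(segsPerModule.begin(), segsPerModule.end(), 0u);
    return {total, nPixelSegments};
}
```

With two independent sizes, each buffer can be allocated from its actual occupancy rather than a shared worst case, which is the redundancy the comment is describing.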
Is this a problem of this PR or something general?
It's fine to leave a comment here for now but I think that creating an issue with more details would be more helpful in the long run.
I think it's due to the way we define our objects: the segments container holds both pixel segments and outer-tracker segments. Separating the two would not be minor work, so we may not want to make it an action item right now; instead we could file an issue for it. It should only be a matter of redundant memory, not a correctness problem for the algorithm. Maybe we just leave a comment here, and anyone who wants to optimize it can do so later?
So it's not a problem but rather an optimization? I don't remember discussing it recently, so I am a bit surprised it just came up in a comment at a random place in the code. If anything, I would bring it up to be discussed with everyone and understand if this is something to aim for the future (in which case we should make it an issue).
Hi Manos, regarding the comment I put here: there's no need to bother with further optimization now, since we don't have a clear goal for how much memory we want to fit in. If we separated the pixel segments and outer-tracker segments, we could dynamically allocate the two independently and save more. But only ~100 MB was saved last time I cleaned up the segments, so I'm just leaving a comment here rather than aiming to solve it in the short term :)
I see. Then let's not call it a "problem"; maybe an "optimization"? Still, I would say it makes more sense not to leave optimization comments at random points in the code, but rather to make an issue explaining in a few more words what the optimization could be, with pointers back to the code.
(By the way, please don't "resolve" the conversations you reply to before there is an indication that the other person has read them. GitHub collapses resolved comment threads and they can easily be missed. Apart from that, I still think this conversation is not "resolved", since the comment, as written, is misleading and is also present in the new PR.)
Ok, I'll explain it a bit more in the code there and make an issue pointing here. And sorry for resolving and collapsing the threads; I won't do that again...
Also, cuda-memcheck and nvprof work without issues.
These variables are cleaned up for the N_MAX md, sg, t3, and t5 in the code; checked by using grep.
PR 202 + PR 211 + New plotting scheme and various cleaning of the output and performance plot making workflow
Fix Input Directory for RelVal Files and Removal of Useless File
Object accessors
No, this PR was only waiting for 212 and 216, not for others. I'll fix the conflict and then update this. Thanks!
Hi Yanxi,
Thanks Manos! I think you are right; I can close this PR, cherry-pick only the commit I changed, and make a new PR for this :)
This PR is closed; another one replacing it will be pushed up soon.
As mentioned in the title: there is some remaining code referring to N_MAX_MD, N_MAX_SG, N_MAX_T3, and N_MAX_T5. These memories are already allocated dynamically, so these fixed values are no longer useful and create confusion.
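The description says the fixed `N_MAX_*` caps are superseded by dynamic allocation. A minimal sketch of that replacement, with hypothetical names (`dynamicCapacity`, `occupancyPerModule` are not the repository's actual code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative only: once buffer sizes are derived at run time from the
// event's measured occupancy, a compile-time cap such as
// N_MAX_SEGMENTS_PER_MODULE becomes dead code and can be deleted.
unsigned dynamicCapacity(const std::vector<unsigned>& occupancyPerModule,
                         float safetyFactor) {
    unsigned total = 0;
    for (unsigned n : occupancyPerModule) total += n;  // actual per-event need
    // an optional safety margin replaces the one-size-fits-all N_MAX constant
    return static_cast<unsigned>(std::ceil(total * safetyFactor));
}
```

Grepping for the `N_MAX_*` identifiers, as done in this PR, then confirms no allocation path still depends on the fixed values before removing them.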