Integrate LLAMA #4003
base: dev
Conversation
Also please note that PIConGPU already moved on to boost 1.74
Awesome, I updated the PR description!
.gitignore
Outdated
# Visual Studio configuration and output files
out
/.vs
I don't think this should go into the PIConGPU dev branch, since PIConGPU does not especially favour VS.
If this is really a big problem for you, then I can remove it. However, you already ignore so many other tools' temporary files (macOS stuff, VS Code, even Code::Blocks and NetBeans), so I don't think these additional ones would hurt.
if(IN_SRC_POS GREATER -1)
    message(FATAL_ERROR
        "PIConGPU requires an out of source build. "
        "Please remove \n"
        "  - CMakeCache.txt\n"
        "  - CMakeFiles/\n"
        "and create a separate build directory. "
        "See: INSTALL.rst")
endif()
#if(IN_SRC_POS GREATER -1)
#    message(FATAL_ERROR
#        "PIConGPU requires an out of source build. "
#        "Please remove \n"
#        "  - CMakeCache.txt\n"
#        "  - CMakeFiles/\n"
#        "and create a separate build directory. "
#        "See: INSTALL.rst")
#endif()
You should not change this; it is fine for your pull request, but not for PIConGPU in general.
Yeah, this should not get merged here. However, some IDEs like Visual Studio and CLion, when they spot a CMakeLists.txt, start to configure the project in a subfolder of the working copy, so they can offer better help to the developer. This is still an out-of-source build, by the way, and you can build PIConGPU fine this way as well. It is just not the intended workflow when working on a cluster.
@@ -211,7 +211,7 @@ namespace picongpu
const DataSpace<simDim> localCell(DataSpaceOperations<simDim>::template map<TVec>(particleCellIdx));

Velocity velocity;
const float3_X vel = velocity(particle[momentum_], attribute::getMass(weighting, particle));
const float3_X vel = velocity(static_cast<float3_X>(particle[momentum_]), attribute::getMass(weighting, particle));
Is this cast really necessary? As far as I know, momentum_ should already be float3_X.
Yes, this is necessary when particle[momentum_] returns a proxy reference, which it can with certain LLAMA mappings.
@sbastrakov or @psychocoderHPC, if you have a little bit of free time, could you take a look? I am not yet qualified to review this ;)
@bernhardmgruber you should also run clang-format on your pull request, after having loaded/installed it (or an LLVM package containing it), and fix the formatting errors the CI is complaining about. Note: the clang-format version is important; PIConGPU uses 12.0.1.
I believe the plan is that @bernhardmgruber comes to Dresden for some time relatively soon to finish the integration of LLAMA into PIConGPU.
I will resume on this work starting on the 28th of November. I will be at HZDR for two weeks to complete this work. |
Force-pushed from 8582027 to d06c3cf
So, I updated the branch to LLAMA
Could you run clang-format-12 so that we can see what the CI is saying?
I know, I only have clang-format-15 locally on Windows. On my Ubuntu 22.10, it is also no longer available: the LLVM apt repository only has versions dev, 15 and 14 (secretly, the older ones might still be reachable, I did not check). How about you update to clang-format-14? :)
Force-pushed from 158f295 to 9317b83
// struct ParticleFrameMapping : llama::mapping::BindAoS<false>
struct ParticleFrameMapping : llama::mapping::BindSoA<false>
// struct ParticleFrameMapping : llama::mapping::BindAoSoA<16>
// struct ParticleFrameMapping : llama::mapping::BindAoSoA<32>
// struct ParticleFrameMapping : llama::mapping::BindAoSoA<64>
{
    inline static constexpr bool splitVector = false;
};
For:
Find a better way to choose which memory layout to use in the param files
I am now using a quoted metafunction (ParticleFrameMapping) which has an additional flag for whether the Vectors should be split up into their components. That is good enough for most cases, I guess. It can cover all LLAMA mappings which only depend on static/compile-time information.
Unfortunately, updating the formatting requirement has low priority. The only nice feature that would come with clang-format-14 is east const. At the moment, lifting the requirement is also hard because we have a few students currently trying to bring the extensions they developed into PIConGPU. Switching the code style now would maybe introduce too much pain for them.
I can pull your branch and force-push the formatted version if you like.
We can do that once I am finished with everything else. I am actively working on this branch ATM to make the IO work as well.
Force-pushed from 9317b83 to b6ef6fe
Force-pushed from 9f510dc to 3ff629c
Our dev server still has clang-format 12.0.1; in general you can install/load it using Spack via the corresponding llvm package. Also be aware that PIConGPU is even patch-level sensitive: you need clang-format 12.0.1 specifically.
Force-pushed from 3ff629c to 3acaf12
Force-pushed from bfa7c8c to e9f1a67
Can I prevent the CI from running when I push changes, but keep this PR open?
Force-pushed from fdadfa9 to 9ebecda
Force-pushed from aa4e543 to 355d47b
@@ -22,6 +22,8 @@

#include "picongpu/simulation_defines.hpp"

#include "picongpu/param/memory.param"
Is this a good idea?
This is needed by VS to configure the project in open-folder mode.
Force-pushed from 656aab9 to c8b4f3d
Force-pushed from c8b4f3d to bd68665
Also support LLAMA frames in the IO.
Force-pushed from bd68665 to d008726
@ComputationalRadiationPhysics/picongpu-maintainers This is not planned anymore, is it?
A certain @psychocoderHPC fearfully admitted in the past that LLAMA may be a solution too complex to be maintained within PIConGPU. Hence, I expect this PR to be closed without a merge eventually.
I will soon create an issue for the LLAMA integration to keep a link to this PR before we close it :-(
This PR integrates LLAMA for handling the memory layout of several core data structures.
Open tasks:
GIT_AUTHOR_NAME="Third Party" GIT_AUTHOR_EMAIL="[email protected]" git subtree pull --prefix thirdParty/llama [email protected]:alpaka-group/llama.git develop --squash
Pull upgrade to Boost 1.70 into a separate PR (done in CMake: request boost 1.74.0+ #4144)
After checking out this branch, you need to run git submodule update to get LLAMA.