Integrate LLAMA #4003

Draft
wants to merge 2 commits into dev
Conversation

@bernhardmgruber (Contributor) commented Mar 1, 2022

This PR integrates LLAMA for handling the memory layout of several core data structures.

Open tasks:

  • Integrate the LLAMA library as a subtree. For this, run the command: GIT_AUTHOR_NAME="Third Party" GIT_AUTHOR_EMAIL="[email protected]" git subtree pull --prefix thirdParty/llama [email protected]:alpaka-group/llama.git develop --squash
  • Pull the Boost version upgrade into a separate PR (done in CMake request boost 1.74.0+ #4144)
  • Find a better way to choose which memory layout to use in the param files
  • Make IO compile
  • Test IO works
  • Test performance is the same as before

After checking out this branch, you need to run git submodule update to get LLAMA.

@BrianMarre (Member)

Also please note that PIConGPU already moved on to boost 1.74

@bernhardmgruber (Contributor, Author)

> Also please note that PIConGPU already moved on to boost 1.74

Awesome, I updated the PR description!

.gitignore Outdated
Comment on lines 37 to 39
# Visual Studio configuration and output files
out
/.vs
Member:

I don't think this should go into the PIConGPU dev branch, since PIConGPU does not especially favour VS.

Contributor (Author):

If this is really a big problem for you, I can remove it. However, since so many other tools' temporary files are already ignored (macOS artifacts, VS Code, even Code::Blocks and NetBeans), I don't think these additional entries would hurt.

Comment on lines -133 to +141
if(IN_SRC_POS GREATER -1)
message(FATAL_ERROR
"PIConGPU requires an out of source build. "
"Please remove \n"
" - CMakeCache.txt\n"
" - CMakeFiles/\n"
"and create a separate build directory. "
"See: INSTALL.rst")
endif()
#if(IN_SRC_POS GREATER -1)
# message(FATAL_ERROR
# "PIConGPU requires an out of source build. "
# "Please remove \n"
# " - CMakeCache.txt\n"
# " - CMakeFiles/\n"
# "and create a separate build directory. "
# "See: INSTALL.rst")
#endif()
Member:

You should not change this; it is fine for your pull request, but not for PIConGPU in general.

Contributor (Author):

Yeah, this should not get merged here. However, some IDEs like Visual Studio and CLion, when they spot a CMakeLists.txt, start configuring the project in a subfolder of the working copy, so they can offer better help to the developer. This is still an out-of-source build, by the way, and you can build PIConGPU fine this way as well. It is just not the intended workflow when working on a cluster.

@@ -211,7 +211,7 @@ namespace picongpu
const DataSpace<simDim> localCell(DataSpaceOperations<simDim>::template map<TVec>(particleCellIdx));

Velocity velocity;
const float3_X vel = velocity(particle[momentum_], attribute::getMass(weighting, particle));
const float3_X vel = velocity(static_cast<float3_X>(particle[momentum_]), attribute::getMass(weighting, particle));
Member:

Is this cast really necessary? As far as I know, momentum_ should already be float3_X.

Contributor (Author):

Yes, this is necessary when particle[momentum_] returns a proxy reference, which it can with certain LLAMA mappings.
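
For illustration, here is a minimal, self-contained sketch of the situation (hypothetical types, not the real PIConGPU/LLAMA interfaces): with a non-AoS mapping, particle[momentum_] may hand out a proxy object rather than a reference to an actual float3_X, so code that deduces the argument type or accesses members directly needs the explicit conversion.

#include <cstdio>

struct float3_X { float x, y, z; };

// Under an SoA-like mapping, indexing a particle may return a proxy whose
// components live in separate arrays and which converts to float3_X on demand:
struct MomentumProxy
{
    float* px;
    float* py;
    float* pz;
    operator float3_X() const { return {*px, *py, *pz}; }
};

// Code that deduces its argument type sees the proxy itself, not float3_X:
template<typename T>
float firstComponent(const T& v) { return v.x; }

int main()
{
    float x = 1, y = 2, z = 3;
    MomentumProxy p{&x, &y, &z};
    // firstComponent(p);  // would not compile: the proxy has no member .x
    std::printf("%f\n", firstComponent(static_cast<float3_X>(p)));  // fine
}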

@BrianMarre (Member)

@sbastrakov or @psychocoderHPC, if you have a little bit of free time, could you take a look? I am not yet qualified to review this ;)

@BrianMarre (Member) commented Sep 7, 2022

@bernhardmgruber you should also execute something like the following, after having loaded/installed clang-format (or an LLVM package containing it):

find share test include -iname "*.def" -o -iname "*.h" -o -iname "*.cpp" -o -iname "*.cu" -o -iname "*.hpp" -o -iname "*.tpp" -o -iname "*.kernel" -o -iname "*.loader" -o -iname "*.param" -o -iname "*.unitless" | xargs clang-format -i

on your pull request to run clang-format and fix the formatting errors the CI is complaining about.

Note: the clang-format version is important; PIConGPU uses 12.0.1.

@sbastrakov (Member)

I believe the plan is that @bernhardmgruber comes to Dresden for some time relatively soon to finish the integration of LLAMA into PIConGPU.

@bernhardmgruber (Contributor, Author)

I will resume this work on the 28th of November. I will be at HZDR for two weeks to complete it.

@bernhardmgruber (Contributor, Author)

So, I updated the branch to LLAMA develop and PIConGPU dev. I can compile the SPEC example again.

@psychocoderHPC (Member)

Could you run clang-format-12 so that we can see what the CI is saying?

find src/include/mallocMC example share tests include test  -iname "*.def" \
  -o -iname "*.h" -o -iname "*.cpp" -o -iname "*.cu" \
  -o -iname "*.hpp" -o -iname "*.tpp" -o -iname "*.kernel" \
  -o -iname "*.loader" -o -iname "*.param" -o -iname "*.unitless" \
  | xargs clang-format-12 -i

@bernhardmgruber (Contributor, Author)

> Could you run clang-format-12 so that we can see what the CI is saying?

I know, I only have clang-format-15 locally on Windows. On my Ubuntu 22.10, it is also no longer available:

bgruber@dixxi3:/mnt/c/dev/picongpu$ sudo apt install clang-format-12
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package clang-format-12

The LLVM apt repository only has versions dev, 15 and 14 (secretly, the older ones might still be reachable; I did not check).

How about you update to clang-format-14? :)

Comment on lines 115 to 119
// struct ParticleFrameMapping : llama::mapping::BindAoS<false>
struct ParticleFrameMapping : llama::mapping::BindSoA<false>
// struct ParticleFrameMapping : llama::mapping::BindAoSoA<16>
// struct ParticleFrameMapping : llama::mapping::BindAoSoA<32>
// struct ParticleFrameMapping : llama::mapping::BindAoSoA<64>
{
inline static constexpr bool splitVector = false;
};
Contributor (Author):

For:

> Find a better way to choose which memory layout to use in the param files

I am now using a quoted metafunction (ParticleFrameMapping) which carries an additional flag controlling whether the vectors should be split up into their components. That is good enough for most cases, I guess. It can cover all LLAMA mappings that depend only on static/compile-time information.
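
For anyone unfamiliar with the term, a quoted metafunction here is just a type carrying a nested template alias that is applied later, once the remaining template arguments are known. A rough sketch of the pattern under hypothetical names (the actual LLAMA bindings and PIConGPU plumbing differ):

#include <type_traits>

template<typename ArrayExtents, typename RecordDim>
struct SoAMapping {};  // stand-in for a real LLAMA mapping

// The "binding" leaves array extents and record dimension open:
struct BindSoAExample
{
    template<typename ArrayExtents, typename RecordDim>
    using fn = SoAMapping<ArrayExtents, RecordDim>;
};

// Param-file level: pick the binding and the extra flag in one place ...
struct ParticleFrameMappingExample : BindSoAExample
{
    inline static constexpr bool splitVector = false;
};

// ... and the frame code applies the quoted metafunction later:
template<typename Binding, typename ArrayExtents, typename RecordDim>
using MakeMapping = typename Binding::template fn<ArrayExtents, RecordDim>;

struct Extents {};
struct Particles {};
static_assert(std::is_same_v<
    MakeMapping<ParticleFrameMappingExample, Extents, Particles>,
    SoAMapping<Extents, Particles>>);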

@psychocoderHPC (Member)

> I know, I only have clang-format-15 locally on Windows. […]
>
> How about you update to clang-format-14? :)

Unfortunately, updating the formatting requirement has low priority. The only nice feature that would come with clang-format-14 is east const.

At the moment, lifting the requirement is also hard because a few students are currently trying to bring the extensions they developed into PIConGPU. Switching the code style now might introduce too much pain for them.

@psychocoderHPC (Member)

I can pull your branch and force-push the formatted version if you like.

@bernhardmgruber (Contributor, Author)

> I can pull your branch and force-push the formatted version if you like.

We can do that once I am finished with everything else. I am actively working on this branch ATM to make the IO work as well.

@BrianMarre (Member)

Our dev server still has clang-format 12.0.1; in general, you can install/load it using Spack via the corresponding LLVM package.

Also be aware that PIConGPU is even patch-level sensitive; you need clang-format 12.0.1 specifically.
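
(A minimal sketch of what that could look like with Spack, assuming the llvm package at version 12.0.1 is available in the local Spack instance; clang-format ships as part of it:)

spack install llvm@12.0.1
spack load llvm@12.0.1
clang-format --version   # should report clang-format version 12.0.1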

@bernhardmgruber (Contributor, Author)

Can I prevent the CI from running if I push changes, but keep this PR open?

@bernhardmgruber bernhardmgruber marked this pull request as draft May 4, 2023 06:44
@psychocoderHPC psychocoderHPC added the CI:no-compile CI is skipping compile/runtime tests but runs PICMI tests label May 4, 2023
@@ -22,6 +22,8 @@

#include "picongpu/simulation_defines.hpp"

#include "picongpu/param/memory.param"
Member:

Is this a good idea?

This is needed by VS to configure the project in open-folder mode.
@chillenzer (Contributor)

@ComputationalRadiationPhysics/picongpu-maintainers This is not planned anymore, is it?

@bernhardmgruber (Contributor, Author)

A certain @psychocoderHPC fearfully admitted in the past that LLAMA may be a solution too complex to be maintained within PIConGPU. Hence, I expect this PR to eventually be closed without a merge.

@psychocoderHPC (Member)

I will soon create an issue for the LLAMA integration to keep a link to this PR before we close it :-(
We do not have enough developers to maintain LLAMA and integrate it into PIConGPU :-(
