HPC Training Book v1.3.1 - Distributed computing challenges rework #65

Merged
merged 4 commits into from
May 17, 2023
1 change: 1 addition & 0 deletions src/SUMMARY.md
@@ -47,5 +47,6 @@
- [What is Distributed Computing](./chapter5/distributed-computing.md)
- [Message Passing](./chapter5/message-passing.md)
- [OpenMPI](./chapter5/openmpi.md)
- [Challenges](./chapter5/challenges.md)

[Acknowledgements](./acknowledgements.md)
72 changes: 37 additions & 35 deletions src/chapter5/challenges.md
@@ -1,52 +1,54 @@
# Challenges
# Distributed Computing Challenges

🚧 Under Construction! 🏗️
## Overview

## Install MPI for the tasks
- [Distributed Computing Challenges](#distributed-computing-challenges)
- [Overview](#overview)
- [Pre-Tasks](#pre-tasks)
- [Task 1 - Multinode 'Hello, world!'](#task-1---multinode-hello-world)
- [Task 2 - Ping Pong](#task-2---ping-pong)
- [Task 3 - Multinode Sum](#task-3---multinode-sum)
- [Task 4 - Multinode Mergesort](#task-4---multinode-mergesort)

- ```~/vf38/HPC_Training/spack/share/spack/setup-env.sh``` #configure spack
- ```spack load mpich``` #load MPICH through spack
- ```module load gcc``` #load gcc
- ```cp -r ~/vf38/HPC_Training/MPI_examples ~``` #copy tasks to your home directory
## Pre-Tasks

## Task 1: Hello World
For each task you will need to load MPICH using Spack from within your SLURM job script. There is a shared installation of Spack and MPICH within `vf38_scratch`. To load Spack and MPICH, use the following two commands within your SLURM job script before any other command.

1. Go to folder ‘hello’
2. Modify the files
3. Compile mpi_hello_world.c using makefile
4. Execute the file
```sh
. ~/vf38_scratch/spack/share/spack/setup-env.sh
spack load mpich
```

## Task 2: Ping Pong
A template SLURM job file is given at the root of the distributed challenges directory. Copy it into each challenge's sub-directory, as every challenge will require running a SLURM job. If you want to do some more experimenting, create multiple job scripts that use different numbers of nodes and compare the execution times.
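For illustration, a job script along these lines could be used; the resource values and the `./bin/challenge` binary name below are placeholder assumptions rather than the template's actual contents.

```sh
#!/bin/bash
#SBATCH --job-name=mpi-challenge
#SBATCH --nodes=2                # placeholder; vary this to compare timings
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:10:00

# Load Spack and MPICH as described above.
. ~/vf38_scratch/spack/share/spack/setup-env.sh
spack load mpich

# Launch the compiled challenge binary across the allocated nodes.
mpirun ./bin/challenge
```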

1. Go to ‘ping_pong’ folder
2. Modify the files
3. Compile the files
4. Run (world size must be two for this file)
You will also need to generate some input for the sum and mergesort challenges. This can be done by compiling and running the program in `generate.cpp`. Run the following commands to build and generate the inputs for your challenges.

The output should be similar to this; it may be slightly different due to process scheduling.
```sh
module load gcc/10.2.0
g++ -std=c++20 -o bin/generate generate.cpp
bin/generate 1000000000
```

![Ping pong](imgs/ping_pong.png)
> Note:
>
> - You do not have to worry about how to read the numbers from the file; this is already handled for you, but it is recommended to look at the read function in `read.h` and understand what it is doing.
> - The expected output of the 'sum' challenge is found in the generated `output.txt` file within the challenges directory.
> - The expected output of the 'mergesort' challenge is found in the generated `sorted.txt` file within the challenges directory. As this file contains a lot of values, a check function is provided that compares a re-sorted version of your input against your sorted output.
> - The sum and mergesort programs you will develop take a number as input: the size of the input data that your program operates on. This should be the same number as the one used with the generator program. In the template programs for this challenge, the input data is marked as a pointer called `input`.
> - Given the above setup and configuration, the input data will contain ~8 GB of data (~8.0e9 bytes), so make sure to allocate enough resources both in the programs and in the SLURM job scripts.

## Task 3: Monte Carlo
## Task 1 - Multinode 'Hello, world!'

- Run “./calcPiSeq 100000000” # make sure you use gcc for compiling serial code
- Modify calcPiMPI.c
- Run calcPiMPI 100000000 with MPI and see the difference. You can change the number of processes. However, please be mindful that you are on the login node!
  Hint: <https://www.mpich.org/static/docs/v3.3/www3/MPI_Reduce.html>
Your first task is to say 'Hello, world!' from different nodes on M3. This involves printing each node's name, rank (ID), and the total number of nodes in the MPI environment.
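As a minimal sketch of what this asks for, using the standard MPI calls (the template may structure this differently):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process' rank (ID)
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

    char name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(name, &name_len);  // name of the node we run on

    std::printf("Hello, world! from %s, rank %d of %d\n", name, rank, size);

    MPI_Finalize();
    return 0;
}
```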

## Task 4: Parallel computing task on compute nodes
## Task 2 - Ping Pong

- Submit your parallelised Monte Carlo task on compute nodes with 8 tasks
For this next task you will play a Ping-Pong game of sorts between two nodes. This involves passing a count back and forth between the two nodes, incrementing it with each exchange, until the count reaches 10.
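A sketch of one way the exchange could look, assuming exactly two ranks and an increment on each send (the template's required structure may differ):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int limit = 10;
    int count = 0;
    int partner = 1 - rank;  // rank 0 plays against rank 1

    while (count < limit) {
        if (rank == count % 2) {
            // Our turn to serve: increment, then send.
            ++count;
            MPI_Send(&count, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
            std::printf("Rank %d sent count %d\n", rank, count);
        } else {
            MPI_Recv(&count, 1, MPI_INT, partner, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            std::printf("Rank %d received count %d\n", rank, count);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Run it with exactly two processes, e.g. `mpirun -np 2 ./ping_pong`, since each rank assumes a single partner.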

## Task 5: Trapezoidal Rule Integration
## Task 3 - Multinode Sum

- Run “./seq_trap 10000000000”
- Modify calcPiMPI.c
- Run seq_MPI 100000000000 with MPI and see the difference. You can change the number of processes. However, please be mindful that you are on the login node!
Your next task is to sum the numbers in the generated `input.txt` file across ten nodes. This involves summing 1,000,000,000 floats together. The rough expected output is contained in the `output.txt` file. Remember, the input array is already given in the template file.
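A minimal sketch of the scatter/reduce pattern this task suggests. The stand-in data and sizes below are assumptions for the sake of a self-contained example; in the template, `input` is already read from `input.txt` for you.

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000000;  // stand-in size; the real task uses 1,000,000,000
    std::vector<float> input;
    if (rank == 0) {
        // Stand-in data; in the template, `input` already holds the
        // numbers read from input.txt.
        input.assign(n, 1.0f);
    }

    const int chunk = n / size;  // assumes size divides n evenly
    std::vector<float> local(chunk);

    // Give every rank an equal slice of the input.
    MPI_Scatter(input.data(), chunk, MPI_FLOAT,
                local.data(), chunk, MPI_FLOAT, 0, MPI_COMM_WORLD);

    // Each rank sums its own slice.
    double local_sum = 0.0;
    for (float x : local) local_sum += x;

    // Combine the partial sums on rank 0.
    double total = 0.0;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) std::printf("Sum: %.2f\n", total);

    MPI_Finalize();
    return 0;
}
```

Accumulating into a `double` keeps the rounding error smaller than summing a billion values in `float`.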

## Task 6: Bonus: Merge Sort
## Task 4 - Multinode Mergesort

- This task is quite challenging to parallelise yourself
- Please refer to the answer and check if you can understand it <https://selkie-macalester.org/csinparallel/modules/MPIProgramming/build/html/mergeSort/mergeSort.html>

Additional resources: <https://selkie-macalester.org/csinparallel/modules/MPIProgramming/build/html/index.html>
Your final task is to sort the numbers from the input file `unsorted.txt` using a distributed version of mergesort. This will involve ten nodes each running their own mergesort on an individual chunk of the input data, followed by a final merge of the intermediate results. Remember, the input array is already given in the template file.
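A sketch of one way to structure this (scatter, local sort, gather, final merge). The stand-in data is an assumption, and `std::sort` stands in for a local mergesort; in the template, `input` is already read from `unsorted.txt` for you.

```cpp
#include <mpi.h>
#include <algorithm>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000000;  // stand-in size for a self-contained example
    std::vector<float> input;
    if (rank == 0) {
        // Stand-in data; in the template, `input` already holds the
        // numbers read from unsorted.txt.
        input.resize(n);
        for (int i = 0; i < n; ++i) input[i] = static_cast<float>((n - i) % 97);
    }

    const int chunk = n / size;  // assumes size divides n evenly
    std::vector<float> local(chunk);
    MPI_Scatter(input.data(), chunk, MPI_FLOAT,
                local.data(), chunk, MPI_FLOAT, 0, MPI_COMM_WORLD);

    // Each node sorts its own chunk independently.
    std::sort(local.begin(), local.end());

    // Collect the sorted chunks back on rank 0, in rank order.
    std::vector<float> sorted(rank == 0 ? n : 0);
    MPI_Gather(local.data(), chunk, MPI_FLOAT,
               sorted.data(), chunk, MPI_FLOAT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        // Final pass: fold the sorted chunks together one at a time.
        for (int merged = chunk; merged < n; merged += chunk) {
            std::inplace_merge(sorted.begin(), sorted.begin() + merged,
                               sorted.begin() + merged + chunk);
        }
        std::printf("Sorted: %s\n",
                    std::is_sorted(sorted.begin(), sorted.end()) ? "yes" : "no");
    }

    MPI_Finalize();
    return 0;
}
```

Folding the chunks in pairwise like this is the simplest final step; a k-way merge would touch each element fewer times but is more involved to write.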