From 0e1e5b988a5846d60ed11fdc622cc4efb216f540 Mon Sep 17 00:00:00 2001
From: oraqlle <41113853+oraqlle@users.noreply.github.com>
Date: Mon, 15 May 2023 19:19:42 +1000
Subject: [PATCH 1/3] Added Chapter 5 challenges in book SUMMARY

---
 src/SUMMARY.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/SUMMARY.md b/src/SUMMARY.md
index 567bcf0..ff51d94 100644
--- a/src/SUMMARY.md
+++ b/src/SUMMARY.md
@@ -47,5 +47,6 @@
   - [What is Distributed Computing](./chapter5/distributed-computing.md)
   - [Message Passing](./chapter5/message-passing.md)
   - [OpenMPI](./chapter5/openmpi.md)
+  - [Challenges](./chapter5/challenges.md)

 [Acknowledgements](./acknowledgements.md)

From 1985e794509de87e7d093f5e623666efe72af332 Mon Sep 17 00:00:00 2001
From: oraqlle <41113853+oraqlle@users.noreply.github.com>
Date: Wed, 17 May 2023 09:01:07 +1000
Subject: [PATCH 2/3] Updated distributed computing challenges README.

---
 src/chapter5/challenges.md | 70 +++++++++++++++++++-------------------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/src/chapter5/challenges.md b/src/chapter5/challenges.md
index cc62473..9e9c54a 100644
--- a/src/chapter5/challenges.md
+++ b/src/chapter5/challenges.md
@@ -1,52 +1,52 @@
-# Challenges
+# Distributed Computing Challenges

-🚧 Under Construction! 🏗️
+## Overview

-## Install MPI for the tasks
+- [Distributed Computing Challenges](#distributed-computing-challenges)
+  - [Overview](#overview)
+  - [Pre-Tasks](#pre-tasks)
+  - [Task 1 - Multinode 'Hello, world!'](#task-1---multinode-hello-world)
+  - [Task 2 - Ping Pong](#task-2---ping-pong)
+  - [Task 3 - Multinode Sum](#task-3---multinode-sum)
+  - [Task 4 - Multinode Mergesort](#task-4---multinode-mergesort)

-- ```~/vf38/HPC_Training/spack/share/spack/setup-env.sh``` #configure spack
-- ```spack load mpich``` #load MPICH through spack
-- ```module load gcc``` #load gcc
-- ```cp -r ~/vf38/HPC_Training/MPI_examples ~``` #copy tasks to your home directory
+## Pre-Tasks

-## Task 1: Hello World
+For each task you will need to load MPICH using Spack from within your SLURM job script. There is a shared installation of Spack and MPICH within `vf38_scratch`. To load Spack and MPICH, use the following two commands in your SLURM job script before any other commands.

-1. Go to folder ‘hello’
-2. Modify the files
-3. Compile mpi_hello_world.c using makefile
-4. Execute the file
+```sh
+. ~/vf38_scratch/spack/share/spack/setup-env.sh
+spack load mpich
+```

-## Task 2: Ping Pong
+A template SLURM job script is provided at the root of the distributed computing challenges directory. Copy it into each challenge's sub-directory, as every challenge requires running a SLURM job. If you want to experiment further, create multiple job scripts that use different numbers of nodes and compare the execution times.

-1. Go to ‘ping_pong’ folder
-2. Modify the files
-3. Compile the files
-4. Run (world size must be two for this file)
+You will also need to generate some input for the sum and mergesort challenges. This can be done by compiling and running the program in `generate.cpp`. Run the following commands to build the generator and create the inputs for your challenges.

-Output should be similar to this.
-May be slightly different due to process scheduling
+```sh
+module load gcc/10.2.0
+g++ -std=c++20 -o bin/generate generate.cpp
+bin/generate 1000000000
+```

-![Ping pong](imgs/ping_pong.png)
+> Note:
+>
+> - You do not have to worry about how to read the numbers from the file; this is handled for you already, but it is recommended to look at the read function in `read.h` and understand what it does.
+> - The expected output of the 'sum' challenge is found in the generated `output.txt` file within the challenges directory.
+> - The expected output of the 'mergesort' challenge is found in the generated `sorted.txt` file within the challenges directory; however, this file contains a lot of values, so a check function is provided that compares a re-sorted version of your input against your sorted output.

-## Task 3: Monte Carlo
+## Task 1 - Multinode 'Hello, world!'

-- Run “./calcPiSeq 100000000” # make sure you use gcc for compiling serial code
-- Modify calcPiMPI.c
-- Run calcPiMPI 100000000 with mpi and see the difference. You can change the number of processes. However, please be mindful that you are in login node!
-Hint: #
+Your first task is to say 'Hello, world!' from different nodes on M3. This involves printing the node's name, rank (ID) and the total number of nodes in the MPI environment.

-## Task 4: Parallel computing task on compute nodes
+## Task 2 - Ping Pong

-- Submit your parallelised Monte Carlo task on compute nodes with 8 tasks
+For this next task you will play a Ping-Pong game of sorts between two nodes. This involves passing a count back and forth between the two nodes, incrementing it on each send and receive. By the end, the count should have reached 10.

-## Task 5: Trapezoidal Rule Integration
+## Task 3 - Multinode Sum

-- Run “./seq_trap 10000000000”
-- Modify calcPiMPI.c
-- Run seq_MPI 100000000000 with mpi and see the difference. You can change the number of processes. However, please be mindful that you are in login node!
+Your next task is to sum the numbers in the generated `input.txt` file across ten nodes. This involves summing 1,000,000,000 floats. The approximate expected result is contained in the `output.txt` file. Remember, the input array is already provided in the template file.

-## Task 6: Bonus: Merge Sort
+## Task 4 - Multinode Mergesort

-- This task is quite challenging to parallelise it yourself
-- Please refer to the answer and check if you can understand it
-
-Additional resources:
+Your final task is to sort the numbers from the input file `unsorted.txt` using a distributed version of mergesort. This involves ten nodes each running mergesort on their own chunk of the input data, followed by a final mergesort of the intermediate results. Remember, the input array is already provided in the template file.

From c5efccf4233c3ac634cdfc4c39b593b23feb4abf Mon Sep 17 00:00:00 2001
From: oraqlle <41113853+oraqlle@users.noreply.github.com>
Date: Thu, 18 May 2023 00:40:48 +1000
Subject: [PATCH 3/3] Small update to distrib. challenges spec

---
 src/chapter5/challenges.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/chapter5/challenges.md b/src/chapter5/challenges.md
index 9e9c54a..e3c3cc5 100644
--- a/src/chapter5/challenges.md
+++ b/src/chapter5/challenges.md
@@ -34,6 +34,8 @@ bin/generate 1000000000
 > - You do not have to worry about how to read the numbers from the file; this is handled for you already, but it is recommended to look at the read function in `read.h` and understand what it does.
 > - The expected output of the 'sum' challenge is found in the generated `output.txt` file within the challenges directory.
 > - The expected output of the 'mergesort' challenge is found in the generated `sorted.txt` file within the challenges directory; however, this file contains a lot of values, so a check function is provided that compares a re-sorted version of your input against your sorted output.
+> - The sum and mergesort programs you will develop take a number as input. This is the size of the input data your programs operate on, and it should be the same number as the one passed to the generator program. In the template programs for these challenges, the data is provided as a pointer named `input`.
+> - Given the above setup and configuration, the input data will be roughly 8 GB (~8.0e9 bytes) in size, so make sure to allocate enough resources both in the programs and in the SLURM job scripts.

 ## Task 1 - Multinode 'Hello, world!'
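For a concrete picture of what Task 1 in the patched challenges page asks for, the sketch below prints each process' node name, rank (ID) and the world size using the standard MPI C API. It is a minimal illustration only, not the challenge template from the repository: the file name, build command and launch command are assumptions, and it presumes MPICH has been loaded as described in the Pre-Tasks section.

```cpp
// hello_sketch.cpp: a minimal sketch only; the actual challenge template in the
// repository may differ. Assumes MPICH is loaded as described in the Pre-Tasks.
//
// Build:  mpicxx -o hello_sketch hello_sketch.cpp
// Run:    srun ./hello_sketch        (from within a SLURM job allocation)
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);                          // start the MPI environment

    int world_size = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);      // total number of ranks in the job

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);            // this process' rank (ID)

    char node_name[MPI_MAX_PROCESSOR_NAME];
    int name_len = 0;
    MPI_Get_processor_name(node_name, &name_len);    // name of the node this rank runs on

    std::printf("Hello, world! from node %s, rank %d of %d\n",
                node_name, rank, world_size);

    MPI_Finalize();                                  // shut down the MPI environment
    return 0;
}
```

Task 2's ping-pong can reuse the same skeleton, with ranks 0 and 1 alternately calling `MPI_Send` and `MPI_Recv` on a single integer counter until it reaches 10.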