101 changes: 95 additions & 6 deletions README.md
@@ -3,12 +3,101 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* (TODO) YOUR NAME HERE
* (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
* Yuru Wang
* [email protected]
* Tested on: Windows 10, i7-7700HQ @ 2.5GHz 128GB, GTX 1050 Ti 8GB (personal computer)
* Modified CMakeLists.txt: changed sm_20 to sm_61 inside cuda_add_library

### (TODO: Your README)
## Project Description ##
This project implements a GPU stream compaction algorithm in CUDA. Here, compaction simply removes all zeros from an array of ints. To compare and analyze GPU versus CPU performance, several versions of the Scan (Prefix Sum) algorithm are implemented and then combined with a scatter step to perform stream compaction.

Include analysis, etc. (Remember, this is public, so don't put
anything here that you don't want to share with the world.)
A list of features included in this project is as follows:
* CPU Scan: single-for-loop implementation
* CPU Stream Compaction: single-for-loop implementation
* Naive GPU Scan: GPU implementation of the naive parallel scan
* Work-Efficient GPU Scan: GPU work-efficient parallel scan with up-sweep and down-sweep phases
* Work-Efficient GPU Stream Compaction: the work-efficient GPU scan combined with a scatter kernel
* Thrust Implementation: uses the built-in thrust::exclusive_scan function

Extra Credit:
Part 5 (optimize the GPU work-efficient scan) was implemented. The optimized scan clearly outperforms the original; a detailed analysis and performance comparison appear in the next section.

## Performance Analysis and Questions ##
![](img/blockSize.jpg)
As the diagram above shows, block size has little effect on performance, so a block size of 512 was chosen for comparing the various implementations.
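Given a fixed block size, the number of blocks per grid follows from the array size by the usual rounding-up division; a minimal sketch (the constant name `blockSize` matches `common.h`, the helper name is illustrative):

```cpp
#include <cassert>

const int blockSize = 512;

// Round up so every element gets a thread, even when n is not a
// multiple of blockSize.
int fullBlocksPerGrid(int n) {
    return (n + blockSize - 1) / blockSize;
}
```

A kernel is then launched as `kern<<<fullBlocksPerGrid(n), blockSize>>>(...)`, with an `index >= n` guard inside the kernel to discard the spare threads of the last block.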


![](img/scan_comparison.jpg)
The plot above shows a rough performance ranking among the implementations: Thrust is fastest, followed by the work-efficient GPU scan, then the naive GPU scan, with the CPU scan slowest.

When the array size is small (less than 2^20), there is little performance difference among the implementations; the CPU scan is actually fastest, likely because the GPU's kernel-launch and memory overhead outweighs the benefit of parallelism on small inputs. As the array grows, performance diverges: the work-efficient GPU scan overtakes the CPU scan once the advantage of parallel computation exceeds its overhead.

The Thrust implementation performs well even for large arrays, likely because it uses shared memory, which is much faster to access than global memory.
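For reference, `thrust::exclusive_scan` computes the same result as the C++17 standard algorithm of the same name, just on the device; a host-side sketch of the semantics (the wrapper name is illustrative):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

std::vector<int> exclusiveScan(const std::vector<int>& in) {
    std::vector<int> out(in.size());
    // Each output element is the sum of all *preceding* inputs;
    // the first element is the identity, 0.
    std::exclusive_scan(in.begin(), in.end(), out.begin(), 0);
    return out;
}
```

The project's Thrust variant would typically copy the input into a `thrust::device_vector` and call `thrust::exclusive_scan` on its iterators, letting Thrust pick the kernel configuration.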


![](img/compact.jpg)
This graph shows that GPU compaction with scan outperforms the other compaction implementations.


![](img/optimization.jpg)
The graph above shows the performance improvement of the GPU work-efficient scan after optimization. Scan time improves by more than 2x, especially for large arrays exceeding 2^26 elements. This is achieved by reducing idle threads in the up-sweep and down-sweep phases. The old implementation was slow because at each level of the sweep most launched threads did no work, wasting SM resources. By launching only as many threads as there are active nodes at each level and remapping the compact thread indices onto the array, the SMs stay fully occupied and performance improves.
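The index remapping can be illustrated with the up-sweep phase: at depth `d` only `n >> (d + 1)` threads are launched, and each compact thread index is mapped back to the array offset it updates. A host-side sketch of the arithmetic (illustrative simulation, not the project's kernel):

```cpp
#include <cassert>
#include <vector>

// Up-sweep (reduce) phase of the work-efficient scan on a
// power-of-two-sized array. At each stride only `active` "threads"
// exist; each one is remapped to the node it must update, so no
// thread sits idle.
void upSweep(std::vector<int>& data) {
    int n = static_cast<int>(data.size());
    for (int stride = 1; stride < n; stride *= 2) {
        int active = n / (2 * stride);            // threads at this depth
        for (int tid = 0; tid < active; ++tid) {  // compact thread index
            int idx = (tid + 1) * 2 * stride - 1; // remap to array offset
            data[idx] += data[idx - stride];
        }
    }
}
```

After the up-sweep, the last element holds the total sum; the down-sweep uses the same remapping in reverse to turn the tree into an exclusive scan.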

## Output ##
blockSize = 512, ArraySize = 2^26

```
****************
** SCAN TESTS **
****************
[ 29 28 49 36 34 5 16 37 28 9 17 18 23 ... 21 0 ]
==== cpu scan, power-of-two ====
elapsed time: 343.553ms (std::chrono Measured)
[ 0 29 57 106 142 176 181 197 234 262 271 288 306 ... 1643656586 1643656607 ]
==== cpu scan, non-power-of-two ====
elapsed time: 170.328ms (std::chrono Measured)
[ 0 29 57 106 142 176 181 197 234 262 271 288 306 ... 1643656538 1643656552 ]
passed
==== naive scan, power-of-two ====
elapsed time: 229.179ms (CUDA Measured)
[ 0 29 57 106 142 176 181 197 234 262 271 288 306 ... 1643656586 1643656607 ]
passed
==== naive scan, non-power-of-two ====
elapsed time: 228.234ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 59.0043ms (CUDA Measured)
[ 0 29 57 106 142 176 181 197 234 262 271 288 306 ... 1643656586 1643656607 ]
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 59.0076ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 12.3873ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 9.8304ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 2 0 2 3 3 0 2 0 2 2 2 0 3 ... 3 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 191.089ms (std::chrono Measured)
[ 2 2 3 3 2 2 2 2 3 2 2 3 1 ... 3 3 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 213.995ms (std::chrono Measured)
[ 2 2 3 3 2 2 2 2 3 2 2 3 1 ... 2 3 ]
passed
==== cpu compact with scan ====
elapsed time: 536.685ms (std::chrono Measured)
[ 2 2 3 3 2 2 2 2 3 2 2 3 1 ... 3 3 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 80.1091ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 80.2937ms (CUDA Measured)
passed
```
Binary file added img/blockSize.jpg
Binary file added img/compact.jpg
Binary file added img/optimization.jpg
Binary file added img/scan_comparison.jpg
16 changes: 8 additions & 8 deletions src/main.cpp
@@ -13,7 +13,7 @@
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
const int SIZE = 1 << 26; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int *a = new int[SIZE];
int *b = new int[SIZE];
@@ -51,14 +51,14 @@ int main(int argc, char* argv[]) {
printDesc("naive scan, power-of-two");
StreamCompaction::Naive::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

/* For bug-finding only: Array of 1s to help find bugs in stream compaction or scan
onesArray(SIZE, c);
printDesc("1s array for finding bugs");
StreamCompaction::Naive::scan(SIZE, c, a);
printArray(SIZE, c, true); */
//For bug-finding only: Array of 1s to help find bugs in stream compaction or scan
//onesArray(SIZE, c);
//printDesc("1s array for finding bugs");
//StreamCompaction::Naive::scan(SIZE, c, a);
//printArray(SIZE, c, true);

zeroArray(SIZE, c);
printDesc("naive scan, non-power-of-two");
@@ -71,7 +71,7 @@ int main(int argc, char* argv[]) {
printDesc("work-efficient scan, power-of-two");
StreamCompaction::Efficient::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
2 changes: 1 addition & 1 deletion stream_compaction/CMakeLists.txt
@@ -13,5 +13,5 @@ set(SOURCE_FILES

cuda_add_library(stream_compaction
${SOURCE_FILES}
OPTIONS -arch=sm_20
OPTIONS -arch=sm_61
)
20 changes: 20 additions & 0 deletions stream_compaction/common.cu
@@ -24,6 +24,17 @@ namespace StreamCompaction {
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) {
return;
}

if (idata[index] == 0) {
bools[index] = 0;
}
else {
bools[index] = 1;
}
}

/**
@@ -33,6 +44,15 @@ namespace StreamCompaction {
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) {
return;
}

if (bools[index] == 1) {
int idx = indices[index];
odata[idx] = idata[index];
}
}

}
2 changes: 2 additions & 0 deletions stream_compaction/common.h
@@ -13,6 +13,8 @@
#define FILENAME (strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : __FILE__)
#define checkCUDAError(msg) checkCUDAErrorFn(msg, FILENAME, __LINE__)

const int blockSize = 512;

/**
* Check for CUDA errors; print and exit if there was a problem.
*/
55 changes: 47 additions & 8 deletions stream_compaction/cpu.cu
@@ -1,15 +1,15 @@
#include <cstdio>
#include "cpu.h"

#include "common.h"
#include "common.h"

namespace StreamCompaction {
namespace CPU {
using StreamCompaction::Common::PerformanceTimer;
PerformanceTimer& timer()
{
static PerformanceTimer timer;
return timer;
using StreamCompaction::Common::PerformanceTimer;
PerformanceTimer& timer()
{
static PerformanceTimer timer;
return timer;
}

/**
@@ -20,6 +20,11 @@ namespace StreamCompaction {
void scan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
odata[0] = 0;
for (int i = 1; i < n; ++i) {
odata[i] = odata[i - 1] + idata[i - 1];
}

timer().endCpuTimer();
}

@@ -31,8 +36,16 @@ namespace StreamCompaction {
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int count = 0;
for (int i = 0; i < n; ++i) {
if (idata[i] != 0) {
odata[count] = idata[i];
count++;
}
}

timer().endCpuTimer();
return -1;
return count;
}

/**
@@ -43,8 +56,34 @@ namespace StreamCompaction {
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int *mapping = new int[n];
for (int i = 0; i < n; ++i) {
if (idata[i] == 0) {
mapping[i] = 0;
}
else {
mapping[i] = 1;
}
}

int *scanned = new int[n];
scanned[0] = 0;
for (int i = 1; i < n; ++i) {
scanned[i] = scanned[i - 1] + mapping[i - 1];
}

int count = (n > 0) ? scanned[n - 1] + mapping[n - 1] : 0;
for (int i = 0; i < n; ++i) {
if (mapping[i] != 0) {
odata[scanned[i]] = idata[i];
}
}

delete[] mapping;
delete[] scanned;

timer().endCpuTimer();
return -1;
return count;
}
}
}