125 changes: 119 additions & 6 deletions README.md
@@ -3,12 +3,125 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* (TODO) YOUR NAME HERE
* (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
* Name: Bowen Yang
* [LinkedIn](https://www.linkedin.com/in/%E5%8D%9A%E6%96%87-%E6%9D%A8-83bba6148)
* [GitHub](https://github.com/Grillnov)
* [Facebook](https://www.facebook.com/yang.bowen.7399)
* [Steam](https://steamcommunity.com/id/grillnov)
* Tested on: Windows 10 x64, i7-6800K @ 3.40GHz 32GB, GTX 1080 8GB (Personal computer at home)

### (TODO: Your README)
**Description**

Include analysis, etc. (Remember, this is public, so don't put
anything here that you don't want to share with the world.)
Stream compaction implemented from scratch. In our case, we filter out every element that equals 0; analogously, these are the terminated (dead) paths in a list of rays.

*Part 1: CPU implementation*
*Part 2: Brute force naive reduction on GPU*
*Part 3: Efficient implementation that's actually not so efficient*
*Part 4: Thrust*

**Issues**

As always, I changed the `sm_20` option to `sm_60` so the project compiles under nvcc 9.2.

**Performance Test**

**Scan**

*When the element count is an exact power of two*

![](img/2Power.png)

*When the element count is not a power of two*

![](img/n2power.png)

**Compaction**

*When the element count is an exact power of two*

![](img/compact2Power.png)

*When the element count is not a power of two*

![](img/compactn2power.png)

**Thrust implementation**

*When the element count is an exact power of two*

![](img/thrust2power.png)

*When the element count is not a power of two*

![](img/thrustn2power.png)

**Results**

```
****************
** SCAN TESTS **
****************
[ 34 23 45 34 24 43 35 44 26 22 13 28 37 ... 47 0 ]
==== cpu scan, power-of-two ====
elapsed time: 0.001204ms (std::chrono Measured)
[ 0 34 57 102 136 160 203 238 282 308 330 343 371 ... 6116 6163 ]
==== cpu scan, non-power-of-two ====
elapsed time: 0.001205ms (std::chrono Measured)
[ 0 34 57 102 136 160 203 238 282 308 330 343 371 ... 6064 6085 ]
passed
==== naive scan, power-of-two ====
elapsed time: 0.033792ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 0.032896ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 0.123904ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 0.126976ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 13.1891ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 0.96256ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 0 3 3 0 1 2 0 2 2 2 1 2 2 ... 3 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 0.001506ms (std::chrono Measured)
[ 3 3 1 2 2 2 2 1 2 2 3 2 1 ... 3 3 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 0.002108ms (std::chrono Measured)
[ 3 3 1 2 2 2 2 1 2 2 3 2 1 ... 2 3 ]
passed
==== cpu compact with scan ====
elapsed time: 0.003313ms (std::chrono Measured)
[ 3 3 1 2 2 2 2 1 2 2 3 2 1 ... 3 3 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 0.23984ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 0.238592ms (CUDA Measured)
passed
```

**Why the *efficient* implementation is not so efficient**

From the charts we find the surprising fact that the *efficient* implementation is actually not that efficient.
There are three main reasons for this:

**I/O intensive**
The efficient implementation has to write the identity element into the GPU buffer and read the last element of the buffer back to the host. Each of these tiny transfers forces a host-device synchronization, which is harmful to performance.
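As a hedged sketch of where those synchronizations come from (the buffer names `dev_bools`, `dev_indices` and the count `n` are assumptions, not the actual project code):

```cuda
// Hedged sketch -- not the project's actual buffer names.
int zero = 0, lastBool = 0, lastIdx = 0;
// Tiny host->device write: zero the root before the down-sweep.
cudaMemcpy(dev_indices + n - 1, &zero, sizeof(int), cudaMemcpyHostToDevice);
// ... down-sweep kernel launches here ...
// Tiny device->host reads: each one blocks the host until every
// preceding kernel on the stream has finished.
cudaMemcpy(&lastBool, dev_bools + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
cudaMemcpy(&lastIdx, dev_indices + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
int count = lastBool + lastIdx;  // number of surviving elements
```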

**The reduce algorithm itself**
As the sweep descends through the levels, the stride doubles each time, an access pattern that all caches hate: spatial locality becomes terrible at the deeper levels.

**Thread scheduling and divergence**
As the levels go deeper, more and more threads become idle and do no useful work; even worse, as the stride grows, branch divergence within warps becomes unacceptable.
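One common mitigation (a sketch, not the implementation in this PR): launch only as many threads as there are active nodes per level and remap consecutive thread indices onto the strided positions, so warps retire wholesale instead of diverging.

```cuda
// Sketch of a divergence-friendly launch: thread t handles the t-th active
// node of this level, so consecutive threads map onto strided positions and
// whole warps stay either active or finished together.
__global__ void kernUpSweepLevel(int n, int stride, int* data) {
    int t = threadIdx.x + blockIdx.x * blockDim.x;
    int i = (t + 1) * 2 * stride - 1;  // consecutive t -> strided index i
    if (i >= n) return;
    data[i] += data[i - stride];
}
// Host side (hypothetical): for (stride = 1; stride < n; stride *= 2),
// launch with roughly (n / (2 * stride) + BLOCKSIZE - 1) / BLOCKSIZE blocks
// instead of one thread per element.
```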
Binary file added img/2Power.png
Binary file added img/compact2Power.png
Binary file added img/compactn2power.png
Binary file added img/n2power.png
Binary file added img/thrust2power.png
Binary file added img/thrustn2power.png
2 changes: 1 addition & 1 deletion stream_compaction/CMakeLists.txt
@@ -13,5 +13,5 @@ set(SOURCE_FILES

cuda_add_library(stream_compaction
${SOURCE_FILES}
OPTIONS -arch=sm_20
OPTIONS -arch=sm_60
)
29 changes: 28 additions & 1 deletion stream_compaction/common.cu
@@ -24,6 +24,20 @@ namespace StreamCompaction {
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
// TODO
int idx = threadIdx.x + blockIdx.x * blockDim.x;
if (idx >= n)
{
return;
}

if (idata[idx] == MEETCRITERION)
{
bools[idx] = HAS_MET;
}
else
{
bools[idx] = NOT_MET;
}
}

/**
@@ -33,7 +47,20 @@ namespace StreamCompaction {
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// TODO
}
int idx = threadIdx.x + blockIdx.x * blockDim.x;
if (idx >= n)
{
return;
}

if (idx == n - 1)
{
odata[indices[idx]] = idata[idx];
}
else if (indices[idx] != indices[idx + 1])
{
odata[indices[idx]] = idata[idx];
}
}
}
}
6 changes: 6 additions & 0 deletions stream_compaction/common.h
@@ -130,3 +130,9 @@ namespace StreamCompaction {
};
}
}

// Attribute flags describing whether an element meets the removal criterion (== MEETCRITERION).
// NOT_MET (1) marks an element to keep; HAS_MET (0) marks one to discard.
enum ElementAttribute { HAS_MET, NOT_MET };
#define MEETCRITERION 0
#define BLOCKSIZE 1024
72 changes: 63 additions & 9 deletions stream_compaction/cpu.cu
@@ -1,15 +1,15 @@
#include <cstdio>
#include "cpu.h"

#include "common.h"
#include "common.h"

namespace StreamCompaction {
namespace CPU {
using StreamCompaction::Common::PerformanceTimer;
PerformanceTimer& timer()
{
static PerformanceTimer timer;
return timer;
using StreamCompaction::Common::PerformanceTimer;
PerformanceTimer& timer()
{
static PerformanceTimer timer;
return timer;
}

/**
@@ -20,6 +20,14 @@ namespace StreamCompaction {
void scan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
// Insert the identity as the first element of the exclusive scan
odata[0] = 0;
// Compute the sums
for (int i = 1; i != n; ++i)
{
odata[i] = idata[i - 1] + odata[i - 1];
}

timer().endCpuTimer();
}

@@ -31,8 +39,18 @@ namespace StreamCompaction {
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int remaining = 0;
for (int i = 0; i != n; ++i)
{
if (idata[i] != 0)
{
odata[remaining] = idata[i];
++remaining;
}
}

timer().endCpuTimer();
return -1;
return remaining;
}

/**
@@ -41,10 +59,46 @@ namespace StreamCompaction {
* @returns the number of elements remaining after compaction.
*/
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();

int* hasChanged = new int[n];
int* sum = new int[n];

timer().startCpuTimer();
// TODO

for (int i = 0; i != n; ++i)
{
if (idata[i] == MEETCRITERION)
{
hasChanged[i] = HAS_MET;
}
else
{
hasChanged[i] = NOT_MET;
}
}

sum[0] = 0;
for (int i = 1; i != n; ++i)
{
sum[i] = hasChanged[i - 1] + sum[i - 1];
}
int remaining = 0;
for (int i = 0; i < n; ++i)
{
if (hasChanged[i] == NOT_MET)
{
odata[sum[i]] = idata[i];
++remaining;
}
}

timer().endCpuTimer();
return -1;

delete[] hasChanged;
delete[] sum;

return remaining;
}
}
}