<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
xmlns:atom="http://www.w3.org/2005/Atom"
xmlns:content="http://purl.org/rss/1.0/modules/content/">
<channel>
<title>CVision</title>
<link>http://yoursite.com/</link>
<atom:link href="/rss2.xml" rel="self" type="application/rss+xml"/>
<description>Dreams don't work unless you DO</description>
<pubDate>Wed, 08 Mar 2023 09:22:50 GMT</pubDate>
<generator>http://hexo.io/</generator>
<item>
<title>Running Notes for Rigid 3D Scene Flow</title>
<link>http://yoursite.com/2022/08/20/Rigid3DSceneFlow/</link>
<guid>http://yoursite.com/2022/08/20/Rigid3DSceneFlow/</guid>
<pubDate>Fri, 19 Aug 2022 16:00:00 GMT</pubDate>
<description>
<blockquote>
<p>Abstract: Scene flow, the 3D counterpart of optical flow, represents how each point in an image or point cloud moves between two consecutive frames. A CVPR 2021 paper presents a weakly supervised approach to rigid 3D scene flow. This article documents the process of running the official code.</p>
</blockquote>
</description>
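The scene-flow definition in the abstract can be made concrete with a toy example (my own illustration, not taken from the paper or the official code): scene flow assigns each 3D point its displacement between two consecutive frames.

```python
import numpy as np

# Two tiny "frames": the same two 3D points observed at time t and t+1.
frame_t = np.array([[0.0, 0.0, 5.0],
                    [1.0, 0.0, 5.0]])
frame_t1 = np.array([[0.1, 0.0, 4.9],
                     [1.0, 0.2, 5.0]])

# Scene flow is simply the per-point 3D displacement between the frames
# (assuming known correspondences; estimating it from raw data is the hard part).
scene_flow = frame_t1 - frame_t
print(scene_flow)
```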
<content:encoded><![CDATA[<blockquote><p>Abstract: Scene flow, the 3D counterpart of optical flow, represents how each point in an image or point cloud moves between two consecutive frames. A CVPR 2021 paper presents a weakly supervised approach to rigid 3D scene flow. This article documents the process of running the official code.</p></blockquote><a id="more"></a>
<h1 id="1-environment-setup"><a href="#1-environment-setup" class="headerlink" title="1. Environment Setup"></a>1. Environment Setup</h1>
<p>First, set up the environment following the <a href="https://github.com/zgojcic/Rigid3DSceneFlow" target="_blank" rel="noopener">official Rigid3DSceneFlow documentation</a>:</p>
<pre><code>export CXX=g++-7
conda config --append channels conda-forge
conda create --name rigid_3dsf python=3.7
source activate rigid_3dsf
conda install --file requirements.txt
conda install -c open3d-admin open3d=0.9.0.0
conda install -c intel scikit-learn
</code></pre>
<p>Note: on my machine, everything is installed into an environment named <code>py3-mink</code> instead:</p>
<pre><code>conda create -n py3-mink python=3.7
conda activate py3-mink
</code></pre>
<h1 id="2-building-minkowskiengine"><a href="#2-building-minkowskiengine" class="headerlink" title="2. Building MinkowskiEngine"></a>2. Building MinkowskiEngine</h1>
<p>Next, install <a href="https://github.com/NVIDIA/MinkowskiEngine" target="_blank" rel="noopener">MinkowskiEngine</a>.</p>
<p>Start with PyTorch. My machine runs CUDA 11.3, so a matching PyTorch build is required.</p>
<p>First caveat: do not use the conda command from the <a href="https://pytorch.org/" target="_blank" rel="noopener">PyTorch website</a>; it installs the CPU build even if you select GPU, so use pip instead.</p>
<p>Second caveat: do not install the latest PyTorch (1.11.0) against CUDA 11.3, because MinkowskiEngine then fails to compile (this problem cost me more than a week). In practice, the versions below do compile and can be found at <a href="https://pytorch.org/get-started/previous-versions/" target="_blank" rel="noopener">INSTALLING PREVIOUS VERSIONS OF PYTORCH</a>:</p>
<pre><code>conda install openblas-devel -c anaconda
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
</code></pre>
<p>Before building, also confirm your g++ version: g++-7 is required (my machine previously had g++-5). For installing g++-7 alongside other versions and switching between them, see <a href="https://blog.csdn.net/YuYunTan/article/details/84205462" target="_blank" rel="noopener">this guide on keeping multiple g++ versions side by side</a>.</p>
<p>Then build MinkowskiEngine:</p>
<pre><code># Install MinkowskiEngine

# Uncomment the following line to specify the cuda home. Make sure `$CUDA_HOME/nvcc --version` is 11.X
# export CUDA_HOME=/usr/local/cuda-11.1
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" --install-option="--blas=openblas"

# Or if you want local MinkowskiEngine
git clone https://github.com/NVIDIA/MinkowskiEngine.git
cd MinkowskiEngine
python setup.py install --blas_include_dirs=${CONDA_PREFIX}/include --blas=openblas
</code></pre>
<h1 id="3-running-rigid3dsceneflow"><a href="#3-running-rigid3dsceneflow" class="headerlink" title="3. Running Rigid3DSceneFlow"></a>3. Running Rigid3DSceneFlow</h1>
<p>That completes the environment setup. Downloading the data and models and running the code can then follow the repository's own instructions without major problems.</p>
<p>The one thing to watch out for: running <code>eval.py</code> raises <code>AttributeError: module 'open3d' has no attribute 'pipelines'</code>, caused by an Open3D API change. The fix is to search the code for occurrences of <code>o3d.pipelines.xxx</code> and remove the <code>.pipelines</code> part, turning them into <code>o3d.xxx</code>. See <a href="https://www.freesion.com/article/34651255448/" target="_blank" rel="noopener">these notes on testing Open3D on Ubuntu 16</a>.</p>
<p>The project cannot visualize its results, however, and ships no visualization code, so <strong>you have to write it yourself [TODO]</strong>, displaying the point clouds with <a href="https://www.keyshot.com/" target="_blank" rel="noopener">KeyShot</a>; for the rough workflow see <a href="https://github.com/zgojcic/Rigid3DSceneFlow/issues/3" target="_blank" rel="noopener">https://github.com/zgojcic/Rigid3DSceneFlow/issues/3</a>.</p>]]></content:encoded>
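Since the repository ships no visualization code (the [TODO] above), here is a minimal sketch of the kind of helper one could write before importing results into KeyShot: it writes a point cloud, colored by flow magnitude, to an ASCII PLY file. The function name and the red-blue color ramp are my own assumptions, not part of the official project.

```python
import numpy as np

def export_flow_ply(points, flow, path):
    """Write points colored by scene-flow magnitude to an ASCII PLY file.

    points, flow: (N, 3) float arrays. Hypothetical helper, not from the repo.
    """
    mag = np.linalg.norm(flow, axis=1)
    # Map magnitude to a simple red-blue ramp (red = large motion).
    t = mag / (mag.max() + 1e-12)
    colors = np.stack([255 * t, np.zeros_like(t), 255 * (1 - t)], axis=1).astype(np.uint8)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for p, c in zip(points, colors):
            f.write(f"{p[0]:.6f} {p[1]:.6f} {p[2]:.6f} {c[0]} {c[1]} {c[2]}\n")

# Toy data standing in for one frame and its predicted flow.
pts = np.random.rand(100, 3)
flo = np.random.randn(100, 3) * 0.1
export_flow_ply(pts, flo, "flow.ply")
```

KeyShot (and most mesh tools) can import the resulting PLY directly.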
<comments>http://yoursite.com/2022/08/20/Rigid3DSceneFlow/#disqus_thread</comments>
</item>
<item>
<title>Summary of Deep Learning Acceleration Methods</title>
<link>http://yoursite.com/2020/08/01/deep-learningl-acceleration/</link>
<guid>http://yoursite.com/2020/08/01/deep-learningl-acceleration/</guid>
<pubDate>Fri, 31 Jul 2020 16:00:00 GMT</pubDate>
<description>
<blockquote>
<p>Abstract: Deep learning inference acceleration has made remarkable progress in recent years, effectively speeding up inference by optimizing network structure, data, algorithms, and hardware. This article introduces several common deep learning acceleration techniques.</p>
</blockquote>
</description>
<content:encoded><![CDATA[<blockquote><p>Abstract: Deep learning inference acceleration has made remarkable progress in recent years, effectively speeding up inference by optimizing network structure, data, algorithms, and hardware. This article introduces several common deep learning acceleration techniques.</p></blockquote><a id="more"></a>
<table>
<tr align="center"><td>Direction</td><td>Method</td><td>Applicable scenarios</td></tr>
<tr><td rowspan="4">Computation</td><td>Model structure optimization</td><td>Widely applied to CNN models, but compute components must be tailored to the model and workload</td></tr>
<tr><td>Model pruning</td><td>Heavily constrained by the underlying compute platform, with noticeable accuracy loss; suited to accuracy-insensitive inference workloads</td></tr>
<tr><td>Model quantization</td><td>Broadly applicable, with clear speedups</td></tr>
<tr><td>Knowledge distillation</td><td>Suited to inference with large models; distillation quality varies markedly across models</td></tr>
<tr><td rowspan="2">System</td><td>Communication mechanism</td><td>The PS framework suits shallow DNNs such as recommendation models; RingAllReduce suits deep DNNs</td></tr>
<tr><td>Communication volume</td><td>Suited to workloads dominated by communication</td></tr>
<tr><td rowspan="1">Hardware</td><td>GPU/TPU/TensorRT</td><td>TensorRT currently supports inference only</td></tr>
</table>
<h2 id="1-computation-optimization"><a href="#1-computation-optimization" class="headerlink" title="1. Computation Optimization"></a>1. Computation Optimization</h2>
<h3 id="1-1-model-structure-optimization"><a href="#1-1-model-structure-optimization" class="headerlink" title="1.1 Model structure optimization"></a>1.1 Model structure optimization</h3>
<p>Most model structure optimization still relies on manual experience: designing "light" compute components with similar function to replace the "heavy" components of the original model.</p>
<p>This is especially visible in the evolution of CNNs. CNNs themselves replaced fully connected networks with filter components built on the local-perception principle of images, slimming the model through local computation and weight sharing. NIN, VGG, GoogleNet, SqueezeNet, MobileNets, and ShuffleNets then went after the filters, replacing large components with small ones as follows:</p>
<div class="table-container"><table><thead><tr><th>Model</th><th>Optimized structure</th><th>Approach</th></tr></thead><tbody>
<tr><td>NIN</td><td>MLP-conv module</td><td>Replaces fully connected layers with 1x1 convolutions and average pooling to cut parameters</td></tr>
<tr><td>VGG</td><td>3x3 kernels</td><td>Stacks of small 3x3 kernels replace the large kernels of AlexNet</td></tr>
<tr><td>GoogleNet</td><td>Inception module</td><td>Uses convolutions of several sizes to capture information at different scales in a structured way</td></tr>
<tr><td>SqueezeNet</td><td>Fire module</td><td>A squeeze layer compresses the data with 1x1 kernels; an expand layer extracts features with 1x1 and 3x3 kernels</td></tr>
<tr><td>MobileNets</td><td>Depth-wise conv module</td><td>Depthwise separable convolutions replace standard convolutions</td></tr>
<tr><td>ShuffleNets</td><td>Group-conv module<br>Channel-shuffle module</td><td>A channel-shuffle module improves information exchange between groups</td></tr>
</tbody></table></div>
<p>All of these tricks depend on manual experience and are laborious; combinatorial optimization of this kind is better left to machines, which is how <strong>neural architecture search (NAS)</strong> came about.</p>
<h3 id="1-2-model-pruning"><a href="#1-2-model-pruning" class="headerlink" title="1.2 Model pruning"></a>1.2 Model pruning</h3>
<p>The motivation for pruning is that deep models are over-parameterized: bluntly, the model is too fat to run fast and needs to lose weight. Pruning methods fall into two classes: structured and unstructured.</p>
<p>Structured pruning cuts the parameter matrix in regular patterns, for example by rows or columns, so the pruned matrix is still a regular matrix. The mainstream variants prune at the channel, vector, group, or filter level.</p>
<p>Unstructured pruning turns the originally dense parameter matrix into a sparse one of the same size, with an effect similar to parameter regularization. Because most compute platforms do not support sparse matrix computation, only structured pruning actually reduces the amount of computation.</p>
<p>Industry experience is that unstructured pruning loses little accuracy but, constrained by the underlying compute frameworks, yields limited speedup; structured pruning can remove a large fraction of the parameters and deliver substantial speedups, but easily causes a visible accuracy drop.</p>
<h3 id="1-3-model-quantization"><a href="#1-3-model-quantization" class="headerlink" title="1.3 Model quantization"></a>1.3 Model quantization</h3>
<p>Quantization compresses the original network by reducing the number of bits needed to represent each weight, which in turn accelerates computation.</p>
<p>Half precision (FP16) and mixed precision are common, but need support from the underlying compute framework; without it there is no speedup. The other option is INT8 quantization: converting the weights from FP32 to INT8 and running inference in INT8. The speedup comes from fixed-point arithmetic being faster than floating point, but quantizing FP32 to INT8 loses accuracy. Quantization does not change the distribution of the weights; it only maps them from one value range to another, much like normalization.</p>
<p>With naive quantization, floating-point values near zero are not represented precisely by the fixed-point values, so the quantized model's accuracy drops noticeably: uniform quantization, for example, maps floats with varying value density onto fixed-point values with constant density. One remedy is to calibrate the value range during quantization.</p>
<p>Range calibration learns the min/max hyperparameters (i.e. the normalization parameters) that let the network run more accurately after quantization. Depending on when the calibration happens, this splits into post-training quantization and quantization-aware training, represented by NVIDIA Calibration and TensorFlow Quantization-aware Training respectively.</p>
<h3 id="1-4-model-distillation"><a href="#1-4-model-distillation" class="headerlink" title="1.4 Model distillation"></a>1.4 Model distillation</h3>
<p>Distillation is essentially transfer learning with the added goal of compression: distill a large model into a small one that runs fast and well. The basic idea is to use the knowledge learned by the large model as a prior, transfer that prior into a small network, and deploy the small network in production. The large model is called the Teacher and the small one the Student; distillation is the process of the Student learning the Teacher's knowledge. Most distillation frameworks follow this Teacher-Student pattern, sometimes with several Teachers or an added Assistant.</p>
<h2 id="2-system-optimization"><a href="#2-system-optimization" class="headerlink" title="2. System Optimization"></a>2. System Optimization</h2>
<p>The most effective way to speed up training is to add compute resources and scale from single-machine to multi-machine training. The major frameworks offer two modes of multi-GPU distributed training: <strong>model parallelism and data parallelism</strong>.</p>
<p>Model parallelism splits the model into sub-modules, each computed by one device. Data parallelism shards the training data and assigns the shards to different devices for parallel computation. As memory and GPU memory have grown, most deep models fit on a single node and run data-parallel.</p>
<p>The core of a distributed machine learning system is parameter synchronization and updating, and the Parameter Server (PS) is the default synchronization scheme in mainstream deep learning systems. It has two kinds of nodes: Servers, which store the model parameters, and Workers, which do the computation. Before each training iteration a Worker pulls the latest parameters from the Servers and computes locally; after the iteration it sends its gradients to the Servers, which update the parameters.</p>
<p>Training exchanges a great deal of data, and because deep models have many parameters, even a 10GbE network becomes the bottleneck for parameter transfer. Experiments show that in 8-node distributed TensorFlow training of VGG19, 90% of the time is spent waiting on the network, so a lot of research focuses on removing this communication bottleneck.</p>
<p>There are two main directions. One develops new communication mechanisms for optimal parameter exchange, such as RingAllReduce: the compute nodes are organized in a ring, each node talks only to its neighbors, exchanging part of the data each step, and synchronization completes in 2*(N-1) communication steps.</p>
<p>The other tries to reduce the amount of data exchanged between nodes, for example gradient compression, whose core idea is to send only the "important" gradients in each update.</p>
<p>Gradient accumulation and compensation is the most common such technique, and most current research revolves around it. Scaling from a single GPU to many machines remains the most effective route to acceleration; removing communication bottlenecks only chases a better compute cost-performance ratio.</p>
<h2 id="3-hardware-optimization"><a href="#3-hardware-optimization" class="headerlink" title="3. Hardware Optimization"></a>3. Hardware Optimization</h2>
<ul><li>Deep learning applications have two phases<ul><li>Training: use training data to build and optimize the network model</li><li>Inference: integrate the model into an application, feed it real data, and obtain predictions</li></ul></li>
<li>TensorRT deeply optimizes inference efficiency<ul><li>Automatic kernel selection<ul><li>Matrix multiplication and convolution have several CUDA implementations; the best one is chosen for the data size and shape</li></ul></li><li>Computation graph optimization<ul><li>Kernel fusion, fewer data copies, and similar techniques produce an optimized graph for the network</li></ul></li><li>FP16/INT8 support<ul><li>Precision conversion and scaling exploit the hardware's low-precision, high-throughput compute</li></ul></li></ul></li></ul>
<p>TensorRT can optimize and rebuild models trained in different frameworks:</p>
<ul><li>Models trained with Caffe or TensorFlow can be optimized and rebuilt directly, provided all their ops are supported by TensorRT;</li>
<li>Models from MXNet, PyTorch, or other frameworks can, if all their ops are supported, be rebuilt with the TensorRT API and optimized indirectly;</li>
<li>Models from other frameworks can be converted to the ONNX intermediate format and, if the ops are supported, optimized through the TensorRT-ONNX interface;</li>
<li>If a model contains ops TensorRT does not support:<ul><li>TensorFlow models can be converted via tf.contrib.tensorrt, with unsupported ops kept as TensorFlow nodes; MXNet supports a similar graph conversion;</li><li>Unsupported ops can be implemented via the Plugin API and added to the TensorRT graph, as in the Faster Transformer custom extension;</li><li>The network can be split in two: the part whose ops TensorRT supports is converted to a TensorRT graph, while the rest stays in another framework such as MXNet or PyTorch, preferably through the C++ API for a more efficient runtime.</li></ul></li>
<li>TensorRT's INT8 quantization needs a calibration dataset, generally at least 1000 samples reflecting the real workload, and requires a GPU with compute capability sm >= 6.1.</li></ul>]]></content:encoded>
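The FP32-to-INT8 mapping described in the quantization section can be sketched in a few lines. This is a toy symmetric (uniform) quantizer of my own, not NVIDIA's or TensorFlow's implementation; it shows both the value-range remapping and the rounding error that calibration tries to control.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric uniform quantization: map [-max|w|, max|w|] onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the fixed-point values back to floats for comparison."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Rounding error is bounded by half a quantization step -- uniform everywhere,
# which is exactly why values near zero are represented relatively poorly.
max_err = np.abs(w - w_hat).max()
```

Calibrating the clipping range (instead of using the raw max) trades a little saturation error for a finer step size on the bulk of the weights.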
<comments>http://yoursite.com/2020/08/01/deep-learningl-acceleration/#disqus_thread</comments>
</item>
<item>
<title>Install TensorFlow and MXNet on Ubuntu</title>
<link>http://yoursite.com/2018/07/09/install-tensorflow-mxnet/</link>
<guid>http://yoursite.com/2018/07/09/install-tensorflow-mxnet/</guid>
<pubDate>Sun, 08 Jul 2018 16:00:00 GMT</pubDate>
<description>
<blockquote>
<p>Abstract: This post walks through installing CUDA, TensorFlow, and MXNet on an Ubuntu computer, including demos that verify each installation succeeded. The software versions are CUDA 9.0, TensorFlow 1.5, MXNet-cu90 1.2, and Ubuntu 17.10.</p>
</blockquote>
</description>
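The installation this post describes hinges on CUDA's bin and lib64 directories being on PATH and LD_LIBRARY_PATH. A small self-check script can confirm that; this is a convenience sketch of my own, not from the post, which itself verifies with shell commands such as `nvcc -V`.

```python
import os
import shutil

def cuda_env_report(cuda_home="/usr/local/cuda-9.0"):
    """Quick sanity checks for a CUDA setup like the one in this post.

    Hypothetical helper: the default path matches the CUDA 9.0 install used here.
    """
    path_dirs = os.environ.get("PATH", "").split(os.pathsep)
    ld_dirs = os.environ.get("LD_LIBRARY_PATH", "").split(os.pathsep)
    return {
        "cuda_home_exists": os.path.isdir(cuda_home),
        "bin_on_path": os.path.join(cuda_home, "bin") in path_dirs,
        "lib64_on_ld_library_path": os.path.join(cuda_home, "lib64") in ld_dirs,
        "nvcc_found": shutil.which("nvcc") is not None,  # nvcc visible once bin is on PATH
    }

print(cuda_env_report())
```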
<content:encoded><![CDATA[<blockquote><p>Abstract: The blog will introduce the whole process of how to install CUDA, TensorFlow, and MXNet environment on a Ubuntu computer. Demos of testing if the installations are successfull will also be showed. The edition of the software are CUDA 9.0, TensorFlow 1.5, MXNet-cu90 1.2 and Ubuntu 17.10.</p></blockquote><a id="more"></a><p>工欲善其事,必先利其器。为了更便捷地搭建人工神经网络,选择一个深度学习框架还是十分必要的。经历了数次的重装系统和各个版本不兼容的问题,终于成功让CUDA、TensorFlow和MXNet运行在了我的Ubuntu操作系统之上。</p><p>数次重装的经历也让我明白了一个小道理。其实我一直的习惯就是啥软件都用最新的,所以截止2018年6月,Ubuntu的最新版本是18.04,CUDA的最新版本是9.2,所以我就毫不犹豫的安装上了,然后就开始了TensorFlow和MXNet的安装,结果安装了一天,都有问题,程序都无法运行。最后各种查阅确认了问题的原因是,Ubuntu版本太新,CUDA不支持,CUDA版本太新,俩框架TensorFlow和MXNet也都支持的不好。所以人生经验get:</p><blockquote><p>最新的不一定是最好的。</p></blockquote><p>经过以上的折腾,最后选定了如下的各个软件版本:</p><blockquote><p>Ubuntu 17.10<br>CUDA 9.0<br>TensorFlow 1.5<br>MXNet 1.2</p></blockquote><p>经过测试,这个版本组合是可以的,所以推荐。</p><p>怎么安装Ubuntu操作系统这里就不再赘述了,这不是本文的重点,所以接下来的叙述我们就默认我们已经拥有一台安装有Ubuntu 17.10操作系统的电脑。</p><h2 id="1-安装NVIDIA显卡驱动"><a href="#1-安装NVIDIA显卡驱动" class="headerlink" title="1 安装NVIDIA显卡驱动"></a>1 安装NVIDIA显卡驱动</h2><p>安装好的Ubuntu系统自带X server驱动,而我们要想安装CUDA以及后续的一些软件,我们需要有NVIDIA显卡驱动。</p><p>我们先看看我们看看我们的显卡是什么型号,以及推荐的显卡驱动:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ubuntu-drivers devices</span><br></pre></td></tr></table></figure><p>我的计算机显示如下:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">modalias : pci:v000010DEd00001380sv000010DEsd00001380bc03sc00i00</span><br><span class="line">vendor : NVIDIA Corporation</span><br><span class="line">model : GM107 [GeForce GTX 750 Ti]</span><br><span class="line">driver : nvidia-384 - distro 
non-free recommended</span><br><span class="line">driver : nvidia-340 - distro non-free</span><br><span class="line">driver : xserver-xorg-video-nouveau - distro free builtin</span><br></pre></td></tr></table></figure><p>我们看到,系统推荐了三款驱动,我们可以使用以下命令自动安装推荐的驱动:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">sudo ubuntu-drivers autoinstall</span><br></pre></td></tr></table></figure><p>你也可以选择只安装其中一个驱动,命令如下</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">sudo apt install nvidia-384</span><br></pre></td></tr></table></figure><p>经过一段时间后,显卡驱动就安装好了。打开Software&Updates就可以看到,驱动已经由原来的X server变为了 NVIDIA 384.<br><img src="https://s1.ax1x.com/2018/10/24/is9MZj.jpg" alt="3-4.jpg"></p><h2 id="2-安装CUDA与cuDNN"><a href="#2-安装CUDA与cuDNN" class="headerlink" title="2 安装CUDA与cuDNN"></a>2 安装CUDA与cuDNN</h2><p>我们去NVIDIA官网,找到CUDA9.0的<a href="https://developer.nvidia.com/cuda-90-download-archive" target="_blank" rel="noopener">下载链接</a></p><p>由于我们的操作系统是Ubuntu17.10,所以我们很容易找对对用的CUDA版本,如下图。我们需要下载runfile(local)和三个补丁(Patch1-3)。</p><p><img src="https://s1.ax1x.com/2018/10/24/is93iq.jpg" alt="3-1.jpg"></p><p>下载完成后,我们再找到cuDNN的<a href="https://developer.nvidia.com/cudnn" target="_blank" rel="noopener">下载链接</a>,需要注册后再能下载。注册后我们便可以进入到下载页面,找到cuDNN的v7.1.4 for CUDA9.0进行下载即可。下载完毕后,切到默认的Downloads文件夹,可以看到 cudnn-9.0-linux-x64-v7.tgz压缩包。接下来就可以对他们进行安装了。<br><img src="https://s1.ax1x.com/2018/10/24/is9lon.jpg" alt="3-2.jpg"></p><h3 id="2-1-gcc降级"><a href="#2-1-gcc降级" class="headerlink" title="2.1 gcc降级"></a>2.1 gcc降级</h3><p>由于CUDA 9.0仅支持GCC 6.0及以下版本,而现在的最新版本是7.0版本,所以我们需要对gcc进行降级。</p><p>我们可以先在命令行输入<code>gcc</code>命令,以确定系统是否已经安装好gcc,如果安装了,则需要降级,如果未安装,我们可以直接安装低版本。假设我们的系统已经安装有gcc-7.3版本,我们现在演示降级方法。我们需要安装6.0版本。</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span 
class="line">2</span><br></pre></td><td class="code"><pre><span class="line">sudo apt-get install gcc-6</span><br><span class="line">sudo apt-get install g++-6</span><br></pre></td></tr></table></figure><p>装完后进入到/usr/bin目录下</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ls -l gcc*</span><br></pre></td></tr></table></figure><p>会显示以下结果</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">lrwxrwxrwx 1 root root 7th May 16 18:16 /usr/bin/gcc -> gcc-7</span><br></pre></td></tr></table></figure><p>发现gcc链接到gcc-7.0, 需要将它改为链接到gcc-6.0,方法如下:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">sudo mv gcc gcc.bak #备份 </span><br><span class="line">sudo ln -s gcc-6 gcc #重新链接</span><br></pre></td></tr></table></figure><p>同理,对g++也做同样的修改(如果没安装g++,需要先安装g++):</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">ls -l g++*</span><br></pre></td></tr></table></figure><p>会显示以下结果</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">lrwxrwxrwx 1 root root 7th May 16 18:16 /usr/bin/g++ -> g++-7</span><br></pre></td></tr></table></figure><p>需要将g++链接改为g++-6.0:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">sudo mv g++ g++.bak </span><br><span class="line">sudo ln -s g++-6 g++</span><br></pre></td></tr></table></figure><p>再查看gcc和g++版本号:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">gcc -v</span><br><span class="line">g++ -v</span><br></pre></td></tr></table></figure><p>均显示gcc version 6.0 ,说明gcc 6.0安装成功。</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">COLLECT_GCC=gcc</span><br><span class="line">gcc version 6.4.0 20171010 (Ubuntu 6.4.0-8ubuntu1)</span><br><span class="line">COLLECT_GCC=g++</span><br><span class="line">gcc version 6.4.0 20171010 (Ubuntu 6.4.0-8ubuntu1)</span><br></pre></td></tr></table></figure><h3 id="2-2-安装CUDA"><a href="#2-2-安装CUDA" class="headerlink" title="2.2 安装CUDA"></a>2.2 安装CUDA</h3><p>输入命令安装</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">sudo sh cuda_9.0.176_384.81_linux.run</span><br></pre></td></tr></table></figure><p>需要注意的是,之前已经安装过显卡驱动程序,故在提问是否安装显卡驱动时选择no,其他 选择默认路径或者yes即可。<br>然后,继续执行以下操作安装3个 patch :</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">sudo sh cuda_9.0.176.1_linux.run</span><br><span class="line">sudo sh cuda_9.0.176.2_linux.run</span><br><span class="line">sudo sh cuda_9.0.176.3_linux.run</span><br></pre></td></tr></table></figure><p>安装完毕之后,将以下两条加入.bashrc文件中:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">gedit ~/.bashrc</span><br></pre></td></tr></table></figure><p>在文件最后加入如下语句:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td 
class="code"><pre><span class="line">export PATH=/usr/local/cuda-9.0/bin${PATH:+:$PATH}} #注意,根据自己的版本,修改cuda-9.2/8.0...</span><br><span class="line">export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} #注意,根据自己的版本,修改cuda-9.2/8.0...</span><br></pre></td></tr></table></figure><p>运行命令使其生效</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">source ~/.bashrc</span><br></pre></td></tr></table></figure><p>那么,到这一步,cuda 就安装完成了,可以在终端输入<code>nvcc -V</code>,看看是否显示正确信息。</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">nvcc: NVIDIA (R) Cuda compiler driver</span><br><span class="line">Copyright (c) 2005-2017 NVIDIA Corporation</span><br><span class="line">Built on Fri_Sep__1_21:08:03_CDT_2017</span><br><span class="line">Cuda compilation tools, release 9.0, V9.0.176</span><br></pre></td></tr></table></figure><h3 id="2-3-安装cuDNN"><a href="#2-3-安装cuDNN" class="headerlink" title="2.3 安装cuDNN"></a>2.3 安装cuDNN</h3><p>cuDNN的安装,就是将cuDNN包内的文件,拷贝到cuda文件夹中即可。下载完毕后,切到默认的Downloads文件夹,可以看到cudnn-9.0-linux-x64-v7.1.tgz压缩包。先解压,然后将其中的内容复制到CUDA安装文件夹里面。</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">sudo cp cuda/include/cudnn.h /usr/local/cuda/include </span><br><span class="line"><span class="meta">#</span>注意,解压后的文件夹名称为cuda,将对应文件复制到 /usr/local中的cuda内</span><br><span class="line">sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64 </span><br><span class="line">sudo chmod a+r /usr/local/cuda/include/cudnn.h</span><br><span class="line">sudo chmod 
a+r /usr/local/cuda/lib64/libcudnn*</span><br></pre></td></tr></table></figure><p>到此处,所有的安装就完成了。</p><blockquote><p>一些提示:<br>由于安装过程比较容易出现问题,所以最好将文件<code>cuda_9.0.176_384.81_linux.run</code>,<code>cuda_9.0.176.1_linux.run</code>,<br><code>cuda_9.0.176.2_linux.run</code>,<code>cuda_9.0.176.3_linux.run</code>,<code>cudnn-9.0-linux-x64-v7.1.tgz</code>拷到优盘或其他移动存储设备,当需要重装系统的时候,可以省去重新下载的时间耗费。</p></blockquote><h2 id="3-安装TensorFlow"><a href="#3-安装TensorFlow" class="headerlink" title="3 安装TensorFlow"></a>3 安装TensorFlow</h2><p>官网参考文档地址:<a href="https://www.tensorflow.org/install/" target="_blank" rel="noopener">https://www.tensorflow.org/install/</a> ,安装的方式也有好几种,通过pip, docker, Anacodnda等,这里给出的是pip的安装方式。</p><h3 id="3-1-确定python及pip的版本"><a href="#3-1-确定python及pip的版本" class="headerlink" title="3.1 确定python及pip的版本"></a>3.1 确定python及pip的版本</h3><p>输入命令<code>python -V</code>确认python的版本,需要2.7或者是3.3+。</p><p>输入命令pip -V或pip3 -V确认pip的版本,建议pip和pip3在8.1以上,如果不是则使用<code>sudo apt-get install python-pip python-dev</code>进行更新。如果系统里还没有安装pip或者pip3,则用以下命令安装:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">sudo apt install pip</span><br><span class="line">sudo apt install pip3</span><br></pre></td></tr></table></figure><p>我安装的是python3,所以以下我选用的都是pip3安装方式。</p><h3 id="3-2-安装tensorflow"><a href="#3-2-安装tensorflow" class="headerlink" title="3.2 安装tensorflow"></a>3.2 安装tensorflow</h3><p>根据自己的情况选择以下命令之一进行安装:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">pip install tensorflow # Python 2.7; 仅支持CPU</span><br><span class="line">pip3 install tensorflow # Python 3.n; 仅支持CPU</span><br><span class="line">pip install tensorflow-gpu # Python 2.7; 支持GPU</span><br><span 
class="line">pip3 install tensorflow-gpu # Python 3.n; 支持GPU</span><br></pre></td></tr></table></figure><p>该步骤为可选步骤,如果上一步失败了,可以通过以下命令来安装:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">sudo pip install --upgrade TF_PYTHON_URL # Python 2.7</span><br><span class="line">sudo pip3 install --upgrade TF_PYTHON_URL # Python 3.N</span><br></pre></td></tr></table></figure><p>其中,<code>TF_PYTHON_URL</code>为TensrorFlow的python包,不同的操作系统、python版本、GPU支持状况需要选择不同的包,例如OS为Linux,python版本为3.4,仅支持CPU的情况下,<code>TF_PYTHON_URL</code>应当替换为 <a href="https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp34-cp34m-linux_x86_64.whl" target="_blank" rel="noopener">https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp34-cp34m-linux_x86_64.whl</a> 。</p><p>但是,<code>storage.googleapis.com</code>的链接下载还是非常慢,所以我们这里采用从清华镜像站下载:<a href="https://mirrors.tuna.tsinghua.edu.cn/help/tensorflow/" target="_blank" rel="noopener">https://mirrors.tuna.tsinghua.edu.cn/help/tensorflow/</a> ,目前GPU版本的TensorFlow清华镜像站只提供到1.5版本(官方是1.8版本),所有我们这里安装1.5版本。</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">sudo pip3 install --upgrade https://mirrors.tuna.tsinghua.edu.cn/tensorflow/linux/gpu/tensorflow_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl</span><br></pre></td></tr></table></figure><h3 id="3-3-验证tensorflow是否安装成功"><a href="#3-3-验证tensorflow是否安装成功" class="headerlink" title="3.3 验证tensorflow是否安装成功"></a>3.3 验证tensorflow是否安装成功</h3><p>启动终端,输入<code>python3</code>,输入以下代码:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> tensorflow <span 
class="keyword">as</span> tf</span><br><span class="line">hello = tf.constant(<span class="string">'Hello, TensorFlow!'</span>)</span><br><span class="line">sess = tf.Session()</span><br><span class="line">print(sess.run(hello))</span><br></pre></td></tr></table></figure><p>如果输出<code>Hello, TensorFlow!</code>则代表安装成功。</p><h2 id="4-安装MXNet-Gluon"><a href="#4-安装MXNet-Gluon" class="headerlink" title="4 安装MXNet(Gluon)"></a>4 安装MXNet(Gluon)</h2><h3 id="4-1-安装Miniconda"><a href="#4-1-安装Miniconda" class="headerlink" title="4.1 安装Miniconda"></a>4.1 安装Miniconda</h3><p>根据操作系统下载并安装Miniconda(网址:<a href="https://conda.io/miniconda.html" target="_blank" rel="noopener">https://conda.io/miniconda.html</a> )。<br><img src="https://s1.ax1x.com/2018/10/24/is9Qds.jpg" alt="3-3.jpg"></p><p>安装时需要回答问题,均回答yes即可。</p><p>安装完成后,我们需要让conda生效。需要运行一次<code>source ~/.bashrc</code>或重启命令行。</p><h3 id="4-2-安装MXNet-Gluon"><a href="#4-2-安装MXNet-Gluon" class="headerlink" title="4.2 安装MXNet(Gluon)"></a>4.2 安装MXNet(Gluon)</h3><p>下载包含本书全部代码的包,解压后进入文件夹。运行如下命令。</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">mkdir gluon_tutorials_zh-1.0 && cd gluon_tutorials_zh-1.0</span><br><span class="line">curl https://zh.gluon.ai/gluon_tutorials_zh-1.0.tar.gz -o tutorials.tar.gz</span><br><span class="line">tar -xzvf tutorials.tar.gz && rm tutorials.tar.gz</span><br></pre></td></tr></table></figure><p>但是,这个下载的特别慢,所以直接用下载软件对下载链接进行下载:<a href="https://zh.gluon.ai/gluon_tutorials_zh.tar.gz" target="_blank" rel="noopener">https://zh.gluon.ai/gluon_tutorials_zh.tar.gz</a> ,然后解压缩,并进入文件夹。此链接也可以用于将来代码的更新。</p><p>安装运行所需的依赖包并激活该运行环境。我们可以先通过运行下面命令来配置下载源,从而使用国内镜像加速下载:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td 
class="code"><pre><span class="line"><span class="meta">#</span> 优先使用清华 conda 镜像。</span><br><span class="line">conda config --prepend channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/</span><br><span class="line"><span class="meta">#</span> 或者选用科大 conda 镜像。</span><br><span class="line">conda config --prepend channels http://mirrors.ustc.edu.cn/anaconda/pkgs/free/</span><br></pre></td></tr></table></figure><p>使用文本编辑器打开之前解压得到的代码包里的文件“gluon_tutorials_zh-1.0/environment.yml”。如果电脑上装的是9.0版本的CUDA,将该文件中的字符串“mxnet”改为“mxnet-cu90”。如果电脑上安装了其他版本的CUDA(比如7.5、8.0、9.2等),对该文件中的字符串“mxnet”做类似修改(比如改为“mxnet-cu75”、“mxnet-cu80”、“mxnet-cu92”等)。之后安装的便是MXNet的GPU版本。</p><p>然后运行以下命令安装运行环境。</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">conda env create -f environment.yml</span><br></pre></td></tr></table></figure><p>接下来就会自动安装做需要的环境。其中mxnet-cu80下载的非常慢,所以我们找到豆瓣的镜像源,单独对其进行下载安装:<a href="http://pypi.doubanio.com/simple/mxnet-cu90/" target="_blank" rel="noopener">http://pypi.doubanio.com/simple/mxnet-cu90/</a><br>选择了<code>mxnet_cu90-1.2.0b20180428-py2.py3-none-manylinux1_x86_64.whl</code>进行下载。<br>然后安装:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">sudo pip3 install mxnet_cu90-1.2.0b20180428-py2.py3-none-manylinux1_x86_64.whl</span><br></pre></td></tr></table></figure><p>安装完毕后,然后运行以下命令安装还未安装的部分并激活运行环境。</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">conda env create -f environment.yml</span><br><span class="line">source activate gluon</span><br></pre></td></tr></table></figure><p>打开Juputer notebook。运行下面命令。</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span 
class="line">jupyter notebook</span><br></pre></td></tr></table></figure><p>Then open <a href="http://localhost:8888" target="_blank" rel="noopener">http://localhost:8888</a> in a browser (it usually opens automatically) to view and run the code for each section of the book.</p><p>To exit the activated environment, run the following command.</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">source deactivate</span><br></pre></td></tr></table></figure><h3 id="4-3-验证MXNet-Gluon-是否安装成功"><a href="#4-3-验证MXNet-Gluon-是否安装成功" class="headerlink" title="4.3 Verify that MXNet (Gluon) installed successfully"></a>4.3 Verify that MXNet (Gluon) installed successfully</h3><p>Enter this command in a terminal:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">nvidia-smi</span><br></pre></td></tr></table></figure><p>Check that it prints the correct information:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line">Mon Jul 9 16:16:28 2018 </span><br><span class="line">+-----------------------------------------------------------------------------+</span><br><span class="line">| NVIDIA-SMI 384.130 Driver Version: 384.130 |</span><br><span class="line">|-------------------------------+----------------------+----------------------+</span><br><span class="line">| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. 
ECC |</span><br><span class="line">| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |</span><br><span class="line">|===============================+======================+======================|</span><br><span class="line">| 0 GeForce GTX 750 Ti Off | 00000000:01:00.0 On | N/A |</span><br><span class="line">| 31% 41C P8 1W / 38W | 353MiB / 1995MiB | 0% Default |</span><br><span class="line">+-------------------------------+----------------------+----------------------+</span><br><span class="line"> </span><br><span class="line">+-----------------------------------------------------------------------------+</span><br><span class="line">| Processes: GPU Memory |</span><br><span class="line">| GPU PID Type Process name Usage |</span><br><span class="line">|=============================================================================|</span><br><span class="line">| 0 842 G /usr/lib/xorg/Xorg 24MiB |</span><br><span class="line">| 0 899 G /usr/bin/gnome-shell 48MiB |</span><br><span class="line">| 0 1099 G /usr/lib/xorg/Xorg 159MiB |</span><br><span class="line">| 0 1243 G /usr/bin/gnome-shell 117MiB |</span><br><span class="line">+-----------------------------------------------------------------------------+</span><br></pre></td></tr></table></figure><p>Start a terminal, enter <code>python3</code>, and type the following code:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> mxnet <span class="keyword">as</span> mx</span><br><span class="line"><span class="keyword">from</span> mxnet <span class="keyword">import</span> nd</span><br><span class="line"><span class="keyword">from</span> mxnet.gluon <span class="keyword">import</span> nn</span><br><span class="line">a = nd.array([<span class="number">1</span>, <span
class="number">2</span>, <span class="number">3</span>], ctx=mx.gpu())</span><br><span class="line">a</span><br></pre></td></tr></table></figure><p>If the output is</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[ <span class="number">1.</span> <span class="number">2.</span> <span class="number">3.</span>]</span><br><span class="line"><NDArray <span class="number">3</span> @gpu(<span class="number">0</span>)></span><br></pre></td></tr></table></figure><p>then the installation succeeded.</p><p>Tip: when installing a dependency library (taking OpenCV as an example), if installing directly with <code>pip install opencv-python</code> is too slow, try the Tsinghua mirror: <code>pip install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-python</code>.</p>]]></content:encoded>
<comments>http://yoursite.com/2018/07/09/install-tensorflow-mxnet/#disqus_thread</comments>
</item>
<item>
<title>Summary and Usage of Parallel Computing</title>
<link>http://yoursite.com/2018/07/01/parallel-computing/</link>
<guid>http://yoursite.com/2018/07/01/parallel-computing/</guid>
<pubDate>Sat, 30 Jun 2018 16:00:00 GMT</pubDate>
<description>
<blockquote>
        <p>Abstract: Parallel computing can speed up our programs. From data parallelism to process parallelism, there are many parallel tools and approaches we can use. This blog introduces the Parallel Patterns Library (PPL), Open Multi-Processing (OpenMP), and Open Computing Language (OpenCL). Other important parallel tools such as CUDA and Hadoop will be covered as topics in later blogs.</p>
</blockquote>
</description>
      <content:encoded><![CDATA[<blockquote><p>Abstract: Parallel computing can speed up our programs. From data parallelism to process parallelism, there are many parallel tools and approaches we can use. This blog introduces the Parallel Patterns Library (PPL), Open Multi-Processing (OpenMP), and Open Computing Language (OpenCL). Other important parallel tools such as CUDA and Hadoop will be covered as topics in later blogs.</p></blockquote><a id="more"></a><p>In general, there are four levels of parallelism in computing: bit-level parallelism (BLP), instruction-level parallelism (ILP), data-level parallelism (DLP), and task-level parallelism (more commonly thread-level parallelism, TLP for short).</p><p>There are likewise four modes in which a computer can perform parallel calculations: SISD, MISD, SIMD, and MIMD; their meanings are illustrated as follows.</p><p><img src="http://images2015.cnblogs.com/blog/46139/201604/46139-20160406205648093-1178215697.jpg" alt="MIMD-mean"></p><p>Several tools are available for parallel computing, at scales ranging from multi-thread and multi-process to multi-core and multi-computer. What follows is a rough classification.</p><p><strong>MPI</strong><br>MPI provides process-level parallelism. It adopts a distributed-memory model and realizes parallel execution explicitly (the programmer specifies how data is distributed); messages are passed between processes. MPI scales well, but its programming model is complicated.</p><p><strong>Pthreads</strong><br>Pthreads provides thread-level parallelism. It uses a shared-memory model and is available only on POSIX systems (Linux, macOS, Solaris, HP-UX, etc.). It is a library that can be linked into C programs. Currently, the standard C++ shared-memory thread library is still under development. 
It may be more convenient to use this library in the future to implement C++ programs.</p><p><strong>OpenMP</strong><br>OpenMP provides thread-level parallelism. It uses a shared-memory model and implements parallel execution implicitly (the runtime handles data allocation), with poorer scalability. Hence OpenMP applies only to SMP (Symmetric Multi-Processing) and DSM (Distributed Shared Memory) systems, and is not suitable for clusters.</p><p><strong>OpenCL</strong><br>OpenCL is a framework for heterogeneous platforms and can be applied on CPUs, GPUs, or other types of processors. OpenCL is a language (based on C99) for writing kernels (functions running on OpenCL devices) and consists of a set of APIs for defining and controlling the platform. It provides parallel computing based on task partitioning and data segmentation. OpenCL is similar to the other two open industry standards, OpenGL and OpenAL, which are used for 3D graphics and computer audio, respectively.</p><p><strong>CUDA</strong><br>The GPU is designed to perform complex mathematical calculations. There are many streaming multiprocessors (SMs) in a GPU, which are similar to CPU cores. CUDA is a useful tool launched by Nvidia for programming the GPU. The similarities and differences between OpenCL and CUDA:</p><ul><li>Differences: <ul><li>OpenCL is a general-purpose heterogeneous-platform programming language, which is cumbersome because it must take different devices into account.</li><li>CUDA is a framework invented by Nvidia specifically for programming on GPGPUs. It is easy to use and easy to get started with.</li></ul></li><li>Similarities:<ul><li>Both are based on task parallelism and data parallelism.</li></ul></li></ul><p><strong>Hadoop</strong><br>Hadoop is a distributed system infrastructure developed by the Apache Foundation. Users can develop distributed programs without understanding the underlying details of the distributed architecture. 
Hadoop makes full use of the power of clusters for high-speed computing and storage.</p><h2 id="1-PPL"><a href="#1-PPL" class="headerlink" title="1 PPL"></a>1 PPL</h2><p>The Parallel Patterns Library (PPL) provides algorithms that concurrently perform work on collections of data. These algorithms resemble those provided by the Standard Template Library (STL).</p><h3 id="1-1-The-parallel-for-Algorithm"><a href="#1-1-The-parallel-for-Algorithm" class="headerlink" title="1.1 The parallel_for Algorithm"></a>1.1 The <code>parallel_for</code> Algorithm</h3><p>You can convert many <code>for</code> loops to use <code>parallel_for</code>. However, the <code>parallel_for</code> algorithm differs from the <code>for</code> statement in the following ways:</p><ul><li><p>The <code>parallel_for</code> algorithm does not execute the tasks in a pre-determined order.</p></li><li><p>The <code>parallel_for</code> algorithm does not support arbitrary termination conditions. The parallel_for algorithm stops when the current value of the iteration variable is one less than last.</p></li><li><p>The <code>_Index_type</code> type parameter must be an <strong>integral</strong> type. This integral type can be signed or unsigned.</p></li><li><p>The loop iteration must be forward. The <code>parallel_for</code> algorithm throws an exception of type <code>std::invalid_argument</code> if the <code>_Step</code> parameter is less than 1.</p></li><li><p>The exception-handling mechanism for the <code>parallel_for</code> algorithm differs from that of a for loop. If multiple exceptions occur simultaneously in a parallel loop body, the runtime propagates only one of the exceptions to the thread that called <code>parallel_for</code>. In addition, when one loop iteration throws an exception, the runtime does not immediately stop the overall loop. 
Instead, the loop is placed in the cancelled state and the runtime discards any tasks that have not yet started.</p></li></ul><h3 id="1-2-The-parallel-for-each-Algorithm"><a href="#1-2-The-parallel-for-each-Algorithm" class="headerlink" title="1.2 The parallel_for_each Algorithm"></a>1.2 The <code>parallel_for_each</code> Algorithm</h3><p>The <code>parallel_for_each</code> algorithm resembles the STL <code>std::for_each</code> algorithm, except that the <code>parallel_for_each</code> algorithm executes the tasks concurrently. Like other parallel algorithms, <code>parallel_for_each</code> does not execute the tasks in a specific order.</p><h3 id="1-3-The-parallel-invoke-Algorithm"><a href="#1-3-The-parallel-invoke-Algorithm" class="headerlink" title="1.3 The parallel_invoke Algorithm"></a>1.3 The <code>parallel_invoke</code> Algorithm</h3><p>The <code>concurrency::parallel_invoke</code> algorithm executes a set of tasks in parallel. It does not return until each task finishes. This algorithm is useful when you have several independent tasks that you want to execute at the same time.</p><h3 id="1-4-The-parallel-transform-Algorithm"><a href="#1-4-The-parallel-transform-Algorithm" class="headerlink" title="1.4 The parallel_transform Algorithm"></a>1.4 The <code>parallel_transform</code> Algorithm</h3><p>You can use the parallel transform algorithm to perform many data parallelization operations. For example, you can:</p><ul><li>Adjust the brightness of an image, and perform other image processing operations.</li><li>Sum or take the dot product between two vectors, and perform other numeric calculations on vectors.</li><li>Perform 3-D ray tracing, where each iteration refers to one pixel that must be rendered.</li></ul><blockquote><p>Important: The iterator that you supply for the output of <code>parallel_transform</code> must completely overlap the input iterator or not overlap at all. 
The behavior of this algorithm is unspecified if the input and output iterators partially overlap.</p></blockquote><h3 id="1-5-The-parallel-reduce-Algorithms"><a href="#1-5-The-parallel-reduce-Algorithms" class="headerlink" title="1.5 The parallel_reduce Algorithms"></a>1.5 The <code>parallel_reduce</code> Algorithms</h3><p>The <code>parallel_reduce</code> algorithm is useful when you have a sequence of operations that satisfy the associative property. Here are some of the operations that you can perform with <code>parallel_reduce</code>:</p><ul><li>Multiply sequences of matrices to produce a matrix.</li><li>Multiply a vector by a sequence of matrices to produce a vector.</li><li>Compute the length of a sequence of strings.</li><li>Combine a list of elements, such as strings, into one element.</li></ul><h3 id="1-6-Code-Example-for-PPL-Algorithms"><a href="#1-6-Code-Example-for-PPL-Algorithms" class="headerlink" title="1.6 Code Example for PPL Algorithms"></a>1.6 Code Example for PPL Algorithms</h3><p>Here is a code example for the above algorithms.<br>Please add the header <code>#include <ppl.h></code><br><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span 
class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span 
class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span 
class="meta-string"><ppl.h></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><vector></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><random></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><sstream></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><iostream></span></span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="class"><span class="keyword">struct</span> <span class="title">my_data</span></span></span><br><span class="line"><span class="class">{</span></span><br><span class="line"><span class="keyword">int</span> num;</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">string</span> note;</span><br><span class="line">};</span><br><span class="line"></span><br><span class="line"><span class="comment">// Returns the result of adding a value to itself.</span></span><br><span class="line"><span class="keyword">template</span> <<span class="keyword">typename</span> T></span><br><span class="line"><span class="function">T <span class="title">twice</span><span class="params">(<span class="keyword">const</span> T& t)</span> </span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="keyword">return</span> t + t;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test Parallel_for algorithm</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_Parallel_for</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span 
class="line">concurrency::parallel_for(<span class="number">1</span>, <span class="number">6</span>, [](<span class="keyword">int</span> value) {</span><br><span class="line"><span class="built_in">std</span>::wstringstream ss;</span><br><span class="line">ss << value << L' <span class="string">';</span></span><br><span class="line">std::wcout << ss.str();</span><br><span class="line">});</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test Parallel_for_each algorithm: regular</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_Parallel_for_each1</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">vector</span><<span class="built_in">std</span>::<span class="built_in">string</span>> my_str = { <span class="string">"hi"</span>, <span class="string">"world"</span>, <span class="string">"hello"</span>, <span class="string">"c"</span>, <span class="string">"language"</span>};</span><br><span class="line"></span><br><span class="line">concurrency::parallel_for_each(begin(my_str), end(my_str), [](<span class="built_in">std</span>::<span class="built_in">string</span> value) {</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << value<<<span class="built_in">std</span>::<span class="built_in">endl</span>;</span><br><span class="line">});</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test Parallel_for_each algorithm: wrong use, data should be independent</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_Parallel_for_each2</span><span class="params">()</span></span></span><br><span 
class="line"><span class="function"></span>{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">vector</span><<span class="built_in">std</span>::<span class="built_in">string</span>> my_str = { <span class="string">"hi"</span>, <span class="string">"world"</span>, <span class="string">"hello"</span>, <span class="string">"c"</span>, <span class="string">"language"</span> };</span><br><span class="line"><span class="keyword">int</span> mark = <span class="number">100</span>;</span><br><span class="line">concurrency::parallel_for_each(begin(my_str), end(my_str), [&mark](<span class="built_in">std</span>::<span class="built_in">string</span> value) {</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << value <<<span class="string">" "</span><<mark<< <span class="built_in">std</span>::<span class="built_in">endl</span>;</span><br><span class="line">});</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test Parallel_for_each algorithm: test class data parallel</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_Parallel_for_each3</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line">my_data data1 = { <span class="number">100</span>,<span class="string">"hi"</span> };</span><br><span class="line">my_data data2 = { <span class="number">200</span>,<span class="string">"world"</span> };</span><br><span class="line">my_data data3 = { <span class="number">300</span>,<span class="string">"language"</span> };</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">vector</span><my_data> my_str = { data1,data2,data3 };</span><br><span class="line">concurrency::parallel_for_each(begin(my_str), end(my_str), [](my_data value) 
{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << value.num << <span class="string">" "</span> <<value.note << <span class="built_in">std</span>::<span class="built_in">endl</span>;</span><br><span class="line">});</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test Parallel_invoke algorithm</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_Parallel_invoke</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="keyword">int</span> n = <span class="number">54</span>;</span><br><span class="line"><span class="keyword">double</span> d = <span class="number">5.6</span>;</span><br><span class="line"><span class="built_in">std</span>::wstring s = <span class="string">L"Hello"</span>;</span><br><span class="line"></span><br><span class="line">concurrency::parallel_invoke(</span><br><span class="line">[&n] { n = twice(n); },</span><br><span class="line">[&d] { d = twice(d); },</span><br><span class="line">[&s] { s = twice(s);}</span><br><span class="line">);</span><br><span class="line"><span class="built_in">std</span>::wcout << n <<<span class="string">" "</span><< d<<<span class="string">" "</span><<s<<<span class="built_in">std</span>::<span class="built_in">endl</span>;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test Parallel_transform algorithm</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_Parallel_transform</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">vector</span><<span 
class="keyword">int</span>> values(<span class="number">1250000</span>);</span><br><span class="line"><span class="built_in">std</span>::generate(begin(values), end(values), <span class="built_in">std</span>::mt19937(<span class="number">42</span>));</span><br><span class="line"></span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">vector</span><<span class="keyword">int</span>> results(values.size());</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">vector</span><<span class="keyword">int</span>> results2(values.size());</span><br><span class="line"></span><br><span class="line"><span class="comment">// Negate each element in parallel.</span></span><br><span class="line">concurrency::parallel_transform(begin(values), end(values), begin(results), [](<span class="keyword">int</span> n) {</span><br><span class="line"><span class="keyword">return</span> -n;</span><br><span class="line">});</span><br><span class="line"></span><br><span class="line"><span class="comment">// Alternatively, use the negate class to perform the operation.</span></span><br><span class="line">concurrency::parallel_transform(begin(values), end(values),</span><br><span class="line">begin(results), <span class="built_in">std</span>::negate<<span class="keyword">int</span>>());</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment">// Demonstrate use of parallel_transform together with a binary function.</span></span><br><span class="line"><span class="comment">// This example uses a lambda expression.</span></span><br><span class="line">concurrency::parallel_transform(begin(values), end(values), begin(results),</span><br><span class="line">begin(results2), [](<span class="keyword">int</span> n, <span class="keyword">int</span> m) {</span><br><span class="line"><span class="keyword">return</span> n - m;</span><br><span class="line">});</span><br><span 
class="line"></span><br><span class="line"><span class="comment">// Alternatively, use the minus class to perform the same operation:</span></span><br><span class="line">concurrency::parallel_transform(begin(values), end(values), begin(results),</span><br><span class="line">begin(results2), <span class="built_in">std</span>::minus<<span class="keyword">int</span>>());</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test Parallel_reduce algorithm</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_Parallel_reduce</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">vector</span><<span class="built_in">std</span>::wstring> words;</span><br><span class="line">words.push_back(<span class="string">L"Hello "</span>);</span><br><span class="line">words.push_back(<span class="string">L"i "</span>);</span><br><span class="line">words.push_back(<span class="string">L"like "</span>);</span><br><span class="line">words.push_back(<span class="string">L"c "</span>);</span><br><span class="line">words.push_back(<span class="string">L"language, "</span>);</span><br><span class="line">words.push_back(<span class="string">L"and "</span>);</span><br><span class="line">words.push_back(<span class="string">L"parallel "</span>);</span><br><span class="line">words.push_back(<span class="string">L"programming."</span>);</span><br><span class="line"></span><br><span class="line"><span class="comment">// Reduce the vector to one string in parallel.</span></span><br><span class="line"><span class="built_in">std</span>::wcout << concurrency::parallel_reduce(begin(words), end(words), <span class="built_in">std</span>::wstring()) << <span class="built_in">std</span>::<span class="built_in">endl</span>;</span><br><span 
class="line"></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">main</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="comment">// Alternatively, use one of the following test functions</span></span><br><span class="line"></span><br><span class="line"><span class="comment">//test_Parallel_for();</span></span><br><span class="line"><span class="comment">//test_Parallel_for_each1();</span></span><br><span class="line"><span class="comment">//test_Parallel_for_each2();</span></span><br><span class="line"><span class="comment">//test_Parallel_for_each3();</span></span><br><span class="line"><span class="comment">//test_Parallel_invoke();</span></span><br><span class="line"><span class="comment">//test_Parallel_transform();</span></span><br><span class="line">test_Parallel_reduce();</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> <span class="number">0</span>;</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><h2 id="2-OpenMP"><a href="#2-OpenMP" class="headerlink" title="2 OpenMP"></a>2 OpenMP</h2><p>The basic format of a Compiler Directive (<code>#pragma omp</code>) is as follows:<br><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp directive-name [clause[ [,] clause] ...]</span></span><br></pre></td></tr></table></figure></p><p>Among them, “[]” indicates optional, and each Compiler Directive acts on the statement that follows it (the portion bracketed by “{}” in C++ is a compound statement).</p><p>Directive-name can be: <code>parallel</code>, <code>for</code>, <code>sections</code>, <code>single</code>, <code>atomic</code>, 
<code>barrier</code>, <code>critical</code>, <code>flush</code>, <code>master</code>, <code>ordered</code>, <code>threadprivate</code> (11 in total; only the first 4 take optional clauses).</p><p>A clause modifies a directive and sets some of its parameters. The clause can be: <code>copyin(variable-list)</code>, <code>copyprivate(variable-list)</code>, <code>default(shared | none)</code>, <code>firstprivate(variable-list)</code>, <code>if(expression)</code>, <code>lastprivate(variable-list)</code>, <code>nowait</code>, <code>num_threads(num)</code>, <code>ordered</code>, <code>private(variable-list)</code>, <code>reduction(operation: variable-list)</code>, <code>schedule(type[,size])</code>, <code>shared(variable-list)</code> (13 in total).</p><p>For example, <code>#pragma omp parallel</code> means that the following statement will be executed in parallel by multiple threads, with the number of threads chosen by the system (generally equal to the number of logical processors; a 4-core, 8-thread i5 CPU, for instance, has 8 logical processors). Optional clauses can be added to the directive: <code>#pragma omp parallel num_threads(4)</code> still executes the following statement in parallel, but with exactly 4 threads.</p><h3 id="2-1-parallel"><a href="#2-1-parallel" class="headerlink" title="2.1 parallel"></a>2.1 <code>parallel</code></h3><p><code>parallel</code> indicates that the following statement will be executed in parallel by multiple threads, as described above. 
The statement (or block of statements) after <code>#pragma omp parallel</code> is called a parallel region.</p><p>You can use the <code>num_threads</code> clause to change the default number of threads.</p><h3 id="2-2-for"><a href="#2-2-for" class="headerlink" title="2.2 for"></a>2.2 <code>for</code></h3><p>The <code>for</code> directive divides the iterations of the C++ <code>for</code> loop among multiple threads: no iteration is executed by more than one thread, and together the threads cover exactly all iterations of the loop. The loop must satisfy certain restrictions so that the number of iterations can be determined before execution; for example, it must not contain <code>break</code>.</p><h3 id="2-3-section"><a href="#2-3-section" class="headerlink" title="2.3 section"></a>2.3 <code>section</code></h3><p>The <code>sections</code> directive is used for task parallelism: it indicates that the following code block contains <code>section</code> blocks, which will be distributed among multiple threads and executed in parallel.</p><h3 id="2-4-critical"><a href="#2-4-critical" class="headerlink" title="2.4 critical"></a>2.4 <code>critical</code></h3><p>While one thread is executing a statement marked <code>critical</code>, no other thread may execute it (the statement is a critical section).</p><h3 id="2-5-Code-Example-for-OpenMP-Algorithms"><a href="#2-5-Code-Example-for-OpenMP-Algorithms" class="headerlink" title="2.5 Code Example for OpenMP Algorithms"></a>2.5 Code Example for OpenMP Algorithms</h3><p>Here is a code example for the directives above.<br>Please add the header <code>#include<omp.h></code>.</p><blockquote><p>Important: Please enable OpenMP support in the project configuration: <code>Property~C/C++~Language~OpenMP Support~Yes</code></p></blockquote><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span 
class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span 
class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">include</span><span class="meta-string"><omp.h></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span><span class="meta-string"><iostream></span></span></span><br><span class="line"></span><br><span class="line"><span class="comment">//Test parallel directive</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_openmp_parallel1</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp parallel</span></span><br><span class="line">{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << omp_get_thread_num();</span><br><span class="line">}</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test parallel directive with thread number</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_openmp_parallel2</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp parallel num_threads(3)</span></span><br><span class="line">{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << omp_get_thread_num();</span><br><span class="line">}</span><br><span class="line">}</span><br><span 
class="line"></span><br><span class="line"><span class="comment">//Test for directive: one format</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_openmp_parallel_for1</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="keyword">int</span> data[<span class="number">1000</span>];</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp parallel</span></span><br><span class="line">{</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp for</span></span><br><span class="line"><span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">0</span>; i < <span class="number">1000</span>; ++i)</span><br><span class="line">data[i] = i;</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test for directive: the other format</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_openmp_parallel_for2</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="keyword">int</span> data[<span class="number">1000</span>];</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp parallel for</span></span><br><span class="line"><span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">0</span>; i < <span class="number">1000</span>; ++i)</span><br><span class="line"> data[i] = i;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test section directive</span></span><br><span class="line"><span class="function"><span 
class="keyword">void</span> <span class="title">test_openmp_parallel_section</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp parallel sections</span></span><br><span class="line">{</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp section</span></span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << omp_get_thread_num();</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp section</span></span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << omp_get_thread_num();</span><br><span class="line">}</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Test critical directive</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">test_openmp_parallel_critical</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp parallel num_threads(4)</span></span><br><span class="line">{</span><br><span class="line"><span class="meta">#<span class="meta-keyword">pragma</span> omp critical</span></span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << omp_get_thread_num() << omp_get_thread_num();</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">main</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span 
class="line"><span class="comment">// Alternatively, use one of the following test functions</span></span><br><span class="line"></span><br><span class="line"><span class="comment">//test_openmp_parallel1();</span></span><br><span class="line"><span class="comment">//test_openmp_parallel2();</span></span><br><span class="line"><span class="comment">//test_openmp_parallel_for1();</span></span><br><span class="line"><span class="comment">//test_openmp_parallel_for2();</span></span><br><span class="line"><span class="comment">//test_openmp_parallel_section();</span></span><br><span class="line">test_openmp_parallel_critical();</span><br><span class="line"> <span class="keyword">return</span> <span class="number">0</span>;</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h2 id="3-OpenCL"><a href="#3-OpenCL" class="headerlink" title="3 OpenCL"></a>3 OpenCL</h2><p>The main design goal of OpenCL is to provide a parallel computing platform that is suitable for a variety of different devices. However, this generality comes at the cost of convenience, so OpenCL programming is quite verbose. 
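Note that the host code in this section loads a kernel file <code>Add.cl</code> containing a kernel named <code>add_kernel</code>; the kernel source itself is not shown here. A minimal version consistent with that host code (my own sketch, one work-item per array element, not the author's file) might look like:

```c
// Add.cl (hypothetical contents): elementwise addition of two arrays.
// Each work-item handles the element at its global index.
__kernel void add_kernel(__global const float *a,
                         __global const float *b,
                         __global float *result)
{
    int gid = get_global_id(0);
    result[gid] = a[gid] + b[gid];
}
```

Since the host code launches one work-item per element (<code>globalWorkSize = ARRAY_SIZE</code>), the kernel needs no explicit loop.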
So only an example of adding two arrays of numbers is provided here for demonstration.<br>The following is the code example for OpenCL programming.</p><p>For environment configuration:</p><ul><li>Add the header <code>#include <CL/cl.h></code></li><li>The header and library files can be found in the CUDA Toolkit, so add the include and library directories as follows (this is my configuration; substitute your own toolkit path):<ul><li><code>Property~C/C++~Regular~additional include directory~D:\NVIDIA\CUDA\CUDAToolkit\include</code></li><li><code>Property~Linker~Regular~additional library directory~D:\NVIDIA\CUDA\CUDAToolkit\lib\x64</code></li><li><code>Property~Linker~Import~additional dependencies~OpenCL.lib</code></li></ul></li></ul><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span 
class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span 
class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span 
class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span class="line">167</span><br><span class="line">168</span><br><span class="line">169</span><br><span class="line">170</span><br><span class="line">171</span><br><span class="line">172</span><br><span class="line">173</span><br><span class="line">174</span><br><span class="line">175</span><br><span class="line">176</span><br><span class="line">177</span><br><span class="line">178</span><br><span class="line">179</span><br><span class="line">180</span><br><span class="line">181</span><br><span class="line">182</span><br><span class="line">183</span><br><span class="line">184</span><br><span class="line">185</span><br><span class="line">186</span><br><span class="line">187</span><br><span class="line">188</span><br><span class="line">189</span><br><span class="line">190</span><br><span class="line">191</span><br><span class="line">192</span><br><span class="line">193</span><br><span class="line">194</span><br><span class="line">195</span><br><span class="line">196</span><br><span class="line">197</span><br><span class="line">198</span><br><span class="line">199</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><iostream></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><fstream></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><sstream></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><CL/cl.h></span></span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span 
class="keyword">const</span> <span class="keyword">int</span> ARRAY_SIZE = <span class="number">1000</span>;</span><br><span class="line"></span><br><span class="line"><span class="comment">//1. Select the OpenCL platform and create a context</span></span><br><span class="line"><span class="function">cl_context <span class="title">CreateContext</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line">cl_int errNum;</span><br><span class="line">cl_uint numPlatforms;</span><br><span class="line">cl_platform_id firstPlatformId;</span><br><span class="line">cl_context context = <span class="literal">NULL</span>;</span><br><span class="line"></span><br><span class="line"><span class="comment">//Select the first of the available platforms</span></span><br><span class="line">errNum = clGetPlatformIDs(<span class="number">1</span>, &firstPlatformId, &numPlatforms);</span><br><span class="line"><span class="keyword">if</span> (errNum != CL_SUCCESS || numPlatforms <= <span class="number">0</span>)</span><br><span class="line">{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cerr</span> << <span class="string">"Failed to find any OpenCL platforms."</span> << <span class="built_in">std</span>::<span class="built_in">endl</span>;</span><br><span class="line"><span class="keyword">return</span> <span class="literal">NULL</span>;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Create an OpenCL context</span></span><br><span class="line">cl_context_properties contextProperties[] =</span><br><span class="line">{</span><br><span class="line">CL_CONTEXT_PLATFORM,</span><br><span class="line">(cl_context_properties)firstPlatformId,</span><br><span class="line"><span class="number">0</span></span><br><span class="line">};</span><br><span class="line">context = clCreateContextFromType(contextProperties, 
CL_DEVICE_TYPE_GPU,</span><br><span class="line"><span class="literal">NULL</span>, <span class="literal">NULL</span>, &errNum);</span><br><span class="line"></span><br><span class="line"><span class="keyword">return</span> context;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment">//2. Create a device and create a command queue</span></span><br><span class="line"><span class="function">cl_command_queue <span class="title">CreateCommandQueue</span><span class="params">(cl_context context, cl_device_id *device)</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line">cl_int errNum;</span><br><span class="line">cl_device_id *devices;</span><br><span class="line">cl_command_queue commandQueue = <span class="literal">NULL</span>;</span><br><span class="line"><span class="keyword">size_t</span> deviceBufferSize = <span class="number">-1</span>;</span><br><span class="line"></span><br><span class="line"><span class="comment">// Get device buffer size</span></span><br><span class="line">errNum = clGetContextInfo(context, CL_CONTEXT_DEVICES, <span class="number">0</span>, <span class="literal">NULL</span>, &deviceBufferSize);</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> (deviceBufferSize <= <span class="number">0</span>)</span><br><span class="line">{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cerr</span> << <span class="string">"No devices available."</span>;</span><br><span class="line"><span class="keyword">return</span> <span class="literal">NULL</span>;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">// Allocate buffer space for devices</span></span><br><span class="line">devices = <span class="keyword">new</span> cl_device_id[deviceBufferSize / <span 
class="keyword">sizeof</span>(cl_device_id)];</span><br><span class="line">errNum = clGetContextInfo(context, CL_CONTEXT_DEVICES, deviceBufferSize, devices, <span class="literal">NULL</span>);</span><br><span class="line"></span><br><span class="line"><span class="comment">//Pick the first of the available devices</span></span><br><span class="line">commandQueue = clCreateCommandQueue(context, devices[<span class="number">0</span>], <span class="number">0</span>, <span class="literal">NULL</span>);</span><br><span class="line"></span><br><span class="line">*device = devices[<span class="number">0</span>];</span><br><span class="line"><span class="keyword">delete</span>[] devices;</span><br><span class="line"><span class="keyword">return</span> commandQueue;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment">// 3.Create and build program</span></span><br><span class="line"><span class="function">cl_program <span class="title">CreateProgram</span><span class="params">(cl_context context, cl_device_id device, <span class="keyword">const</span> <span class="keyword">char</span>* fileName)</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line">cl_int errNum;</span><br><span class="line">cl_program program;</span><br><span class="line"></span><br><span class="line"><span class="built_in">std</span>::<span class="function">ifstream <span class="title">kernelFile</span><span class="params">(fileName, <span class="built_in">std</span>::ios::in)</span></span>;</span><br><span class="line"><span class="keyword">if</span> (!kernelFile.is_open())</span><br><span class="line">{</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cerr</span> << <span class="string">"Failed to open file for reading: "</span> << fileName << <span class="built_in">std</span>::<span class="built_in">endl</span>;</span><br><span 
class="line"><span class="keyword">return</span> <span class="literal">NULL</span>;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">ostringstream</span> oss;</span><br><span class="line">oss << kernelFile.rdbuf();</span><br><span class="line"></span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">string</span> srcStdStr = oss.str();</span><br><span class="line"><span class="keyword">const</span> <span class="keyword">char</span> *srcStr = srcStdStr.c_str();</span><br><span class="line">program = clCreateProgramWithSource(context, <span class="number">1</span>,</span><br><span class="line">(<span class="keyword">const</span> <span class="keyword">char</span>**)&srcStr,</span><br><span class="line"><span class="literal">NULL</span>, <span class="literal">NULL</span>);</span><br><span class="line"></span><br><span class="line">errNum = clBuildProgram(program, <span class="number">0</span>, <span class="literal">NULL</span>, <span class="literal">NULL</span>, <span class="literal">NULL</span>, <span class="literal">NULL</span>);</span><br><span class="line"></span><br><span class="line"><span class="keyword">return</span> program;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Create and build objects</span></span><br><span class="line"><span class="function"><span class="keyword">bool</span> <span class="title">CreateMemObjects</span><span class="params">(cl_context context, cl_mem memObjects[<span class="number">3</span>],</span></span></span><br><span class="line"><span class="function"><span class="params"><span class="keyword">float</span> *a, <span class="keyword">float</span> *b)</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line">memObjects[<span class="number">0</span>] = clCreateBuffer(context, CL_MEM_READ_ONLY 
| CL_MEM_COPY_HOST_PTR,</span><br><span class="line"><span class="keyword">sizeof</span>(<span class="keyword">float</span>) * ARRAY_SIZE, a, <span class="literal">NULL</span>);</span><br><span class="line">memObjects[<span class="number">1</span>] = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,</span><br><span class="line"><span class="keyword">sizeof</span>(<span class="keyword">float</span>) * ARRAY_SIZE, b, <span class="literal">NULL</span>);</span><br><span class="line">memObjects[<span class="number">2</span>] = clCreateBuffer(context, CL_MEM_READ_WRITE,</span><br><span class="line"><span class="keyword">sizeof</span>(<span class="keyword">float</span>) * ARRAY_SIZE, <span class="literal">NULL</span>, <span class="literal">NULL</span>);</span><br><span class="line"><span class="keyword">return</span> <span class="literal">true</span>;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment">// Release OpenCL Resources</span></span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">Cleanup</span><span class="params">(cl_context context, cl_command_queue commandQueue,</span></span></span><br><span class="line"><span class="function"><span class="params">cl_program program, cl_kernel kernel, cl_mem memObjects[<span class="number">3</span>])</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">0</span>; i < <span class="number">3</span>; i++)</span><br><span class="line">{</span><br><span class="line"><span class="keyword">if</span> (memObjects[i] != <span class="number">0</span>)</span><br><span class="line">clReleaseMemObject(memObjects[i]);</span><br><span class="line">}</span><br><span class="line"><span class="keyword">if</span> (commandQueue != <span 
class="number">0</span>)</span><br><span class="line">clReleaseCommandQueue(commandQueue);</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> (kernel != <span class="number">0</span>)</span><br><span class="line">clReleaseKernel(kernel);</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> (program != <span class="number">0</span>)</span><br><span class="line">clReleaseProgram(program);</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> (context != <span class="number">0</span>)</span><br><span class="line">clReleaseContext(context);</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">main</span><span class="params">(<span class="keyword">int</span> argc, <span class="keyword">char</span>** argv)</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line">cl_context context = <span class="number">0</span>;</span><br><span class="line">cl_command_queue commandQueue = <span class="number">0</span>;</span><br><span class="line">cl_program program = <span class="number">0</span>;</span><br><span class="line">cl_device_id device = <span class="number">0</span>;</span><br><span class="line">cl_kernel kernel = <span class="number">0</span>;</span><br><span class="line">cl_mem memObjects[<span class="number">3</span>] = { <span class="number">0</span>, <span class="number">0</span>, <span class="number">0</span> };</span><br><span class="line">cl_int errNum;</span><br><span class="line"></span><br><span class="line"><span class="comment">// 1.Select the OpenCL platform and create a context</span></span><br><span class="line">context = CreateContext();</span><br><span class="line"></span><br><span class="line"><span class="comment">// 2. 
Create a device and create a command queue</span></span><br><span class="line">commandQueue = CreateCommandQueue(context, &device);</span><br><span class="line"></span><br><span class="line"><span class="comment">// 3. Create and build program objects</span></span><br><span class="line">program = CreateProgram(context, device, <span class="string">"Add.cl"</span>);</span><br><span class="line"></span><br><span class="line"><span class="comment">// 4.Create OpenCL kernel and allocate memory space</span></span><br><span class="line">kernel = clCreateKernel(program, <span class="string">"add_kernel"</span>, <span class="literal">NULL</span>);</span><br><span class="line"></span><br><span class="line"><span class="comment">//Create data to process</span></span><br><span class="line"><span class="keyword">float</span> result[ARRAY_SIZE];</span><br><span class="line"><span class="keyword">float</span> a[ARRAY_SIZE];</span><br><span class="line"><span class="keyword">float</span> b[ARRAY_SIZE];</span><br><span class="line"><span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">0</span>; i < ARRAY_SIZE; i++)</span><br><span class="line">{</span><br><span class="line">a[i] = (<span class="keyword">float</span>)i;</span><br><span class="line">b[i] = (<span class="keyword">float</span>)(ARRAY_SIZE - i);</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">//Create a memory object</span></span><br><span class="line"><span class="keyword">if</span> (!CreateMemObjects(context, memObjects, a, b))</span><br><span class="line">{</span><br><span class="line">Cleanup(context, commandQueue, program, kernel, memObjects);</span><br><span class="line"><span class="keyword">return</span> <span class="number">1</span>;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment">// 5.Set kernel data and execute kernel</span></span><br><span 
class="line">errNum = clSetKernelArg(kernel, <span class="number">0</span>, <span class="keyword">sizeof</span>(cl_mem), &memObjects[<span class="number">0</span>]);</span><br><span class="line">errNum |= clSetKernelArg(kernel, <span class="number">1</span>, <span class="keyword">sizeof</span>(cl_mem), &memObjects[<span class="number">1</span>]);</span><br><span class="line">errNum |= clSetKernelArg(kernel, <span class="number">2</span>, <span class="keyword">sizeof</span>(cl_mem), &memObjects[<span class="number">2</span>]);</span><br><span class="line"></span><br><span class="line"><span class="keyword">size_t</span> globalWorkSize[<span class="number">1</span>] = { ARRAY_SIZE };</span><br><span class="line"><span class="keyword">size_t</span> localWorkSize[<span class="number">1</span>] = { <span class="number">1</span> };</span><br><span class="line"></span><br><span class="line">errNum = clEnqueueNDRangeKernel(commandQueue, kernel, <span class="number">1</span>, <span class="literal">NULL</span>,</span><br><span class="line">globalWorkSize, localWorkSize,</span><br><span class="line"><span class="number">0</span>, <span class="literal">NULL</span>, <span class="literal">NULL</span>);</span><br><span class="line"></span><br><span class="line"><span class="comment">//6.Read the execution result and release OpenCL resources</span></span><br><span class="line">errNum = clEnqueueReadBuffer(commandQueue, memObjects[<span class="number">2</span>], CL_TRUE,</span><br><span class="line"><span class="number">0</span>, ARRAY_SIZE * <span class="keyword">sizeof</span>(<span class="keyword">float</span>), result,</span><br><span class="line"><span class="number">0</span>, <span class="literal">NULL</span>, <span class="literal">NULL</span>);</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">0</span>; i < ARRAY_SIZE; i++)</span><br><span class="line">{</span><br><span 
class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << result[i] << <span class="string">" "</span>;</span><br><span class="line">}</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << <span class="built_in">std</span>::<span class="built_in">endl</span>;</span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">cout</span> << <span class="string">"Executed program successfully."</span> << <span class="built_in">std</span>::<span class="built_in">endl</span>;</span><br><span class="line">getchar();</span><br><span class="line">Cleanup(context, commandQueue, program, kernel, memObjects);</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> <span class="number">0</span>;</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>The kernel file <code>Add.cl</code> is:</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">__<span class="function">kernel <span class="keyword">void</span> <span class="title">add_kernel</span><span class="params">(__global <span class="keyword">const</span> <span class="keyword">float</span> *a, __global <span class="keyword">const</span> <span class="keyword">float</span> *b, __global <span class="keyword">float</span> *result)</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="keyword">int</span> gid = get_global_id(<span class="number">0</span>);</span><br><span class="line">result[gid] = a[gid] + b[gid];</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>All of the code files can be found in my <a 
href="https://github.com/ChampionLi/Parallel-Computing" target="_blank" rel="noopener">GitHub</a></p>]]></content:encoded>
<comments>http://yoursite.com/2018/07/01/parallel-computing/#disqus_thread</comments>
</item>
<item>
<title>Introduction to Decision Tree Algorithm</title>
<link>http://yoursite.com/2018/06/09/decision-tree/</link>
<guid>http://yoursite.com/2018/06/09/decision-tree/</guid>
<pubDate>Fri, 08 Jun 2018 16:00:00 GMT</pubDate>
<description>
<blockquote>
        <p>Abstract: The decision tree is one of the most basic algorithms in machine learning, and it is also an important supervised learning method. This post explains the basic principles of the decision tree algorithm and several of its variants, and walks through a simple example showing how to construct a decision tree.</p>
</blockquote>
</description>
<content:encoded><![CDATA[<blockquote><p>Abstract: The decision tree is one of the most basic algorithms in machine learning, and it is also an important supervised learning method. This post explains the basic principles of the decision tree algorithm and several of its variants, and walks through a simple example showing how to construct a decision tree.</p></blockquote><a id="more"></a><p>In machine learning there are two especially important classes of problems: classification and regression. The method we discuss today, the decision tree, is a very basic technique used for both, and an important supervised learning method in its own right.</p><p>The name says it all. "Decision" means deciding which of several categories the thing we are classifying belongs to: a tree whose output is a discrete value is a decision tree, while one whose output is continuous is a regression tree. In more formal language, a decision tree outputs a discrete random variable and a regression tree outputs a continuous one. This article focuses on the former; once you understand how a decision tree works, regression trees follow naturally.</p><p>"Tree" refers to the model's tree-shaped structure, whose main advantages are readability and fast classification. A tree is made of directed edges and nodes, and the nodes come in two kinds: internal nodes and leaf nodes.</p><p>In essence, a decision tree is a predictive model that encodes a mapping between object attributes and object values. Each internal node tests a feature or attribute, each leaf node represents a class, each branch corresponds to a possible attribute value, and each leaf corresponds to the value of the objects described by the path from the root to that leaf.</p><p>A decision tree can be viewed as a collection of if-then rules, or as a conditional probability distribution defined on the feature space and the class space. As a rule set it has an important property: it is <strong>mutually exclusive and complete</strong>, meaning every instance is covered by exactly one path, i.e. exactly one rule.</p><p>So what kind of problem can a decision tree actually solve? Let's unfold the explanation with a concrete and deliberately simple scenario.</p><p>Suppose Xiao Ming can get to work in one of two ways: by ride-hailing taxi or by shared bike. Which one he chooses depends on three factors: the weather (bad or fair), his mood (good or bad), and whether he is running late. Assume the three factors determine his commute as in the table below:</p><div class="table-container"><table><thead><tr><th style="text-align:center">Weather</th><th style="text-align:center">Mood</th><th style="text-align:center">Running late?</th><th style="text-align:center">Commute</th></tr></thead><tbody><tr><td style="text-align:center">Fair</td><td style="text-align:center">Good</td><td style="text-align:center">No</td><td style="text-align:center">Bike</td></tr><tr><td style="text-align:center">Fair</td><td style="text-align:center">Good</td><td style="text-align:center">Yes</td><td style="text-align:center">Taxi</td></tr><tr><td style="text-align:center">Fair</td><td style="text-align:center">Bad</td><td style="text-align:center">No</td><td style="text-align:center">Bike</td></tr><tr><td style="text-align:center">Fair</td><td style="text-align:center">Bad</td><td style="text-align:center">Yes</td><td style="text-align:center">Taxi</td></tr><tr><td style="text-align:center">Bad</td><td style="text-align:center">Good</td><td style="text-align:center">No</td><td style="text-align:center">Taxi</td></tr><tr><td style="text-align:center">Bad</td><td style="text-align:center">Good</td><td style="text-align:center">Yes</td><td style="text-align:center">Taxi</td></tr><tr><td style="text-align:center">Bad</td><td style="text-align:center">Bad</td><td style="text-align:center">Yes</td><td style="text-align:center">Taxi</td></tr></tbody></table></div><p>This table is our sample set. A careful reader will notice that one case is missing: bad weather and a bad mood, but plenty of time. I left that row out on purpose so it can serve as the test set — after building a decision tree, you can predict which way Xiao Ming commutes in that situation.</p><p>Now that we have the data, how do we build a decision tree?</p><p><strong>Constructing a decision tree requires answering three questions:</strong></p><ul><li><p><strong>which attribute to place at the root node;</strong></p></li><li><p><strong>which attribute to place at each node below it;</strong></p></li><li><p><strong>when to stop growing the tree.</strong></p></li></ul><p>To answer them, we need to introduce a few concepts.</p><p>The first is information entropy. Tom Mitchell's book describes it as follows:</p><blockquote><p>It determines the minimum number of bits needed to encode the classification of an arbitrary member of the set S (i.e. a member drawn at random with uniform probability).</p></blockquote><p>Honestly, that sentence took me some effort to digest. In plainer language: <strong>entropy measures how hard it is to predict the value of a random variable Y — in other words, the uncertainty of Y</strong>.</p><p>Two examples make this concrete. Standing on Earth with a piece of iron in your hand, if you simply let go without applying any force, will the iron fall or fly upward? Common sense makes the outcome trivial to judge, so the entropy here can be taken as 0.</p><p>Now judge whether a fair coin, once tossed, lands heads or tails. This is much harder: both outcomes are equally likely, so we cannot make any confident call about which face ends up on top. The judgment is as difficult as it gets, and the entropy can be taken as 1.</p><p>How do we turn this intuition into a mathematical definition? Many researchers have proposed expressions for entropy; the table below lists some of the existing definitions.</p><div class="table-container"><table><thead><tr><th style="text-align:center">Entropy</th><th style="text-align:center">Expression</th></tr></thead><tbody><tr><td style="text-align:center">Shannon Entropy</td><td style="text-align:center">$H_{sha}(\pi) = \sum_{i=1}^{m}p_i \log_2 \frac{1}{p_i}$</td></tr><tr><td style="text-align:center">Pal Entropy</td><td style="text-align:center">$H_{pal}(\pi) = \sum_{i=1}^{m}p_i e^{1-p_i}$</td></tr><tr><td style="text-align:center">Gini Index</td><td style="text-align:center">$H_{gin}(\pi) = \sum_{i=1}^{m}p_i (1-p_i)$</td></tr><tr><td style="text-align:center">Goodman-Kruskal Coefficient</td><td style="text-align:center">$H_{goo}(\pi) = 1-\max_{i=1}^{m} p_i$</td></tr></tbody></table></div><p>Although many definitions exist, the Shannon entropy is the one used in most situations, so I adopt it for everything that follows.</p><p>With the entropy expression in hand we can plot the entropy of a binary classification problem, as shown below.</p><p><img src="https://s1.ax1x.com/2018/10/24/is9idA.jpg" alt="2-1.jpg"></p><p>The plot matches our earlier examples exactly: when an event is very easy to judge — we believe with high probability that it will or will not happen — its entropy tends to 0; when it is hardest to judge, which is precisely when all outcomes are equally likely, its entropy is 1.</p><p>Building on entropy, the second concept is conditional entropy: <strong>the difficulty of predicting the random variable Y given that the random variable X is known</strong>.</p><p>Entropy measures difficulty; the word "conditional" means we are told a condition before making the judgment. In the example above, whenever Xiao Ming is running late he takes a taxi, so once I learn that he is running late today, judging whether he takes a taxi becomes trivial and the conditional entropy can be taken as 0. Sticking with Shannon's definition, given $P(Y|X)$ and $P(X)$,</p><script type="math/tex; mode=display">H(Y|X)=\sum _i P(x_i)H(Y|x_i)</script><p>With entropy and conditional entropy in place, the third concept follows naturally: information gain, defined as</p><script type="math/tex; mode=display">Gain(X,Y)=H(Y)-H(Y|X)</script><p>The expression reads off directly. The minuend is the entropy — the difficulty of judging the outcome when nobody tips us off; the subtrahend is the conditional entropy — the difficulty once a condition is known. <strong>Information gain therefore measures how much the condition X reduces the difficulty of predicting Y, i.e. how much X helps in predicting Y</strong>.</p><p>Think of the TV quiz show Happy Dictionary (开心辞典): when contestants cannot answer a question they may use one of three lifelines. A lifeline is exactly a condition, and the amount by which it reduces the difficulty of answering is the information gain. If the difficulty drops a lot, the information gain is large.</p><p>With these three concepts we can answer the first question raised when constructing a decision tree: which attribute goes at the root?</p><p><strong>The rule is: place at the root the attribute with the largest information gain.</strong></p><p>Because the entropy of a given data set is fixed, this is equivalent to choosing the attribute with the smallest conditional entropy, so we only need to find that attribute to determine the root.</p><p>For our example, the conditional entropy of each individual attribute is computed as follows:</p><script type="math/tex; mode=display">H(way|weather)=\frac{4}{7}(-\frac{2}{4}\log_2 \frac{2}{4}- \frac{2}{4}\log_2 \frac{2}{4})+\frac{3}{7}\times 0 = 0.57143 \\H(way|mood)=\frac{4}{7}(-\frac{1}{4}\log_2 \frac{1}{4}- \frac{3}{4}\log_2 \frac{3}{4})+\frac{3}{7}(-\frac{2}{3}\log_2 \frac{2}{3}- \frac{1}{3}\log_2 \frac{1}{3}) = 0.85714 \\H(way|time)=\frac{3}{7}(-\frac{1}{3}\log_2 \frac{1}{3}- \frac{2}{3}\log_2 \frac{2}{3})+\frac{4}{7}\times 0 = 0.39356</script><p>The attribute "is Xiao Ming running late" has the smallest conditional entropy, so it becomes the root node. The embryonic decision tree is shown below.</p><p><img src="https://s1.ax1x.com/2018/10/24/is9Ait.jpg" alt="2-1.jpg"></p><p>Knowing how to place the root also resolves the second question: which attribute goes at the nodes below. We simply treat each node just obtained as a new root and again minimize the conditional entropy. Conditioning on "Xiao Ming is not running late", the remaining table is:</p><div class="table-container"><table><thead><tr><th style="text-align:center">Weather</th><th style="text-align:center">Mood</th><th style="text-align:center">Commute</th></tr></thead><tbody><tr><td style="text-align:center">Fair</td><td style="text-align:center">Good</td><td style="text-align:center">Bike</td></tr><tr><td style="text-align:center">Fair</td><td style="text-align:center">Bad</td><td style="text-align:center">Bike</td></tr><tr><td style="text-align:center">Bad</td><td style="text-align:center">Good</td><td style="text-align:center">Taxi</td></tr></tbody></table></div><p>We recompute the conditional entropies:</p><script type="math/tex; mode=display">H(way|weather)=0 \\H(way|mood)=\frac{2}{3}(-\frac{1}{2}\log_2 \frac{1}{2}- \frac{1}{2}\log_2 \frac{1}{2})+\frac{1}{3}\times 0 = 0.66667</script><p>The weather attribute has the smallest conditional entropy, namely 0, so the next node tests the weather. At this point we can in fact stop growing the tree. Why — and how do we decide in general when to stop?</p><p>Because we keep minimizing the conditional entropy, <strong>we stop when the information gain of every remaining feature is very small, or when there are no features left to choose from</strong>. With that, our decision tree is complete.</p><p>The final decision tree is shown below:</p><p><img src="https://s1.ax1x.com/2018/10/24/is9EJP.jpg" alt="2-3.jpg"></p><p>From the tree it is easy to read off that under bad weather and a bad mood, but with time to spare, Xiao Ming chooses to take a taxi.</p><p>That covers essentially everything about how to build a decision tree. In the literature the commonly used algorithms are <strong>ID3</strong>, <strong>C4.5</strong> and <strong>CART</strong>. Their methods and ideas are basically identical to what I described above; they differ only in the objective used for splitting. I minimized the conditional entropy; ID3 uses the information gain, C4.5 uses the information gain ratio, and CART uses the Gini index, which appears in the table of entropy definitions above.</p><p>That completes the principles and algorithms of decision trees. Since preventing overfitting is an important topic in machine learning, let me also briefly introduce pruning.</p><p>Overfitting arises because during learning we think too much about classifying the training data correctly, which can produce an overly complex tree. Once a tree is too complex, its classification of test data becomes less accurate — that is overfitting. In the spirit of Occam's razor, the tree should be simplified, and this process is called <strong>pruning</strong>.</p><p>Pruning is done by minimizing the overall loss function of the decision tree, defined as</p><script type="math/tex; mode=display">C_{\alpha}(T)=C(T)+\alpha |T|</script><p>where the tree $T$ has $|T|$ leaf nodes, $C(T)$ is the model's prediction error on the training data, i.e. how well the model fits that data, $|T|$ measures model complexity, and the non-negative parameter $\alpha$ controls the trade-off between the two.</p><p>$C(T)$ is computed as</p><script type="math/tex; mode=display">C(T)=-\sum _{t=1}^{|T|}\sum _{k=1}^{K} N_{tk}\log\frac{N_{tk}}{N_t}</script><p>where $t$ is a leaf node of $T$ containing $N_t$ samples, of which $N_{tk}$ belong to class $k$, with $k=1,2,…,K$.</p><p>With this expression we can minimize the loss function, working recursively from the leaves upward. If collapsing a group of leaf nodes back into their parent turns the overall tree $T_B$ (before pruning) into $T_A$ (after pruning), with loss functions $C_{\alpha}(T_B)$ and $C_{\alpha}(T_A)$ respectively, then whenever</p><script type="math/tex; mode=display">C_{\alpha}(T_A) \leq C_{\alpha}(T_B)</script><p>we prune, turning the parent node into a new leaf node.</p><p>Decision tree learning is already implemented in the open-source library OpenCV, so I close with a piece of code that trains a tree on the example above — think of it as a Hello World exercise you can reproduce yourself. I will not repeat how to configure OpenCV here. Readers interested in the internals can also study the implementation, whose source can be downloaded from the OpenCV website.</p><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span 
class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string">"stdafx.h"</span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string">"opencv2/core/core_c.h"</span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string">"opencv2/ml/ml.hpp"</span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><iostream></span></span></span><br><span class="line"></span><br><span class="line"><span class="keyword">using</span> <span class="keyword">namespace</span> cv;</span><br><span class="line"><span class="keyword">using</span> <span class="keyword">namespace</span> <span class="built_in">std</span>;</span><br><span class="line"></span><br><span class="line"><span class="keyword">int</span> _tmain(<span class="keyword">int</span> argc, _TCHAR* argv[])</span><br><span class="line">{</span><br><span class="line"><span class="comment">//init data</span></span><br><span class="line"><span class="keyword">float</span> fdata[<span class="number">7</span>][<span class="number">3</span>] = {{<span class="number">0</span>,<span class="number">0</span>,<span class="number">0</span>,},{<span class="number">0</span>,<span 
class="number">0</span>,<span class="number">1</span>},{<span class="number">0</span>,<span class="number">1</span>,<span class="number">0</span>},{<span class="number">0</span>,<span class="number">1</span>,<span class="number">1</span>},{<span class="number">1</span>,<span class="number">0</span>,<span class="number">0</span>},{<span class="number">1</span>,<span class="number">0</span>,<span class="number">1</span>},{<span class="number">1</span>,<span class="number">1</span>,<span class="number">1</span>}};</span><br><span class="line"><span class="function">Mat <span class="title">data</span><span class="params">(<span class="number">7</span>,<span class="number">3</span>,CV_32F,fdata)</span></span>;</span><br><span class="line"><span class="keyword">float</span> fresponses[<span class="number">7</span>] ={<span class="number">0</span>,<span class="number">1</span>,<span class="number">0</span>,<span class="number">1</span>,<span class="number">1</span>,<span class="number">1</span>,<span class="number">1</span>};</span><br><span class="line"><span class="function">Mat <span class="title">responses</span><span class="params">(<span class="number">7</span>,<span class="number">1</span>,CV_32F,fresponses)</span></span>;</span><br><span class="line"><span class="keyword">float</span> priors[]={<span class="number">1</span>,<span class="number">1</span>,<span class="number">1</span>};</span><br><span class="line">CvDTree *tree;</span><br><span class="line"><span class="function">CvDTreeParams <span class="title">params</span><span class="params">( <span class="number">8</span>, <span class="comment">// max depth</span></span></span></span><br><span class="line"><span class="function"><span class="params"><span class="number">1</span>, <span class="comment">// min sample count</span></span></span></span><br><span class="line"><span class="function"><span class="params"><span class="number">0</span>, <span class="comment">// regression accuracy: N/A 
here</span></span></span></span><br><span class="line"><span class="function"><span class="params"><span class="literal">true</span>, <span class="comment">// compute surrogate split, as we have missing data</span></span></span></span><br><span class="line"><span class="function"><span class="params"><span class="number">15</span>, <span class="comment">// max number of categories (use sub-optimal algorithm for larger numbers)</span></span></span></span><br><span class="line"><span class="function"><span class="params"><span class="number">0</span>, <span class="comment">// the number of cross-validation folds</span></span></span></span><br><span class="line"><span class="function"><span class="params"><span class="literal">true</span>, <span class="comment">// use 1SE rule => smaller tree</span></span></span></span><br><span class="line"><span class="function"><span class="params"><span class="literal">true</span>, <span class="comment">// throw away the pruned tree branches</span></span></span></span><br><span class="line"><span class="function"><span class="params">priors <span class="comment">// the array of priors, the bigger p_weight, the more attention</span></span></span></span><br><span class="line"><span class="function"><span class="params">)</span></span>;</span><br><span class="line">tree = <span class="keyword">new</span> CvDTree;</span><br><span class="line">tree->train (data,CV_ROW_SAMPLE,responses,Mat(),</span><br><span class="line">Mat(),Mat(),Mat(),</span><br><span class="line">params);</span><br><span class="line"><span class="comment">//try predict</span></span><br><span class="line"><span class="keyword">float</span> sample[<span class="number">1</span>][<span class="number">3</span>] = {<span class="number">1</span>,<span class="number">1</span>,<span class="number">0</span>};</span><br><span class="line">Mat pred_sample = Mat(<span class="number">1</span>,<span class="number">3</span>,CV_32F,sample);</span><br><span class="line"><span 
class="keyword">double</span> prediction = tree->predict (pred_sample,Mat())->value;</span><br><span class="line"><span class="keyword">if</span>(prediction == <span class="number">0</span>)</span><br><span class="line"><span class="built_in">cout</span> << <span class="string">"Ming will go to work by bike!\n"</span><< <span class="built_in">endl</span>;</span><br><span class="line"><span class="keyword">else</span></span><br><span class="line"><span class="built_in">cout</span> << <span class="string">"Ming will go to work by taxi!\n"</span><< <span class="built_in">endl</span>;</span><br><span class="line">tree->save (<span class="string">"tree.xml"</span>,<span class="string">"test_tree"</span>);</span><br><span class="line"><span class="keyword">return</span> <span class="number">0</span>;</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>One point needs explaining: the scenario has to be encoded numerically, so every condition is represented as 0 or 1 when building the tree (weather: 0 = fair, 1 = bad; mood: 0 = good, 1 = bad; running late: 0 = no, 1 = yes; commute: 0 = bike, 1 = taxi). The encoded table is shown below:</p><div class="table-container"><table><thead><tr><th style="text-align:center">Weather</th><th style="text-align:center">Mood</th><th style="text-align:center">Running late?</th><th style="text-align:center">Commute</th></tr></thead><tbody><tr><td style="text-align:center">0</td><td style="text-align:center">0</td><td style="text-align:center">0</td><td style="text-align:center">0</td></tr><tr><td style="text-align:center">0</td><td style="text-align:center">0</td><td style="text-align:center">1</td><td style="text-align:center">1</td></tr><tr><td style="text-align:center">0</td><td style="text-align:center">1</td><td style="text-align:center">0</td><td style="text-align:center">0</td></tr><tr><td style="text-align:center">0</td><td style="text-align:center">1</td><td style="text-align:center">1</td><td style="text-align:center">1</td></tr><tr><td style="text-align:center">1</td><td style="text-align:center">0</td><td style="text-align:center">0</td><td style="text-align:center">1</td></tr><tr><td style="text-align:center">1</td><td style="text-align:center">0</td><td style="text-align:center">1</td><td style="text-align:center">1</td></tr><tr><td style="text-align:center">1</td><td style="text-align:center">1</td><td style="text-align:center">1</td><td style="text-align:center">1</td></tr></tbody></table></div><p>With this program you can see the tree's prediction of which means of transport Xiao Ming chooses when the weather is bad and his mood is poor but he still has plenty of time. The algorithm's answer is shown below.</p><p><img src="https://s1.ax1x.com/2018/10/24/is9nsg.jpg" alt="2-4.jpg"></p><p>This agrees with the result we derived by hand.</p>]]></content:encoded>
<comments>http://yoursite.com/2018/06/09/decision-tree/#disqus_thread</comments>
</item>
</channel>
</rss>