<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title>Focal Loss in RPN</title>
<link href="/2019/06/13/focal-loss-in-rpn/"/>
<url>/2019/06/13/focal-loss-in-rpn/</url>
<content type="html"><![CDATA[<h3 id="参考文献"><a href="#参考文献" class="headerlink" title="参考文献"></a>参考文献</h3><ol><li><a href="https://link.springer.com/chapter/10.1007/978-3-030-03335-4_32" target="_blank" rel="noopener">Focal Loss for Region Proposal Network</a></li><li><a href="https://github.com/unsky/focal-loss" target="_blank" rel="noopener">focal-loss</a></li></ol><a id="more"></a><p>参考文献[1]详细阐述了focal loss在faster rcnn中的应用实验。<br>参考文献[2]是focal loss在faster rcnn中的代码。</p>]]></content>
</entry>
<entry>
<title>Installing PyTorch offline on Windows 10</title>
<link href="/2019/05/27/windows10%E7%A6%BB%E7%BA%BF%E5%AE%89%E8%A3%85pytorch/"/>
<url>/2019/05/27/windows10%E7%A6%BB%E7%BA%BF%E5%AE%89%E8%A3%85pytorch/</url>
<content type="html"><![CDATA[<p>最近多机同步跑,本以为安装不了旧版本pytorch,之后柳暗花明,幸甚幸甚。。。</p><a id="more"></a><p>开发环境:</p><ol><li>windows 10</li><li>visual studio community 2015</li><li>anaconda 4 python 3.6</li><li>pycharm 2019 community</li></ol><p>简单步骤如下</p><p>打开pytorch的<a href="https://anaconda.org/pytorch/pytorch" target="_blank" rel="noopener">anaconda官方packages</a>即可下载指定版本的pytorch。</p><p>下载完成之后,cmd切换到下载文件夹后,命令:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">conda install --offline pytorch-1.0.1-py3.6_cuda100_cudnn7_1.tar.bz2</span><br></pre></td></tr></table></figure><p>之后自动安装pytorch,安装完成之后,再输入:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">pip install torchvision==0.2.2.post3</span><br></pre></td></tr></table></figure><p>即可自动安装torchvision。</p><p>最近网络不稳定,最好在科学上网的环境下下载,下载比较快。</p><p>之后就可以测试使用了,到此博主是安装成功并成功运行代码了。</p>]]></content>
<categories>
<category> Tech Stack </category>
</categories>
<tags>
<tag> pytorch </tag>
<tag> windows10 </tag>
</tags>
</entry>
<entry>
<title>Creating your own COCO dataset</title>
<link href="/2019/05/22/%E5%88%9B%E5%BB%BA%E8%87%AA%E5%B7%B1%E7%9A%84COCO%E6%95%B0%E6%8D%AE%E9%9B%86/"/>
<url>/2019/05/22/%E5%88%9B%E5%BB%BA%E8%87%AA%E5%B7%B1%E7%9A%84COCO%E6%95%B0%E6%8D%AE%E9%9B%86/</url>
<content type="html"><![CDATA[<div style="text-align:center"> <div class="github-card" data-user="waspinator" data-repo="pycococreator" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><h3 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>参考资料</h3><ol><li><a href="https://patrickwasp.com/create-your-own-coco-style-dataset/" target="_blank" rel="noopener">Create your own COCO-style datasets</a></li><li><a href="https://github.com/waspinator/pycococreator" target="_blank" rel="noopener">pycococreator</a></li></ol><a id="more"></a>]]></content>
<categories>
<category> Tech Stack </category>
</categories>
</entry>
<entry>
<title>Anchor-free papers</title>
<link href="/2019/05/21/anchor-free%E8%AE%BA%E6%96%87/"/>
<url>/2019/05/21/anchor-free%E8%AE%BA%E6%96%87/</url>
<content type="html"><![CDATA[<div style="text-align:center"> <div class="github-card" data-user="VCBE123" data-repo="AnchorFreeDetection" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><a id="more"></a>]]></content>
</entry>
<entry>
<title>Add or concat: how should CNN layers be fused?</title>
<link href="/2019/05/20/CNN%E9%87%8C%E7%9A%84layer%E5%88%B0%E5%BA%95%E7%94%A8add%E8%BF%98%E6%98%AFconcat/"/>
<url>/2019/05/20/CNN%E9%87%8C%E7%9A%84layer%E5%88%B0%E5%BA%95%E7%94%A8add%E8%BF%98%E6%98%AFconcat/</url>
<content type="html"><![CDATA[<p>层间连接似乎concat要比add更好用,就数学表达式来说,add是concat的一种特殊表示,densenet的结果比resnet要好,所以得出如此结论。</p><a id="more"></a><h3 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>参考资料</h3><ol><li><a href="https://stats.stackexchange.com/questions/361018/when-to-add-layers-and-when-to-concatenate-in-neural-networks" target="_blank" rel="noopener">When to “add” layers and when to “concatenate” in neural networks?</a></li><li><a href="https://github.com/facebook/fb.resnet.torch/issues/97" target="_blank" rel="noopener">What about concatenating the residual instead of adding?</a></li><li><a href="https://blog.csdn.net/u012193416/article/details/79479935" target="_blank" rel="noopener">神经网络中concatenate和add层的不同</a></li><li><a href="https://www.zhihu.com/question/306213462" target="_blank" rel="noopener">如何理解神经网络中通过add的方式融合特征?</a></li><li><a href="https://www.twblogs.net/a/5c841117bd9eee35cd69c20a" target="_blank" rel="noopener">深度特徵融合—理解add和concat、DenseNet之多層特徵融合</a></li></ol>]]></content>
<categories>
<category> Tech Stack </category>
</categories>
</entry>
<entry>
<title>Installing the COCO PythonAPI on Windows</title>
<link href="/2019/05/14/windowsws%E4%B8%8B%E5%AE%89%E8%A3%85coco%E7%9A%84PythonAPI/"/>
<url>/2019/05/14/windowsws%E4%B8%8B%E5%AE%89%E8%A3%85coco%E7%9A%84PythonAPI/</url>
<content type="html"><![CDATA[<p>步骤:</p><ol><li>git clone –recursive <a href="https://github.com/cocodataset/cocoapi" target="_blank" rel="noopener">https://github.com/cocodataset/cocoapi</a> cocoapi</li><li><p>打开PythonAPI中的setup.py,修改为(主要修改第12行)</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br></pre></td><td class="code"><pre><span class="line">from setuptools import setup, Extension</span><br><span class="line">import numpy as np</span><br><span class="line"></span><br><span class="line"># To compile and install locally run "python setup.py build_ext --inplace"</span><br><span class="line"># To install library to Python site-packages run "python setup.py build_ext install"</span><br><span class="line"></span><br><span class="line">ext_modules = [</span><br><span class="line"> Extension(</span><br><span class="line"> 'pycocotools._mask',</span><br><span class="line"> sources=['../common/maskApi.c', 'pycocotools/_mask.pyx'],</span><br><span class="line"> include_dirs = [np.get_include(), '../common'],</span><br><span class="line"> extra_compile_args=[]#['-Wno-cpp', '-Wno-unused-function', '-std=c99'],这一行</span><br><span class="line"> )</span><br><span class="line">]</span><br><span class="line"></span><br><span class="line">setup(</span><br><span class="line"> name='pycocotools',</span><br><span class="line"> packages=['pycocotools'],</span><br><span class="line"> package_dir = {'pycocotools': 'pycocotools'},</span><br><span class="line"> install_requires=[</span><br><span class="line"> 'setuptools>=18.0',</span><br><span class="line"> 'cython>=0.27.3',</span><br><span class="line"> 'matplotlib>=2.1.0'</span><br><span class="line"> ],</span><br><span class="line"> version='2.0',</span><br><span class="line"> ext_modules= ext_modules</span><br><span class="line">)</span><br></pre></td></tr></table></figure></li><li><p>运行</p> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">python setup.py build develop</span><br></pre></td></tr></table></figure></li></ol><a id="more"></a>]]></content>
<categories>
<category> Tech Stack </category>
</categories>
</entry>
<entry>
<title>Viewpoints from browsing</title>
<link href="/2019/05/07/%E6%B5%8F%E8%A7%88%E8%A7%82%E7%82%B9/"/>
<url>/2019/05/07/%E6%B5%8F%E8%A7%88%E8%A7%82%E7%82%B9/</url>
<content type="html"><![CDATA[<p>摘抄自:<a href="https://zhuanlan.zhihu.com/p/41825737" target="_blank" rel="noopener">CornerNet:目标检测算法新思路</a></p><p>个人观点:CornerNet创新来自于多人姿态估计的Bottom-Up思路,预测corner的heatmps,根据Embeddings vector对corner进行分组,其主干网络也来自于姿态估计的Hourglass Network。模型的源码在github已经公布,可以放心大胆的研究测试。</p><p>CV的很多任务之间是相通的,CVPR2018 best paper [8]也印证这一观点,在不同的子领域寻找相似性,迁移不同领域的算法,是CV行业一个趋势。</p><p>多人姿态估计的Hourglass Network算法也不断改进中,其实论文模型的推断速率受限于Hourglass Network的特征提取,有志青年也可以沿着这个思路取得更好的性能。</p><a id="more"></a>]]></content>
<categories>
<category> Tech Stack </category>
</categories>
</entry>
<entry>
<title>OpenCV learning tutorials</title>
<link href="/2019/04/30/%E6%9C%89%E5%85%B3opencv%E7%9A%84%E5%AD%A6%E4%B9%A0%E6%95%99%E7%A8%8B/"/>
<url>/2019/04/30/%E6%9C%89%E5%85%B3opencv%E7%9A%84%E5%AD%A6%E4%B9%A0%E6%95%99%E7%A8%8B/</url>
<content type="html"><![CDATA[<p><a href="https://www.learnopencv.com/" target="_blank" rel="noopener">LearnOpenCV</a></p><a id="more"></a>]]></content>
<categories>
<category> Tech Stack </category>
</categories>
<tags>
<tag> opencv </tag>
</tags>
</entry>
<entry>
<title>A libtorch version of DCGAN</title>
<link href="/2019/04/30/libtorch%E7%89%88%E6%9C%AC%E7%9A%84dcgan/"/>
<url>/2019/04/30/libtorch%E7%89%88%E6%9C%AC%E7%9A%84dcgan/</url>
<content type="html"><![CDATA[<p>贴一波代码,官方给的代码在windows环境下运行不了,这是我自己改写的代码,呕心沥血。。。。。。</p><a id="more"></a><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span 
class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span class="line">167</span><br><span class="line">168</span><br><span class="line">169</span><br><span class="line">170</span><br><span class="line">171</span><br><span class="line">172</span><br><span class="line">173</span><br><span class="line">174</span><br><span class="line">175</span><br><span class="line">176</span><br><span class="line">177</span><br><span class="line">178</span><br><span class="line">179</span><br><span class="line">180</span><br><span class="line">181</span><br><span class="line">182</span><br><span class="line">183</span><br><span class="line">184</span><br><span class="line">185</span><br><span class="line">186</span><br><span class="line">187</span><br><span class="line">188</span><br><span class="line">189</span><br><span class="line">190</span><br><span class="line">191</span><br><span class="line">192</span><br><span class="line">193</span><br><span class="line">194</span><br><span class="line">195</span><br><span class="line">196</span><br><span class="line">197</span><br><span class="line">198</span><br><span class="line">199</span><br><span class="line">200</span><br><span class="line">201</span><br><span class="line">202</span><br><span class="line">203</span><br><span class="line">204</span><br><span class="line">205</span><br><span class="line">206</span><br><span class="line">207</span><br><span class="line">208</span><br><span class="line">209</span><br><span class="line">210</span><br><span class="line">211</span><br><span class="line">212</span><br><span 
class="line">213</span><br><span class="line">214</span><br><span class="line">215</span><br><span class="line">216</span><br><span class="line">217</span><br><span class="line">218</span><br><span class="line">219</span><br><span class="line">220</span><br><span class="line">221</span><br><span class="line">222</span><br><span class="line">223</span><br></pre></td><td class="code"><pre><span class="line">#include <torch/torch.h></span><br><span class="line"></span><br><span class="line">#include <cmath></span><br><span class="line">#include <cstdio></span><br><span class="line">#include <iostream></span><br><span class="line"></span><br><span class="line">// The size of the noise vector fed to the generator.</span><br><span class="line">const int64_t kNoiseSize = 100;</span><br><span class="line"></span><br><span class="line">// The batch size for training.</span><br><span class="line">const int64_t kBatchSize = 64;</span><br><span class="line"></span><br><span class="line">// The number of epochs to train.</span><br><span class="line">const int64_t kNumberOfEpochs = 30;</span><br><span class="line"></span><br><span class="line">// Where to find the MNIST dataset.</span><br><span class="line">const char* kDataFolder = "./data";</span><br><span class="line"></span><br><span class="line">// After how many batches to create a new checkpoint periodically.</span><br><span class="line">const int64_t kCheckpointEvery = 200;</span><br><span class="line"></span><br><span class="line">// How many images to sample at every checkpoint.</span><br><span class="line">const int64_t kNumberOfSamplesPerCheckpoint = 10;</span><br><span class="line"></span><br><span class="line">// Set to `true` to restore models and optimizers from previously saved</span><br><span class="line">// checkpoints.</span><br><span class="line">const bool kRestoreFromCheckpoint = false;</span><br><span class="line"></span><br><span class="line">// After how many batches to log a new update with the loss value.</span><br><span class="line">const int64_t kLogInterval = 10;</span><br><span class="line"></span><br><span class="line">using namespace torch;</span><br><span class="line"></span><br><span class="line">struct GeneratorrImpl : nn::Module {</span><br><span class="line">GeneratorrImpl()</span><br><span class="line">: conv1(nn::Conv2dOptions(kNoiseSize, 256, 4)</span><br><span class="line">.with_bias(false)</span><br><span class="line">.transposed(true)),</span><br><span class="line">batch_norm1(256),</span><br><span class="line">conv2(nn::Conv2dOptions(256, 128, 3)</span><br><span class="line">.stride(2)</span><br><span class="line">.padding(1)</span><br><span class="line">.with_bias(false)</span><br><span class="line">.transposed(true)),</span><br><span class="line">batch_norm2(128),</span><br><span class="line">conv3(nn::Conv2dOptions(128, 64, 4)</span><br><span class="line">.stride(2)</span><br><span class="line">.padding(1)</span><br><span class="line">.with_bias(false)</span><br><span class="line">.transposed(true)),</span><br><span class="line">batch_norm3(64),</span><br><span class="line">conv4(nn::Conv2dOptions(64, 1, 4)</span><br><span class="line">.stride(2)</span><br><span class="line">.padding(1)</span><br><span class="line">.with_bias(false)</span><br><span class="line">.transposed(true)) {</span><br><span class="line">register_module("conv1", conv1);</span><br><span class="line">register_module("conv2", conv2);</span><br><span class="line">register_module("conv3", conv3);</span><br><span 
class="line">register_module("conv4", conv4);</span><br><span class="line"></span><br><span class="line">register_module("batch_norm1", batch_norm1);</span><br><span class="line">register_module("batch_norm2", batch_norm2);</span><br><span class="line">register_module("batch_norm3", batch_norm3);</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">torch::Tensor forward(torch::Tensor x) {</span><br><span class="line">x = torch::relu(batch_norm1(conv1(x)));</span><br><span class="line">x = torch::relu(batch_norm2(conv2(x)));</span><br><span class="line">x = torch::relu(batch_norm3(conv3(x)));</span><br><span class="line">x = torch::tanh(conv4(x));</span><br><span class="line">return x;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">nn::Conv2d conv1, conv2, conv3, conv4;</span><br><span class="line">nn::BatchNorm batch_norm1, batch_norm2, batch_norm3;</span><br><span class="line">};</span><br><span class="line">TORCH_MODULE(Generatorr);</span><br><span class="line"></span><br><span class="line">Generatorr generatorr;</span><br><span class="line"></span><br><span class="line">struct DiscriminatorImpl : nn::Module {</span><br><span class="line">DiscriminatorImpl()</span><br><span class="line">: conv1(nn::Conv2dOptions(1, 64, 4).stride(2).padding(1).with_bias(false)),</span><br><span class="line">leaky_relu1(nn::Functional(torch::leaky_relu, 0.2)),</span><br><span class="line">conv2(nn::Conv2dOptions(64, 128, 4).stride(2).padding(1).with_bias(false)),</span><br><span class="line">batch_norm2(128),</span><br><span class="line">leaky_relu2(nn::Functional(torch::leaky_relu, 0.2)),</span><br><span class="line">conv3(nn::Conv2dOptions(128, 256, 4).stride(2).padding(1).with_bias(false)),</span><br><span class="line">batch_norm3(256),</span><br><span class="line">leaky_relu3(nn::Functional(torch::leaky_relu, 0.2)),</span><br><span class="line">conv4(nn::Conv2dOptions(256, 1, 3).stride(1).padding(0).with_bias(false)) {</span><br><span class="line">register_module("conv1", conv1);</span><br><span class="line">register_module("conv2", conv2);</span><br><span class="line">register_module("conv3", conv3);</span><br><span class="line">register_module("conv4", conv4);</span><br><span class="line">register_module("leaky_relu1", leaky_relu1);</span><br><span class="line">register_module("leaky_relu2", leaky_relu2);</span><br><span class="line">register_module("leaky_relu3", leaky_relu3);</span><br><span class="line">register_module("batch_norm2", batch_norm2);</span><br><span class="line">register_module("batch_norm3", batch_norm3);</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">torch::Tensor forward(torch::Tensor x) {</span><br><span class="line">x = leaky_relu1(conv1(x));</span><br><span class="line">x = leaky_relu2(batch_norm2(conv2(x)));</span><br><span class="line">x = leaky_relu3(batch_norm3(conv3(x)));</span><br><span class="line">x = torch::sigmoid(conv4(x));</span><br><span class="line">return x;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">nn::Conv2d conv1, conv2, conv3, conv4;</span><br><span class="line">nn::Functional leaky_relu1, leaky_relu2, leaky_relu3;</span><br><span class="line">nn::BatchNorm batch_norm2, batch_norm3;</span><br><span class="line">};</span><br><span class="line">TORCH_MODULE(Discriminator);</span><br><span class="line">Discriminator discriminator;</span><br><span 
class="line"></span><br><span class="line">int main(int argc, const char* argv[]) {</span><br><span class="line">torch::manual_seed(1);</span><br><span class="line"></span><br><span class="line">// Create the device we pass around based on whether CUDA is available.</span><br><span class="line">torch::Device device(torch::kCPU);</span><br><span class="line">if (torch::cuda::is_available()) {</span><br><span class="line">std::cout << "CUDA is available! Training on GPU." << std::endl;</span><br><span class="line">device = torch::Device(torch::kCUDA);</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">generatorr->to(device);</span><br><span class="line">discriminator->to(device);</span><br><span class="line"></span><br><span class="line">// Assume the MNIST dataset is available under `kDataFolder`;</span><br><span class="line">auto dataset = torch::data::datasets::MNIST(kDataFolder)</span><br><span class="line">.map(torch::data::transforms::Normalize<>(0.5, 0.5))</span><br><span class="line">.map(torch::data::transforms::Stack<>());</span><br><span class="line">const int64_t batches_per_epoch =</span><br><span class="line">std::ceil(dataset.size().value() / static_cast<double>(kBatchSize));</span><br><span class="line"></span><br><span class="line">auto data_loader = torch::data::make_data_loader(</span><br><span class="line">std::move(dataset),</span><br><span class="line">torch::data::DataLoaderOptions().batch_size(kBatchSize).workers(2));</span><br><span class="line">torch::optim::Adam generator_optimizer(</span><br><span class="line">generatorr->parameters(), torch::optim::AdamOptions(2e-4).beta1(0.5));</span><br><span class="line">torch::optim::Adam discriminator_optimizer(</span><br><span class="line">discriminator->parameters(), torch::optim::AdamOptions(2e-4).beta1(0.5));</span><br><span class="line"></span><br><span class="line">if (kRestoreFromCheckpoint) {</span><br><span class="line">torch::load(generatorr, "generator-checkpoint.pt");</span><br><span class="line">torch::load(generator_optimizer, "generator-optimizer-checkpoint.pt");</span><br><span class="line">torch::load(discriminator, "discriminator-checkpoint.pt");</span><br><span class="line">torch::load(</span><br><span class="line">discriminator_optimizer, "discriminator-optimizer-checkpoint.pt");</span><br><span class="line">}</span><br><span class="line">int64_t checkpoint_counter = 1;</span><br><span class="line">for (int64_t epoch = 1; epoch <= kNumberOfEpochs; ++epoch) {</span><br><span class="line">int64_t batch_index = 0;</span><br><span class="line">for (torch::data::Example<>& batch : *data_loader) {</span><br><span class="line">// Train discriminator with real images.</span><br><span class="line">discriminator->zero_grad();</span><br><span class="line">torch::Tensor real_images = batch.data.to(device);</span><br><span class="line">torch::Tensor real_labels =</span><br><span class="line">torch::empty(batch.data.size(0), device).uniform_(0.8, 1.0);</span><br><span class="line">torch::Tensor real_output = discriminator->forward(real_images);</span><br><span class="line">torch::Tensor d_loss_real =</span><br><span class="line">torch::binary_cross_entropy(real_output, real_labels);</span><br><span class="line">d_loss_real.backward();</span><br><span class="line"></span><br><span class="line">// Train discriminator with fake images.</span><br><span class="line">torch::Tensor noise =</span><br><span class="line">torch::randn({ batch.data.size(0), kNoiseSize, 1, 1 }, 
device);</span><br><span class="line">torch::Tensor fake_images = generatorr->forward(noise);</span><br><span class="line">torch::Tensor fake_labels = torch::zeros(batch.data.size(0), device);</span><br><span class="line">torch::Tensor fake_output = discriminator->forward(fake_images.detach());</span><br><span class="line">//std::cout << fake_labels.sizes() << std::endl;</span><br><span class="line">//std::cout << fake_output.sizes() << std::endl;</span><br><span class="line">torch::Tensor d_loss_fake =</span><br><span class="line">torch::binary_cross_entropy(fake_output, fake_labels);</span><br><span class="line">d_loss_fake.backward();</span><br><span class="line"></span><br><span class="line">torch::Tensor d_loss = d_loss_real + d_loss_fake;</span><br><span class="line">discriminator_optimizer.step();</span><br><span class="line"></span><br><span class="line">// Train generator.</span><br><span class="line">generatorr->zero_grad();</span><br><span class="line">fake_labels.fill_(1);</span><br><span class="line">fake_output = discriminator->forward(fake_images);</span><br><span class="line">torch::Tensor g_loss =</span><br><span class="line">torch::binary_cross_entropy(fake_output, fake_labels);</span><br><span class="line">g_loss.backward();</span><br><span class="line">generator_optimizer.step();</span><br><span class="line">batch_index++;</span><br><span class="line">if (batch_index % kLogInterval == 0) {</span><br><span class="line">std::printf(</span><br><span class="line">"\r[%2ld/%2ld][%3ld/%3ld] D_loss: %.4f | G_loss: %.4f\n",</span><br><span class="line">epoch,</span><br><span class="line">kNumberOfEpochs,</span><br><span class="line">batch_index,</span><br><span class="line">batches_per_epoch,</span><br><span class="line">d_loss.item<float>(),</span><br><span class="line">g_loss.item<float>());</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">if (batch_index % kCheckpointEvery == 0) {</span><br><span class="line">// Checkpoint the model and optimizer state.</span><br><span class="line">torch::save(generatorr, "generator-checkpoint.pt");</span><br><span class="line">torch::save(generator_optimizer, "generator-optimizer-checkpoint.pt");</span><br><span class="line">torch::save(discriminator, "discriminator-checkpoint.pt");</span><br><span class="line">torch::save(</span><br><span class="line">discriminator_optimizer, "discriminator-optimizer-checkpoint.pt");</span><br><span class="line">// Sample the generator and save the images.</span><br><span class="line">torch::Tensor samples = generatorr->forward(torch::randn(</span><br><span class="line">{ kNumberOfSamplesPerCheckpoint, kNoiseSize, 1, 1 }, device));</span><br><span class="line">torch::save(</span><br><span class="line">(samples + 1.0) / 2.0,</span><br><span class="line">torch::str("dcgan-sample-", checkpoint_counter, ".pt"));</span><br><span class="line">std::cout << "\n-> checkpoint " << ++checkpoint_counter << '\n';</span><br><span class="line">}</span><br><span class="line">}</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">std::cout << "Training complete!" << std::endl;</span><br><span class="line">}</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> Tech Stack </category>
</categories>
<tags>
<tag> libtorch </tag>
<tag> dcgan </tag>
</tags>
</entry>
<entry>
<title>Mask R-CNN with PyTorch's C++ frontend</title>
<link href="/2019/04/30/pytorch%E7%9A%84C%E7%89%88%E6%9C%ACmaskrcnn/"/>
<url>/2019/04/30/pytorch%E7%9A%84C%E7%89%88%E6%9C%ACmaskrcnn/</url>
<content type="html"><![CDATA[<p>找了好长时间,终于找到了,苍天啊,大地啊。。。。。。<br><a href="https://www.datasciencecentral.com/profiles/blogs/machine-learning-with-c-mask-r-cnn-with-pytorch-c-frontend" target="_blank" rel="noopener">Machine Learning with C++ - Mask R-CNN with PyTorch C++ Frontend</a></p><p>Github: <a href="https://github.com/Kolkir/mlcpp/tree/master/mask_rcnn_pytorch" target="_blank" rel="noopener">mask_rcnn_pytorch</a></p><p>可以关注Kolkir的<a href="https://github.com/Kolkir" target="_blank" rel="noopener">GitHub</a>和<a href="https://www.datasciencecentral.com/profiles/blog/list?user=1hxl8pmf7m6b9" target="_blank" rel="noopener">博客Kyrylo Kolodiazhnyi’s Blog</a>。</p>]]></content>
<categories>
<category> Tech Stack </category>
</categories>
<tags>
<tag> pytorch </tag>
<tag> C </tag>
<tag> maskrcnn </tag>
</tags>
</entry>
<entry>
<title>Papers on occlusion</title>
<link href="/2019/04/29/%E6%9C%89%E5%85%B3%E9%81%AE%E6%8C%A1%E7%9A%84%E8%AE%BA%E6%96%87/"/>
<url>/2019/04/29/%E6%9C%89%E5%85%B3%E9%81%AE%E6%8C%A1%E7%9A%84%E8%AE%BA%E6%96%87/</url>
<content type="html"><![CDATA[<p><a href="https://arxiv.org/pdf/1805.00123.pdf" target="_blank" rel="noopener">CrowdHuman: A Benchmark for Detecting Human in a Crowd</a></p><p><a href="https://arxiv.org/pdf/1904.03629.pdf" target="_blank" rel="noopener">Adaptive NMS: Refining Pedestrian Detection in a Crowd</a></p><p><a href="https://arxiv.org/pdf/1711.07752.pdf" target="_blank" rel="noopener">Reploss Loss: Detecting Pedestrians in a Crowd</a></p><p>这个不算遮挡<br><a href="https://arxiv.org/pdf/1810.10220v2.pdf" target="_blank" rel="noopener">DSFD: Dual Shot Face Detector</a></p><a id="more"></a>]]></content>
</entry>
<entry>
<title>COCO challenge leaderboard</title>
<link href="/2019/04/29/COCO%E6%AF%94%E8%B5%9Bleaderboard/"/>
<url>/2019/04/29/COCO%E6%AF%94%E8%B5%9Bleaderboard/</url>
<content type="html"><![CDATA[<p><a href="http://cocodataset.org/#detection-leaderboard" target="_blank" rel="noopener">detection-leaderboard</a></p><a id="more"></a>]]></content>
<categories>
<category> Tech Stack </category>
</categories>
<tags>
<tag> COCO </tag>
</tags>
</entry>
<entry>
<title>colaboratory</title>
<link href="/2019/04/29/colaboratory/"/>
<url>/2019/04/29/colaboratory/</url>
<content type="html"><![CDATA[<p>薅资本主义羊毛,有人辟谣说不好用。。。</p><p><a href="https://colab.research.google.com/notebooks/welcome.ipynb" target="_blank" rel="noopener">colaboratory</a></p><a id="more"></a>]]></content>
</entry>
<entry>
<title>How to deploy PyTorch on Windows</title>
<link href="/2019/04/26/%E5%A6%82%E4%BD%95%E9%83%A8%E7%BD%B2pytorch%E5%88%B0windows/"/>
<url>/2019/04/26/%E5%A6%82%E4%BD%95%E9%83%A8%E7%BD%B2pytorch%E5%88%B0windows/</url>
<content type="html"><![CDATA[<h4 id="下载Libtorch"><a href="#下载Libtorch" class="headerlink" title="下载Libtorch"></a>下载Libtorch</h4><p><a href="https://pytorch.org/get-started/locally/" target="_blank" rel="noopener">libtorch网址</a></p><a id="more"></a><h4 id="在python端保存模型"><a href="#在python端保存模型" class="headerlink" title="在python端保存模型"></a>在python端保存模型</h4><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch</span><br><span class="line"><span class="keyword">import</span> torchvision</span><br><span class="line"></span><br><span class="line">model=torchvision.models.resnet18(pretrained=<span class="keyword">True</span>)</span><br><span class="line"></span><br><span class="line">example=torch.rand(<span class="number">1</span>,<span class="number">3</span>,<span class="number">224</span>,<span class="number">224</span>)</span><br><span class="line"></span><br><span class="line">model=model.eval()</span><br><span class="line"></span><br><span class="line">traced_script_module=torch.jit.trace(model,example)</span><br><span class="line"></span><br><span class="line">output=traced_script_module(torch.ones(<span class="number">1</span>,<span class="number">3</span>,<span class="number">224</span>,<span class="number">224</span>))</span><br><span class="line"></span><br><span class="line">traced_script_module.save(<span class="string">'model-trace.pt'</span>)</span><br><span class="line"></span><br><span class="line">print(output[<span class="number">0</span>,:<span class="number">5</span>])</span><br></pre></td></tr></table></figure><p>这样保存的模型既能在CPU端使用,也能在GPU端使用。</p><h4 id="在C-端使用模型"><a href="#在C-端使用模型" class="headerlink" title="在C++端使用模型"></a>在C++端使用模型</h4><figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><iostream></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><torch\script.h></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><memory></span></span></span><br><span class="line"><span class="keyword">using</span> <span 
class="keyword">namespace</span> <span class="built_in">std</span>;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">main</span><span class="params">()</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="built_in">cout</span> << <span class="string">"Hello Libtorch"</span> << <span class="built_in">endl</span>;</span><br><span class="line"></span><br><span class="line"><span class="built_in">shared_ptr</span><torch::jit::script::Module> <span class="keyword">module</span> = torch::jit::load(<span class="string">"D:/VSCode/PythonProject/model-trace.pt"</span>);</span><br><span class="line"><span class="keyword">module</span>->to(at::kCUDA);</span><br><span class="line">assert(<span class="keyword">module</span> != <span class="literal">nullptr</span>);</span><br><span class="line"><span class="built_in">cout</span> << <span class="string">"OK"</span> << <span class="built_in">endl</span>;</span><br><span class="line"></span><br><span class="line"><span class="built_in">std</span>::<span class="built_in">vector</span><torch::jit::IValue> inputs;</span><br><span class="line">inputs.push_back(torch::ones({ <span class="number">1</span>,<span class="number">3</span>,<span class="number">224</span>,<span class="number">224</span> }).to(at::kCUDA));</span><br><span class="line"><span class="keyword">auto</span> output = <span class="keyword">module</span>->forward(inputs).toTensor().to(at::kCPU);</span><br><span class="line"><span class="built_in">cout</span> << output.slice(<span class="number">1</span>, <span class="number">0</span>, <span class="number">5</span>) << <span class="built_in">endl</span>;</span><br><span class="line"><span class="built_in">cout</span> << output[<span class="number">0</span>][<span class="number">1</span>]<< <span class="built_in">endl</span>;</span><br><span class="line"><span class="keyword">return</span> <span class="number">0</span>;</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>此代码是在GPU端的调用</p><p>附加包含目录>>{libtorch}/include</p><p>附加库目录>>{libtorch}/lib</p><p>附加依赖项>>c10.lib;caffe2.lib;torch.lib;</p><p>讲dll文件拷贝到exe文件目录下>>c10.dll;caffe2.dll;caffe2_gpu.dll;cudnn64_7.dll;torch.dll</p><p>至此生成成功。</p><h4 id="下面是结合了opencv的部署示例。"><a href="#下面是结合了opencv的部署示例。" class="headerlink" title="下面是结合了opencv的部署示例。"></a>下面是结合了opencv的部署示例。</h4><p>代码py端:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span 
class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span 
class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br><span class="line">165</span><br><span class="line">166</span><br><span class="line">167</span><br><span class="line">168</span><br><span class="line">169</span><br><span class="line">170</span><br><span class="line">171</span><br><span class="line">172</span><br><span class="line">173</span><br><span class="line">174</span><br><span class="line">175</span><br><span class="line">176</span><br><span class="line">177</span><br><span class="line">178</span><br><span class="line">179</span><br><span class="line">180</span><br><span class="line">181</span><br><span class="line">182</span><br><span class="line">183</span><br><span class="line">184</span><br><span class="line">185</span><br><span class="line">186</span><br><span class="line">187</span><br></pre></td><td class="code"><pre><span class="line">from __future__ import print_function,division</span><br><span class="line">import torch</span><br><span class="line">import torch.nn as nn</span><br><span class="line">import torch.optim as optim</span><br><span class="line">from torch.optim import lr_scheduler</span><br><span class="line">import numpy as np</span><br><span class="line">import torchvision</span><br><span class="line">from torchvision import datasets,models,transforms</span><br><span class="line">import matplotlib.pyplot as plt</span><br><span class="line">import time</span><br><span class="line">import os</span><br><span class="line">import copy</span><br><span class="line"># 在使用matplotlib的过程中,常常会需要画很多图,但是好像并不能同时展示许多图。</span><br><span class="line"># 这是因为python可视化库matplotlib的显示模式默认为阻塞(block)模式。</span><br><span class="line"># 什么是阻塞模式那?我的理解就是在plt.show()之后,程序会暂停到那儿,并不会继续执行下去。</span><br><span class="line"># 如果需要继续执行程序,就要关闭图片。</span><br><span class="line"># 那如何展示动态图或多个窗口呢?</span><br><span class="line"># 这就要使用plt.ion()这个函数,使matplotlib的显示模式转换为交互(interactive)模式。</span><br><span class="line"># 即使在脚本中遇到plt.show(),代码还是会继续执行。</span><br><span class="line"></span><br><span class="line"># plt.ion()</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"># Just normalization for validation</span><br><span class="line">data_transforms = {</span><br><span class="line"> 'train': transforms.Compose([</span><br><span class="line"> transforms.RandomResizedCrop(224),</span><br><span class="line"> transforms.RandomHorizontalFlip(),</span><br><span class="line"> transforms.ToTensor(),</span><br><span class="line"> transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])</span><br><span class="line"> ]),</span><br><span class="line"> 'val': transforms.Compose([</span><br><span class="line"> transforms.Resize(256),</span><br><span class="line"> transforms.CenterCrop(224),</span><br><span class="line"> transforms.ToTensor(),</span><br><span 
class="line"> transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])</span><br><span class="line"> ]),</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">data_dir = 'hymenoptera_data'</span><br><span class="line">image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),</span><br><span class="line"> data_transforms[x])</span><br><span class="line"> for x in ['train', 'val']}</span><br><span class="line">dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,</span><br><span class="line"> shuffle=True)</span><br><span class="line"> for x in ['train', 'val']}</span><br><span class="line">dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}</span><br><span class="line">class_names = image_datasets['train'].classes</span><br><span class="line"></span><br><span class="line">device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")</span><br><span class="line"></span><br><span class="line">def imshow(inp, title=None):</span><br><span class="line"> """Imshow for Tensor."""</span><br><span class="line"> inp = inp.numpy().transpose((1, 2, 0))</span><br><span class="line"> mean = np.array([0.485, 0.456, 0.406])</span><br><span class="line"> std = np.array([0.229, 0.224, 0.225])</span><br><span class="line"> inp = std * inp + mean</span><br><span class="line"> inp = np.clip(inp, 0, 1)</span><br><span class="line"> plt.imshow(inp)</span><br><span class="line"> if title is not None:</span><br><span class="line"> plt.title(title)</span><br><span class="line"> plt.pause(5) # pause a bit so that plots are updated</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"># Get a batch of training data</span><br><span class="line"># inputs, classes = next(iter(dataloaders['train']))</span><br><span class="line"></span><br><span class="line"># Make a grid from batch</span><br><span class="line"># out = torchvision.utils.make_grid(inputs)</span><br><span class="line"></span><br><span class="line"># imshow(out, title=[class_names[x] for x in classes])</span><br><span class="line"></span><br><span class="line">def train_model(model, criterion, optimizer, scheduler, num_epochs=25):</span><br><span class="line"> since = time.time()</span><br><span class="line"></span><br><span class="line"> best_model_wts = copy.deepcopy(model.state_dict())</span><br><span class="line"> best_acc = 0.0</span><br><span class="line"></span><br><span class="line"> for epoch in range(num_epochs):</span><br><span class="line"> print('Epoch {}/{}'.format(epoch, num_epochs - 1))</span><br><span class="line"> print('-' * 10)</span><br><span class="line"></span><br><span class="line"> # Each epoch has a training and validation phase</span><br><span class="line"> for phase in ['train', 'val']:</span><br><span class="line"> if phase == 'train':</span><br><span class="line"> scheduler.step()</span><br><span class="line"> model.train() # Set model to training mode</span><br><span class="line"> else:</span><br><span class="line"> model.eval() # Set model to evaluate mode</span><br><span class="line"></span><br><span class="line"> running_loss = 0.0</span><br><span class="line"> running_corrects = 0</span><br><span class="line"></span><br><span class="line"> # Iterate over data.</span><br><span class="line"> for inputs, labels in dataloaders[phase]:</span><br><span class="line"> inputs = inputs.to(device)</span><br><span class="line"> labels = 
labels.to(device)</span><br><span class="line"></span><br><span class="line"> # zero the parameter gradients</span><br><span class="line"> optimizer.zero_grad()</span><br><span class="line"></span><br><span class="line"> # forward</span><br><span class="line"> # track history if only in train</span><br><span class="line"> with torch.set_grad_enabled(phase == 'train'):</span><br><span class="line"> outputs = model(inputs)</span><br><span class="line"> _, preds = torch.max(outputs, 1)</span><br><span class="line"> loss = criterion(outputs, labels)</span><br><span class="line"></span><br><span class="line"> # backward + optimize only if in training phase</span><br><span class="line"> if phase == 'train':</span><br><span class="line"> loss.backward()</span><br><span class="line"> optimizer.step()</span><br><span class="line"></span><br><span class="line"> # statistics</span><br><span class="line"> running_loss += loss.item() * inputs.size(0)</span><br><span class="line"> running_corrects += torch.sum(preds == labels.data)</span><br><span class="line"></span><br><span class="line"> epoch_loss = running_loss / dataset_sizes[phase]</span><br><span class="line"> epoch_acc = running_corrects.double() / dataset_sizes[phase]</span><br><span class="line"></span><br><span class="line"> print('{} Loss: {:.4f} Acc: {:.4f}'.format(</span><br><span class="line"> phase, epoch_loss, epoch_acc))</span><br><span class="line"></span><br><span class="line"> # deep copy the model</span><br><span class="line"> if phase == 'val' and epoch_acc > best_acc:</span><br><span class="line"> best_acc = epoch_acc</span><br><span class="line"> best_model_wts = copy.deepcopy(model.state_dict())</span><br><span class="line"></span><br><span class="line"> print()</span><br><span class="line"></span><br><span class="line"> time_elapsed = time.time() - since</span><br><span class="line"> print('Training complete in {:.0f}m {:.0f}s'.format(</span><br><span class="line"> time_elapsed // 60, time_elapsed % 60))</span><br><span class="line"> print('Best val Acc: {:4f}'.format(best_acc))</span><br><span class="line"></span><br><span class="line"> # load best model weights</span><br><span class="line"> model.load_state_dict(best_model_wts)</span><br><span class="line"> return model</span><br><span class="line"></span><br><span class="line">def visualize_model(model, num_images=6):</span><br><span class="line"> was_training = model.training</span><br><span class="line"> model.eval()</span><br><span class="line"> images_so_far = 0</span><br><span class="line"> fig = plt.figure()</span><br><span class="line"></span><br><span class="line"> with torch.no_grad():</span><br><span class="line"> for i, (inputs, labels) in enumerate(dataloaders['val']):</span><br><span class="line"> inputs = inputs.to(device)</span><br><span class="line"> labels = labels.to(device)</span><br><span class="line"></span><br><span class="line"> outputs = model(inputs)</span><br><span class="line"> _, preds = torch.max(outputs, 1)</span><br><span class="line"></span><br><span class="line"> for j in range(inputs.size()[0]):</span><br><span class="line"> images_so_far += 1</span><br><span class="line"> ax = plt.subplot(num_images//2, 2, images_so_far)</span><br><span class="line"> ax.axis('off')</span><br><span class="line"> ax.set_title('predicted: {}'.format(class_names[preds[j]]))</span><br><span class="line"> imshow(inputs.cpu().data[j])</span><br><span class="line"></span><br><span class="line"> if images_so_far == num_images:</span><br><span class="line"> 
model.train(mode=was_training)</span><br><span class="line"> return</span><br><span class="line"> model.train(mode=was_training)</span><br><span class="line"></span><br><span class="line">model_ft = models.resnet18(pretrained=True)</span><br><span class="line">num_ftrs = model_ft.fc.in_features</span><br><span class="line">model_ft.fc = nn.Linear(num_ftrs, 2)</span><br><span class="line"></span><br><span class="line">model_ft = model_ft.to(device)</span><br><span class="line"></span><br><span class="line">criterion = nn.CrossEntropyLoss()</span><br><span class="line"></span><br><span class="line"># Observe that all parameters are being optimized</span><br><span class="line">optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)</span><br><span class="line"></span><br><span class="line"># Decay LR by a factor of 0.1 every 7 epochs</span><br><span class="line">exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)</span><br><span class="line"></span><br><span class="line">model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,num_epochs=5)</span><br><span class="line"></span><br><span class="line">example=torch.rand(1,3,224,224)</span><br><span class="line"></span><br><span class="line">traced_script_module=torch.jit.trace(model_ft.to('cpu'),example)</span><br><span class="line"></span><br><span class="line">traced_script_module.save('model-beesandants-trace.pt')</span><br></pre></td></tr></table></figure><p>代码c++端:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span 
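<p>Before moving on to C++, it is worth checking on the Python side that the traced module reproduces the eager model's outputs. A minimal sanity check could look like this (this snippet is not part of the original post; it assumes <code>model_ft</code> is still in scope, on the CPU and in eval mode after tracing):</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">import torch</span><br><span class="line"></span><br><span class="line"># reload the traced module and compare it against the eager model</span><br><span class="line">loaded = torch.jit.load('model-beesandants-trace.pt')</span><br><span class="line">x = torch.rand(1, 3, 224, 224)</span><br><span class="line">with torch.no_grad():</span><br><span class="line">    print(torch.allclose(loaded(x), model_ft(x), atol=1e-5))  # expect True</span><br></pre></td></tr></table></figure>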
class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br></pre></td><td class="code"><pre><span class="line">#include <iostream></span><br><span class="line">#include <torch\script.h></span><br><span class="line">#include <memory></span><br><span class="line">#include <opencv2\opencv.hpp></span><br><span class="line">using namespace std;</span><br><span class="line">using namespace cv;</span><br><span class="line"></span><br><span class="line">// resize并保持图像比例不变</span><br><span class="line">cv::Mat resize_with_ratio(cv::Mat& img)</span><br><span class="line">{</span><br><span class="line">cv::Mat temImage;</span><br><span class="line">int w = img.cols;</span><br><span class="line">int h = img.rows;</span><br><span class="line"></span><br><span class="line">float t = 1.;</span><br><span class="line">float len = t * std::max(w, h);</span><br><span class="line">int dst_w = 224, dst_h = 224;</span><br><span class="line">cv::Mat image = cv::Mat(cv::Size(dst_w, dst_h), CV_8UC3, cv::Scalar(128, 128, 128));</span><br><span class="line">cv::Mat imageROI;</span><br><span class="line">if (len == w)</span><br><span class="line">{</span><br><span class="line">float ratio = (float)h / (float)w;</span><br><span class="line">cv::resize(img, temImage, cv::Size(224, 224 * ratio), 0, 0, cv::INTER_LINEAR);</span><br><span class="line">imageROI = image(cv::Rect(0, ((dst_h - 224 * ratio) / 2), temImage.cols, temImage.rows));</span><br><span class="line">temImage.copyTo(imageROI);</span><br><span class="line">}</span><br><span class="line">else</span><br><span class="line">{</span><br><span class="line">float ratio = (float)w / (float)h;</span><br><span class="line">cv::resize(img, temImage, cv::Size(224 * ratio, 224), 0, 0, cv::INTER_LINEAR);</span><br><span class="line">imageROI = image(cv::Rect(((dst_w - 224 * ratio) / 2), 0, temImage.cols, temImage.rows));</span><br><span class="line">temImage.copyTo(imageROI);</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">return image;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">int main()</span><br><span class="line">{</span><br><span class="line">std::cout << "Hello Libtorch" << std::endl;</span><br><span class="line"></span><br><span class="line">cv::Mat image=cv::imread("D:/VSCode/PythonProject/hymenoptera_data/val/bees/26589803_5ba7000313.jpg");</span><br><span class="line">//cv::Mat image = cv::imread("D:/VSCode/PythonProject/hymenoptera_data/val/ants/800px-Meat_eater_ant_qeen_excavating_hole.jpg");</span><br><span class="line">image = resize_with_ratio(image);</span><br><span class="line">//cv::imshow("resized image", image);</span><br><span class="line">cv::Mat input;</span><br><span class="line">cv::cvtColor(image, input, cv::COLOR_BGR2RGB);</span><br><span class="line"></span><br><span class="line">// 下方的代码即将图像转化为Tensor,随后导入模型进行预测</span><br><span class="line">torch::Tensor tensor_image = torch::from_blob(input.data, { 1,input.rows, input.cols,3 }, torch::kByte);</span><br><span class="line">tensor_image = tensor_image.permute({ 0,3,1,2 });</span><br><span class="line">tensor_image = tensor_image.toType(torch::kFloat);</span><br><span class="line">tensor_image = tensor_image.div(255);</span><br><span class="line">tensor_image = tensor_image.to(torch::kCUDA);</span><br><span 
class="line"></span><br><span class="line">shared_ptr<torch::jit::script::Module> module = torch::jit::load("D:/VSCode/PythonProject/model-beesandants-trace.pt");</span><br><span class="line">module->to(at::kCUDA);</span><br><span class="line">assert(module != nullptr);</span><br><span class="line">std::cout << "OK" << std::endl;</span><br><span class="line"></span><br><span class="line">std::vector<torch::jit::IValue> inputs;</span><br><span class="line">auto result = module->forward({ tensor_image }).toTensor().to(at::kCPU);</span><br><span class="line">std::cout << result << std::endl;</span><br><span class="line">auto max_result = result.max(1, true);</span><br><span class="line">auto max_index = std::get<1>(max_result).item<float>();</span><br><span class="line">std::cout << max_index << std::endl;</span><br><span class="line">return 0;</span><br><span class="line">}</span><br></pre></td></tr></table></figure><blockquote><p>简单部署如上,还需要测试其他部署情况,比如识别之类的大的模型。</p></blockquote><h3 id="重要资料"><a href="#重要资料" class="headerlink" title="重要资料"></a>重要资料</h3><p>如何将maskrcnn_benchmark转换为pt供C++调用</p><p>应该是他讲代码重写了,然后还需要很多调试,暂时不知道能否使用。。。。。。</p><p><a href="https://github.com/facebookresearch/maskrcnn-benchmark/issues/617" target="_blank" rel="noopener">maskrcnn-benchmark/issues/617</a></p><p><a href="https://github.com/t-vi/maskrcnn-benchmark/blob/scripting/demo/cpp/traced_model.cpp" target="_blank" rel="noopener">scripting/demo/cpp/traced_model.cpp)</a></p><p><a href="https://github.com/t-vi/maskrcnn-benchmark/blob/scripting/demo/trace_model.py" target="_blank" rel="noopener">scripting/demo/trace_model.py</a></p><h4 id="参考文档"><a href="#参考文档" class="headerlink" title="参考文档"></a>参考文档</h4><ol><li><a href="https://www.cnblogs.com/cheungxiongwei/p/10689483.html" target="_blank" rel="noopener">如何在 windows 配置 libtorch c++ 前端库?</a></li><li><a href="https://oldpan.me/archives/pytorch-windows-libtorch" target="_blank" rel="noopener">Pytorch的C++端(libtorch)在Windows中的使用</a></li><li><a href="https://oldpan.me/archives/pytorch-c-libtorch-inference" target="_blank" rel="noopener">利用Pytorch的C++前端(libtorch)读取预训练权重并进行预测</a></li></ol>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> c++ </tag>
<tag> pytorch </tag>
</tags>
</entry>
<entry>
<title>git克隆分支</title>
<link href="/2019/04/26/git%E5%85%8B%E9%9A%86%E5%88%86%E6%94%AF/"/>
<url>/2019/04/26/git%E5%85%8B%E9%9A%86%E5%88%86%E6%94%AF/</url>
<content type="html"><![CDATA[<p>git clone –recursive -b 分支 master.git</p><a id="more"></a>]]></content>
</entry>
<entry>
<title>XMAN流程</title>
<link href="/2019/04/23/XMAN%E6%B5%81%E7%A8%8B/"/>
<url>/2019/04/23/XMAN%E6%B5%81%E7%A8%8B/</url>
<content type="html"><![CDATA[<p>在用工具标注好之后</p><p>D:\science\data\xrayBag\create_VOC_with_all.m<br>用上面的代码,生成pascal_voc格式。</p><p>D:\repo\downloads_repo\cocoapi\MatlabAPI</p><p>用上面文件夹下的代码,将pascal_voc格式转换成coco格式。(参考博客里的将pascal_voc转换成coco格式文章)。</p><p>C:\Users\colin\Desktop\Xray20181001\nanjingzhan_yolov3.m</p><p>参考上面的代码,将pascal_voc格式转换成yolo格式。</p><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> xman </tag>
</tags>
</entry>
<entry>
<title>MobaXterm</title>
<link href="/2019/04/19/MobaXterm/"/>
<url>/2019/04/19/MobaXterm/</url>
<content type="html"><![CDATA[<p>工具推荐:</p><p><a href="https://mobaxterm.mobatek.net/" target="_blank" rel="noopener">MobaXterm</a></p><a id="more"></a>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
<tags>
<tag> MobaXterm </tag>
</tags>
</entry>
<entry>
<title>重要的代码Repository集合</title>
<link href="/2019/04/18/%E9%87%8D%E8%A6%81%E7%9A%84%E4%BB%A3%E7%A0%81Repo%E9%9B%86%E5%90%88/"/>
<url>/2019/04/18/%E9%87%8D%E8%A6%81%E7%9A%84%E4%BB%A3%E7%A0%81Repo%E9%9B%86%E5%90%88/</url>
<content type="html"><![CDATA[<p>发现好的代码收藏到这里,不断更新。。。。。。</p><a id="more"></a><p><a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8099548" target="_blank" rel="noopener">Deep level sets for salient object detection</a></p><p><a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8658668" target="_blank" rel="noopener">CNN-Based Semantic Segmentation Using Level Set Loss</a></p><p><a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8259369" target="_blank" rel="noopener">Reformulating Level Sets as Deep Recurrent Neural Network Approach to Semantic Segmentation</a></p><p>深度学习结合level set的论文</p><div style="text-align:center"> <div class="github-card" data-user="VisionLearningGroup" data-repo="DA_Detection" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>有意思的项目A Pytorch Implementation of Strong-Weak Distribution Alignment for Adaptive Object Detection (CVPR 2019)</p><div style="text-align:center"> <div class="github-card" data-user="wuhuikai" data-repo="SparseMask" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>SparseMask: Differentiable Connectivity Learning for Dense Image Prediction/pytorch=1.0</p><div style="text-align:center"> <div class="github-card" data-user="kobiso" data-repo="Computer-Vision-Leaderboard" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>计算机视觉挑战排行榜(Leaderboard)集锦</p><p><a href="https://learngit.selfhostedserver.com/" target="_blank" rel="noopener">笨办法学 Git</a></p><p>本书强调通过实践来掌握 Git 的基本用法,其中包含 51 个动手实验。这些实验经过精心设计,篇幅皆十分短小,只需数分钟时间便可完成。对于想要快速学习 Git 的朋友而言,这是一份难得的指南。</p><div style="text-align:center"> <div class="github-card" data-user="AruniRC" data-repo="detectron-self-train" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>PyTorch-Detectron for domain adaptation by self-training on hard examples</p><div style="text-align:center"> <div class="github-card" data-user="qubvel" data-repo="segmentation_models.pytorch" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>PyTorch实现的图像分割与预训练模型</p><div style="text-align:center"> <div class="github-card" data-user="Duankaiwen" data-repo="CenterNet" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>CenterNet: Keypoint Triplets for Object Detection</p><div style="text-align:center"> <div class="github-card" data-user="eg4000" data-repo="SKU110K_CVPR19" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>重要<a href="https://github.com/eg4000/SKU110K_CVPR19" target="_blank" rel="noopener">Precise Detection in Densely Packed Scenes</a></p><div style="text-align:center"> <div class="github-card" data-user="zllrunning" data-repo="SiameseX.PyTorch" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>跟踪 A simplified PyTorch 
implementation of Siamese networks for tracking</p><div style="text-align:center"> <div class="github-card" data-user="BIGBALLON" data-repo="CIFAR-ZOO" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>面向CIFAR的CNN模型文献/PyTorch实现集锦</p><div style="text-align:center"> <div class="github-card" data-user="Vipermdl" data-repo="OCR_detection_IC15" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>基于RRPN,参考后面的RRPN_pytorch</p><div style="text-align:center"> <div class="github-card" data-user="Adamdad" data-repo="keras-YOLOv3-mobilenet" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>keras-yolo3-Mobilenet</p><div style="text-align:center"> <div class="github-card" data-user="kakaobrain" data-repo="fast-autoaugment" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>Fast AutoAugment</p><div style="text-align:center"> <div class="github-card" data-user="phodal" data-repo="github" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>GitHub 漫游指南</p><div style="text-align:center"> <div class="github-card" data-user="yossigandelsman" data-repo="DoubleDIP" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>Double-DIP: Unsupervised Image Decomposition via Coupled Deep-Image-Priors</p><div style="text-align:center"> <div class="github-card" data-user="mjq11302010044" data-repo="RRPN_pytorch" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>Arbitrary-Oriented Scene Text Detection via Rotation Proposals</p><div style="text-align:center"> <div class="github-card" data-user="facebookresearch" data-repo="maskrcnn-benchmark" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>根据这边博客<a href="https://www.jianshu.com/p/e9680d0bfa5c" target="_blank" rel="noopener">win10 安装maskrcnn-benchmark 教程</a>在windows下安装成功。</p><div style="text-align:center"> <div class="github-card" data-user="tensorflow" data-repo="models" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>谷歌官方的高质量代码。</p><div style="text-align:center"> <div class="github-card" data-user="vinta" data-repo="awesome-python" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>如名字,高质量python。</p><div style="text-align:center"> <div class="github-card" data-user="chinakook" data-repo="Awesome-MXNet" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>有关mxnet的一些代码</p><div style="text-align:center"> <div class="github-card" data-user="selfteaching" 
data-repo="the-craft-of-selfteaching" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>李笑来</p><div style="text-align:center"> <div class="github-card" data-user="fo40225" data-repo="tensorflow-windows-wheel" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>tensorflow在windows下的一些轮子</p><div style="text-align:center"> <div class="github-card" data-user="matterport" data-repo="Mask_RCNN" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><p>最经典的mask_rcnn的代码</p><div style="text-align:center"> <div class="github-card" data-user="bharathgs" data-repo="Awesome-pytorch-list" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> git </tag>
<tag> repository </tag>
</tags>
</entry>
<entry>
<title>英文学术论文写作好书推荐</title>
<link href="/2019/03/27/%E8%8B%B1%E6%96%87%E5%AD%A6%E6%9C%AF%E8%AE%BA%E6%96%87%E5%86%99%E4%BD%9C%E5%A5%BD%E4%B9%A6%E6%8E%A8%E8%8D%90/"/>
<url>/2019/03/27/%E8%8B%B1%E6%96%87%E5%AD%A6%E6%9C%AF%E8%AE%BA%E6%96%87%E5%86%99%E4%BD%9C%E5%A5%BD%E4%B9%A6%E6%8E%A8%E8%8D%90/</url>
<content type="html"><![CDATA[<p>知乎问答:<a href="https://www.zhihu.com/question/35071142" target="_blank" rel="noopener">英文学术论文写作,有什么好书可以推荐?</a></p><p>To be continued……</p><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> 学术写作 </tag>
</tags>
</entry>
<entry>
<title>新电脑装机流程</title>
<link href="/2019/03/27/%E6%96%B0%E7%94%B5%E8%84%91%E8%A3%85%E6%9C%BA%E6%B5%81%E7%A8%8B/"/>
<url>/2019/03/27/%E6%96%B0%E7%94%B5%E8%84%91%E8%A3%85%E6%9C%BA%E6%B5%81%E7%A8%8B/</url>
<content type="html"><![CDATA[<p>待有时间时慢慢更新……</p><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
</entry>
<entry>
<title>写论文发现的那些神网站</title>
<link href="/2019/03/27/%E5%86%99%E8%AE%BA%E6%96%87%E5%8F%91%E7%8E%B0%E7%9A%84%E9%82%A3%E4%BA%9B%E7%A5%9E%E7%BD%91%E7%AB%99/"/>
<url>/2019/03/27/%E5%86%99%E8%AE%BA%E6%96%87%E5%8F%91%E7%8E%B0%E7%9A%84%E9%82%A3%E4%BA%9B%E7%A5%9E%E7%BD%91%E7%AB%99/</url>
<content type="html"><![CDATA[<p>知乎问答:<a href="https://www.zhihu.com/question/35931336" target="_blank" rel="noopener">你写论文时发现了哪些神网站?</a></p><p>To be continued……<br><a id="more"></a></p>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> 学术写作 </tag>
</tags>
</entry>
<entry>
<title>resnet论文结构截图</title>
<link href="/2019/03/26/resnet%E8%AE%BA%E6%96%87%E7%BB%93%E6%9E%84%E6%88%AA%E5%9B%BE/"/>
<url>/2019/03/26/resnet%E8%AE%BA%E6%96%87%E7%BB%93%E6%9E%84%E6%88%AA%E5%9B%BE/</url>
<content type="html"><![CDATA[<p><img src="/images/resnet.png" alt=""></p><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> resnet </tag>
</tags>
</entry>
<entry>
<title>awesome开源数据集</title>
<link href="/2019/03/24/awesome%E5%BC%80%E6%BA%90%E6%95%B0%E6%8D%AE%E9%9B%86/"/>
<url>/2019/03/24/awesome%E5%BC%80%E6%BA%90%E6%95%B0%E6%8D%AE%E9%9B%86/</url>
<content type="html"><![CDATA[<div style="text-align:center"> <div class="github-card" data-user="awesomedata" data-repo="awesome-public-datasets" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
</entry>
<entry>
<title>提问的智慧</title>
<link href="/2019/03/24/%E6%8F%90%E9%97%AE%E7%9A%84%E6%99%BA%E6%85%A7/"/>
<url>/2019/03/24/%E6%8F%90%E9%97%AE%E7%9A%84%E6%99%BA%E6%85%A7/</url>
<content type="html"><![CDATA[<div style="text-align:center"> <div class="github-card" data-user="ryanhanwu" data-repo="How-To-Ask-Questions-The-Smart-Way" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
</entry>
<entry>
<title>学术写作参考网站</title>
<link href="/2019/03/24/%E5%AD%A6%E6%9C%AF%E5%86%99%E4%BD%9C%E5%8F%82%E8%80%83%E7%BD%91%E7%AB%99/"/>
<url>/2019/03/24/%E5%AD%A6%E6%9C%AF%E5%86%99%E4%BD%9C%E5%8F%82%E8%80%83%E7%BD%91%E7%AB%99/</url>
<content type="html"><![CDATA[<ol><li><a href="https://www.english-corpora.org/coca/" target="_blank" rel="noopener">coca</a></li><li><a href="http://www.phrasebank.manchester.ac.uk/" target="_blank" rel="noopener">Academic Phrasebank</a></li></ol><a id="more"></a><p>Academic Phrasebank</p><div class="row"> <embed src="/images/Academic_Phrasebank.pdf" width="100%" height="550" type="application/pdf"></div>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> 学术写作 </tag>
</tags>
</entry>
<entry>
<title>sota学术收集</title>
<link href="/2019/03/24/sota%E5%AD%A6%E6%9C%AF%E6%94%B6%E9%9B%86/"/>
<url>/2019/03/24/sota%E5%AD%A6%E6%9C%AF%E6%94%B6%E9%9B%86/</url>
<content type="html"><![CDATA[<p><a href="https://paperswithcode.com/sota" target="_blank" rel="noopener">Browse state-of-the-art</a></p><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
</entry>
<entry>
<title>基于yolo_v3的物体跟踪</title>
<link href="/2019/03/24/%E5%9F%BA%E4%BA%8Eyolo-v3%E7%9A%84%E7%89%A9%E4%BD%93%E8%B7%9F%E8%B8%AA/"/>
<url>/2019/03/24/%E5%9F%BA%E4%BA%8Eyolo-v3%E7%9A%84%E7%89%A9%E4%BD%93%E8%B7%9F%E8%B8%AA/</url>
<content type="html"><![CDATA[<div style="text-align:center"> <div class="github-card" data-user="Qidian213" data-repo="deep_sort_yolov3" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><div style="text-align:center"> <div class="github-card" data-user="ZQPei" data-repo="deep_sort_pytorch" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><a id="more"></a>]]></content>
</entry>
<entry>
<title>yolo_v3的pytorch版本实现</title>
<link href="/2019/03/24/yolo-v3%E7%9A%84pytorch%E7%89%88%E6%9C%AC%E5%AE%9E%E7%8E%B0/"/>
<url>/2019/03/24/yolo-v3%E7%9A%84pytorch%E7%89%88%E6%9C%AC%E5%AE%9E%E7%8E%B0/</url>
<content type="html"><![CDATA[<div style="text-align:center"> <div class="github-card" data-user="BobLiu20" data-repo="YOLOv3_PyTorch" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><a id="more"></a>]]></content>
</entry>
<entry>
<title>tensorflow2教程</title>
<link href="/2019/03/24/tensorflow2%E6%95%99%E7%A8%8B/"/>
<url>/2019/03/24/tensorflow2%E6%95%99%E7%A8%8B/</url>
<content type="html"><![CDATA[<div style="text-align:center"> <div class="github-card" data-user="dragen1860" data-repo="TensorFlow2.0Tutorials" data-width="400" data-theme="default" data-target="" data-client-id="" data-client-secret=""></div></div><script src="/github-card-lib/githubcard.js"></script><a id="more"></a>]]></content>
</entry>
<entry>
<title>GAN概览</title>
<link href="/2019/03/24/GAN%E6%A6%82%E8%A7%88/"/>
<url>/2019/03/24/GAN%E6%A6%82%E8%A7%88/</url>
<content type="html"><![CDATA[<p>可以在hexo中插入pdf文件了</p><a id="more"></a><div class="row"> <embed src="/images/GAN-Overview-Chinese.pdf" width="100%" height="550" type="application/pdf"></div>]]></content>
</entry>
<entry>
<title>博客添加pdf插件</title>
<link href="/2019/03/24/%E5%8D%9A%E5%AE%A2%E6%B7%BB%E5%8A%A0pdf%E6%8F%92%E4%BB%B6/"/>
<url>/2019/03/24/%E5%8D%9A%E5%AE%A2%E6%B7%BB%E5%8A%A0pdf%E6%8F%92%E4%BB%B6/</url>
<content type="html"><![CDATA[<h4 id="安装插件"><a href="#安装插件" class="headerlink" title="安装插件"></a>安装插件</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">npm install --save hexo-pdf</span><br></pre></td></tr></table></figure><h4 id="配置"><a href="#配置" class="headerlink" title="配置"></a>配置</h4><p>创建 book 页面<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">hexo new page book</span><br></pre></td></tr></table></figure></p><h4 id="编写"><a href="#编写" class="headerlink" title="编写"></a>编写</h4><p>在成的md文件中添加pdf</p><p>外部链接:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">{% pdf http://7xov2f.com1.z0.glb.clouddn.com/bash_freshman.pdf %}</span><br><span class="line">本地连接:</span><br><span class="line">{% pdf ./pdf名字.pdf %}</span><br></pre></td></tr></table></figure><a id="more"></a>]]></content>
</entry>
<entry>
<title>吉他教程网页</title>
<link href="/2019/03/19/%E5%90%89%E4%BB%96%E6%95%99%E7%A8%8B%E7%BD%91%E9%A1%B5/"/>
<url>/2019/03/19/%E5%90%89%E4%BB%96%E6%95%99%E7%A8%8B%E7%BD%91%E9%A1%B5/</url>
<content type="html"><![CDATA[<ol><li><a href="https://zhuanlan.zhihu.com/p/36092936" target="_blank" rel="noopener">知乎:吉他基础教程</a></li><li><a href="https://zhuanlan.zhihu.com/senphen" target="_blank" rel="noopener">知乎:吉他与乐理</a></li></ol><a id="more"></a>]]></content>
</entry>
<entry>
<title>翻牡丹亭外</title>
<link href="/2019/03/13/%E7%BF%BB%E7%89%A1%E4%B8%B9%E4%BA%AD%E5%A4%96/"/>
<url>/2019/03/13/%E7%BF%BB%E7%89%A1%E4%B8%B9%E4%BA%AD%E5%A4%96/</url>
<content type="html"><![CDATA[<iframe frameborder="no" border="0" marginwidth="0" marginheight="0" width="330" height="86" src="//music.163.com/outchain/player?type=2&id=417594625&auto=0&height=66"></iframe><a id="more"></a><p>为救李郎离家园<br>谁料黄榜中状元<br>中状元 着红袍<br>帽插宫花好啊<br>好新鲜<br>李郎一梦已过往<br>风流人物啊在何方<br>从古到今说来话<br>不过是情而已<br>你问我怕什么<br>怕不能遇见你<br>这人间有点假<br>可我莫名爱上他<br>荒凉一梦二十年<br>依旧是不懂爱也不懂情<br>写歌的人假正经啊<br>听歌的人最无情<br>为救李郎离家园<br>谁料黄榜中状元<br>中状元 着红袍<br>帽插宫花好啊<br>好新鲜</p>]]></content>
<categories>
<category> 音乐之声 </category>
</categories>
<tags>
<tag> 刘润洁 </tag>
</tags>
</entry>
<entry>
<title>微博图床</title>
<link href="/2019/03/13/%E5%BE%AE%E5%8D%9A%E5%9B%BE%E5%BA%8A/"/>
<url>/2019/03/13/%E5%BE%AE%E5%8D%9A%E5%9B%BE%E5%BA%8A/</url>
<content type="html"><![CDATA[<div class="group-picture"><div class="group-picture-container"><div class="group-picture-row"><div class="group-picture-column" style="width: 100%;"><img src="http://wx3.sinaimg.cn/mw690/007FQu71ly1g11g35towkj31440u04qr.jpg" alt="1" title="老门东"></div></div><div class="group-picture-row"></div></div></div><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> 图床 </tag>
</tags>
</entry>
<entry>
<title>瘦身贴没用的证据</title>
<link href="/2018/10/24/%E7%98%A6%E8%BA%AB%E8%B4%B4%E6%B2%A1%E7%94%A8%E7%9A%84%E8%AF%81%E6%8D%AE/"/>
<url>/2018/10/24/%E7%98%A6%E8%BA%AB%E8%B4%B4%E6%B2%A1%E7%94%A8%E7%9A%84%E8%AF%81%E6%8D%AE/</url>
<content type="html"><![CDATA[<p>瘦身贴翻译为slim patch或者weight loss patch,药品这一块国外还是比较正规,知道药品的英文名称,谷歌搜索就可以查看相关链接。</p><p>对于减肥这一块,以下链接说明瘦身贴没用。 </p><a id="more"></a><ol><li><a href="https://www.healthline.com/health/weight-loss/weight-loss-patches#weight-loss" target="_blank" rel="noopener">What to Know About Weight Loss Patches</a></li><li><a href="https://www.healthline.com/nutrition/26-evidence-based-weight-loss-tips" target="_blank" rel="noopener">26 Weight Loss Tips That Are Actually Evidence-Based</a></li><li><a href="https://www.healthline.com/nutrition/does-exercise-cause-weight-loss" target="_blank" rel="noopener">Does Exercise Help You Lose Weight? The Surprising Truth</a></li></ol>]]></content>
</entry>
<entry>
<title>pascal_voc数据集转coco数据集</title>
<link href="/2018/09/03/pascal-voc%E6%95%B0%E6%8D%AE%E9%9B%86%E8%BD%ACcoco%E6%95%B0%E6%8D%AE%E9%9B%86/"/>
<url>/2018/09/03/pascal-voc%E6%95%B0%E6%8D%AE%E9%9B%86%E8%BD%ACcoco%E6%95%B0%E6%8D%AE%E9%9B%86/</url>
<content type="html"><![CDATA[<p>下载cocoapi,里面有MatlabAPI。</p><p>参考代码</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">CocoUtils.convertPascalGt( 'D:/DataGit/VOC2007_XMAN/', '2007', 'train', 'pascal_train2007.json')</span><br></pre></td></tr></table></figure><a id="more"></a>]]></content>
</entry>
<entry>
<title>python2代码转python3</title>
<link href="/2018/09/03/python2%E4%BB%A3%E7%A0%81%E8%BD%ACpython3/"/>
<url>/2018/09/03/python2%E4%BB%A3%E7%A0%81%E8%BD%ACpython3/</url>
<content type="html"><![CDATA[<h3 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>参考资料</h3><ol><li><a href="https://blog.csdn.net/lwhsyit/article/details/80621931" target="_blank" rel="noopener">python2代码转python3</a></li></ol><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> python </tag>
</tags>
</entry>
<entry>
<title>hexo-tag-video插件</title>
<link href="/2018/05/05/hexo-tag-video%E6%8F%92%E4%BB%B6/"/>
<url>/2018/05/05/hexo-tag-video%E6%8F%92%E4%BB%B6/</url>
<content type="html"><![CDATA[<blockquote><p>终于找到这个插入视频的插件了,很好用</p></blockquote><p>链接在此:<a href="https://github.com/geekplux/hexo-tag-video" target="_blank" rel="noopener">hexo-tag-video</a></p><a id="more"></a>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
<tags>
<tag> hexo </tag>
<tag> video </tag>
</tags>
</entry>
<entry>
<title>opencv调试</title>
<link href="/2018/05/04/opencv%E8%B0%83%E8%AF%95/"/>
<url>/2018/05/04/opencv%E8%B0%83%E8%AF%95/</url>
<content type="html"><![CDATA[<h4 id="步骤"><a href="#步骤" class="headerlink" title="步骤"></a>步骤</h4><ol><li>下载Image Watch插件,点击安装。</li><li>添加断点。</li><li>打开视图>>其他窗口>>Image Watch。</li><li>点击Local就可以实时查看图像的像素值,便于调试。</li></ol><h4 id="参考文献"><a href="#参考文献" class="headerlink" title="参考文献"></a>参考文献</h4><ol><li><a href="http://docs.opencv.org/doc/tutorials/introduction/windows_visual_studio_image_watch/windows_visual_studio_image_watch.html#windows-visual-studio-image-watch" target="_blank" rel="noopener">windows-visual-studio-image-watch</a></li></ol><a id="more"></a>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
<tags>
<tag> visual studio </tag>
<tag> opencv </tag>
</tags>
</entry>
<entry>
<title>博客与工具推荐推荐</title>
<link href="/2018/04/28/%E5%8D%9A%E5%AE%A2%E4%B8%8E%E5%B7%A5%E5%85%B7%E6%8E%A8%E8%8D%90/"/>
<url>/2018/04/28/%E5%8D%9A%E5%AE%A2%E4%B8%8E%E5%B7%A5%E5%85%B7%E6%8E%A8%E8%8D%90/</url>
<content type="html"><![CDATA[<blockquote><p>本文主要推荐一些博客链接和工具链接,以便查找,如果需要详细说明,会单开一篇来介绍相应的工具。</p></blockquote><a id="more"></a><h3 id="github"><a href="#github" class="headerlink" title="github"></a>github</h3><ul><li><a href="https://github.com/stanzhai/be-a-professional-programmer" target="_blank" rel="noopener">be-a-professional-programmer</a></li><li><a href="https://github.com/Gisonrg/hexo-github-card" target="_blank" rel="noopener">hexo-github-card</a>这是个可以展示github内repo的插件,详见:<a href="***">tensorflow2教程</a></li></ul>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
</entry>
<entry>
<title>创建碎碎念</title>
<link href="/2018/04/26/%E5%88%9B%E5%BB%BA%E7%A2%8E%E7%A2%8E%E5%BF%B5/"/>
<url>/2018/04/26/%E5%88%9B%E5%BB%BA%E7%A2%8E%E7%A2%8E%E5%BF%B5/</url>
<content type="html"><![CDATA[<p>创建碎碎念页面</p><a id="more"></a><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">hexo new page murmurs</span><br></pre></td></tr></table></figure><p>在<code>next/source/css/_variables/base.styl</code>中添加:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">/*首先,我们要创建一个容器class*/</span><br><span class="line">.times {display:block;margin:20px 0;}</span><br><span class="line">/*利用ul标签的特性,设置外边框左移25px,左边边框是2px粗的实心线,颜色一般要浅一点*/</span><br><span class="line">.times ul {margin-right:5px;margin-left:10px;border-left:1px solid #ddd;list-style-type:none;}</span><br><span class="line">/*一般情况,通过li标签控制圆点回到时间线上,然后控制要出现的文字大小和是否粗体*/</span><br><span class="line">.times ul li {width:100%;margin-left:-26px;line-height:20px;font-weight:narmal;}</span><br><span class="line">.times ul li p {margin-top:10px }</span><br><span class="line">/*设置span标签的属性,让它来做时间显示,加一点边距,使时间显示离时间线远一点*/</span><br><span class="line">.times ul li span {padding-left:7px;font-size:15px;line-height:20px;color:#555;margin-down:50px;}</span><br><span class="line">/*注意这一行,前面的li标签后面加了一个:hover伪属性,意思是鼠标移上来,激活后面的属性,这样可以设置鼠标移动到整个时间范围的时候,时间点和时间显示会变色*/</span><br><span class="line">.times ul li:hover p {border-bottom: 1px solid #000000;}</span><br></pre></td></tr></table></figure><p>在<code>source/murmurs/index.md</code>中添加:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><div class="times"></span><br><span class="line"><ul></span><br><span class="line"> <li><span>2018-01-01</span><p>这里是2018年哟</p></li></span><br><span class="line"> <li><span>2017-01-01</span><p>这里是2017年哟</p></li></span><br><span class="line"> <li><span>2016-01-01</span><p>这里是2016年哟</p></li></span><br><span class="line"></ul></span><br><span class="line"></div></span><br></pre></td></tr></table></figure><p>至此,初步碎碎念完成。</p>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> hexo </tag>
</tags>
</entry>
<entry>
<title>自动生成favcion</title>
<link href="/2018/04/26/%E8%87%AA%E5%8A%A8%E7%94%9F%E6%88%90favcion/"/>
<url>/2018/04/26/%E8%87%AA%E5%8A%A8%E7%94%9F%E6%88%90favcion/</url>
<content type="html"><![CDATA[<p>链接在此:<a href="https://www.favicon-generator.org/" target="_blank" rel="noopener">favicon-generator</a></p><a id="more"></a>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
<tags>
<tag> next </tag>
</tags>
</entry>
<entry>
<title>下载旧版本VS</title>
<link href="/2018/04/24/%E4%B8%8B%E8%BD%BD%E6%97%A7%E7%89%88%E6%9C%ACVS/"/>
<url>/2018/04/24/%E4%B8%8B%E8%BD%BD%E6%97%A7%E7%89%88%E6%9C%ACVS/</url>
<content type="html"><![CDATA[<p>链接在此:<a href="https://www.visualstudio.com/zh-hans/vs/older-downloads/" target="_blank" rel="noopener">仍想使用较旧的版本?</a></p><a id="more"></a>]]></content>
</entry>
<entry>
<title>自己编写CNN框架之零</title>
<link href="/2018/04/19/%E8%87%AA%E5%B7%B1%E7%BC%96%E5%86%99CNN%E6%A1%86%E6%9E%B6%E4%B9%8B%E9%9B%B6/"/>
<url>/2018/04/19/%E8%87%AA%E5%B7%B1%E7%BC%96%E5%86%99CNN%E6%A1%86%E6%9E%B6%E4%B9%8B%E9%9B%B6/</url>
<content type="html"><![CDATA[<blockquote><p>终于下定决心自己编写CNN框架了,立FLAG了!!!</p></blockquote><p>参考链接:</p><ul><li><a href="http://hongbomin.com/2016/11/12/easycnn-design-history/" target="_blank" rel="noopener">EasyCNN的设计实现</a></li><li><a href="https://github.com/xylcbd/EasyCNN" target="_blank" rel="noopener">EasyCNN</a></li><li><a href="http://hongbomin.com/2018/03/03/zuo-si-de-hou-xu/" target="_blank" rel="noopener">Flag实现:C++从零开始开发深度学习框架</a></li><li><a href="https://github.com/PrincetonVision/marvin" target="_blank" rel="noopener">marvin</a></li><li><a href="http://marvin.is/" target="_blank" rel="noopener">marvin官网</a></li><li><a href="https://github.com/tiny-dnn/tiny-dnn" target="_blank" rel="noopener">tiny-dnn</a></li><li><a href="https://github.com/pjreddie/darknet" target="_blank" rel="noopener">darknet</a></li><li><a href="https://github.com/attractivechaos/kann" target="_blank" rel="noopener">kann</a></li></ul><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
</entry>
<entry>
<title>CUDA匹配SM和COMPUTE</title>
<link href="/2018/04/18/CUDA%E5%8C%B9%E9%85%8DSM%E5%92%8CCOMPUTE/"/>
<url>/2018/04/18/CUDA%E5%8C%B9%E9%85%8DSM%E5%92%8CCOMPUTE/</url>
<content type="html"><![CDATA[<p>链接在此:<a href="https://github.com/tpruvot/ccminer/wiki/Compatibility" target="_blank" rel="noopener">Compatibility</a></p><p>Supported SM and Gencode variations<br>Below are the supported sm variations and sample cards from that generation</p><a id="more"></a><h3 id="Supported-on-CUDA-7-and-later"><a href="#Supported-on-CUDA-7-and-later" class="headerlink" title="Supported on CUDA 7 and later"></a>Supported on CUDA 7 and later</h3><p>####Fermi (CUDA 3.2 and later, deprecated from CUDA 9):</p><ul><li>SM20 or SM_20, compute_30 – Older cards such as GeForce 400, 500, 600, GT-630<br>####Kepler (CUDA 5 and later):</li><li>SM30 or SM_30, compute_30 – Kepler architecture (generic – Tesla K40/K80, GeForce 700, GT-730)<br>Adds support for unified memory programming</li><li>SM35 or SM_35, compute_35 – More specific Tesla K40<br>Adds support for dynamic parallelism. Shows no real benefit over SM30 in my experience.</li><li>SM37 or SM_37, compute_37 – More specific Tesla K80<br>Adds a few more registers. Shows no real benefit over SM30 in my experience<h4 id="Maxwell-CUDA-6-and-later"><a href="#Maxwell-CUDA-6-and-later" class="headerlink" title="Maxwell (CUDA 6 and later):"></a>Maxwell (CUDA 6 and later):</h4></li><li>SM50 or SM_50, compute_50 – Tesla/Quadro M series</li><li>SM52 or SM_52, compute_52 – Quadro M6000 , GeForce 900, GTX-970, GTX-980, GTX Titan X</li><li>SM53 or SM_53, compute_53 – Tegra (Jetson) TX1 / Tegra X1<h4 id="Pascal-CUDA-8-and-later"><a href="#Pascal-CUDA-8-and-later" class="headerlink" title="Pascal (CUDA 8 and later)"></a>Pascal (CUDA 8 and later)</h4></li><li>SM60 or SM_60, compute_60 – GP100/Tesla P100 – DGX-1 (Generic Pascal)</li><li>SM61 or <strong>SM_61, compute_61</strong> – GTX 1080, <strong>GTX 1070</strong>, GTX 1060, GTX 1050, GTX 1030, Titan Xp, Tesla P40, Tesla P4</li><li>SM62 or SM_62, compute_62 – Drive-PX2, Tegra (Jetson) TX2, Denver-based GPU<h4 id="Volta-CUDA-9-and-later"><a href="#Volta-CUDA-9-and-later" class="headerlink" title="Volta (CUDA 9 and later)"></a>Volta (CUDA 9 and later)</h4></li><li>SM70 or SM_70, compute_70 – Tesla V100</li><li>SM71 or SM_71, compute_71 – probably not implemented</li><li>SM72 or SM_72, compute_72 – currently unknown</li></ul>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> cuda </tag>
</tags>
</entry>
<entry>
<title>next添加相册gallery</title>
<link href="/2018/04/17/next%E6%B7%BB%E5%8A%A0%E7%9B%B8%E5%86%8Cgallery/"/>
<url>/2018/04/17/next%E6%B7%BB%E5%8A%A0%E7%9B%B8%E5%86%8Cgallery/</url>
<content type="html"><![CDATA[<p>链接在此:<a href="https://github.com/iissnan/hexo-theme-next/pull/1989/files" target="_blank" rel="noopener">详细</a></p><a id="more"></a><p>主要作了如下修改:</p><h3 id="next-config-yml"><a href="#next-config-yml" class="headerlink" title="/next/_config.yml"></a>/next/_config.yml</h3><p>添加新版本的fancybox</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">vendors:</span><br><span class="line"> # Internal path prefix. Please do not edit it.</span><br><span class="line"> _internal: lib</span><br><span class="line"></span><br><span class="line"> # Internal version: 2.1.3</span><br><span class="line"> jquery:</span><br><span class="line"></span><br><span class="line"> # Internal version: 2.1.5</span><br><span class="line"> # See: http://fancyapps.com/fancybox/</span><br><span class="line"> fancybox: https://cdn.bootcss.com/fancybox/3.3.5/jquery.fancybox.min.js</span><br><span class="line"> fancybox_css: https://cdn.bootcss.com/fancybox/3.3.5/jquery.fancybox.min.css</span><br></pre></td></tr></table></figure><h3 id="next-layout-macro-post-swig"><a href="#next-layout-macro-post-swig" class="headerlink" title="/next/layout/_macro/post.swig"></a>/next/layout/_macro/post.swig</h3><p>找到如下代码修改<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">{# Gallery support #}</span><br><span class="line">{% if post.photos and post.photos.length %}</span><br><span class="line"><div class="post-gallery" itemscope itemtype="http://schema.org/ImageGallery"></span><br><span class="line"> {% set COLUMN_NUMBER = 3 %}</span><br><span class="line"> {% for photo in post.photos %}</span><br><span class="line"> {% if loop.index0 % COLUMN_NUMBER === 0 %}<div class="post-gallery-row">{% endif %}</span><br><span class="line"> {###原始代码开始</span><br><span class="line"> <a class="post-gallery-img fancybox"</span><br><span class="line"> href="{{ url_for(photo) }}" rel="gallery_{{ post._id }}"</span><br><span class="line"> 原始代码结束###}</span><br><span class="line"> <a class="post-gallery-img" data-fancybox="gallery_{{ post._id }}"</span><br><span class="line"> href="{{ url_for(photo) }}!gumini"</span><br><span class="line"> itemscope itemtype="http://schema.org/ImageObject" itemprop="url"></span><br><span class="line"> <img src="{{ url_for(photo) }}!guresize" itemprop="contentUrl"/></span><br><span class="line"> </a></span><br><span class="line"> {% if 
loop.index0 % COLUMN_NUMBER === 2 %}</div>{% endif %}</span><br><span class="line"> {% endfor %}</span><br><span class="line"></span><br><span class="line"> {# Append end tag for `post-gallery-row` when (photos size mod COLUMN_NUMBER) is less than COLUMN_NUMBER #}</span><br><span class="line"> {% if post.photos.length % COLUMN_NUMBER > 0 %}</div>{% endif %}</span><br><span class="line"></div></span><br><span class="line"> {% endif %}</span><br></pre></td></tr></table></figure></p><h3 id="next-source-js-src-utils-js"><a href="#next-source-js-src-utils-js" class="headerlink" title="/next/source/js/src/utils.js"></a>/next/source/js/src/utils.js</h3><p>找到如下代码修改</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br></pre></td><td class="code"><pre><span class="line">/**</span><br><span class="line">* Wrap images with fancybox support.</span><br><span class="line">*/</span><br><span class="line">wrapImageWithFancyBox: function () {</span><br><span class="line">$('.content img')</span><br><span class="line"> .not('[hidden]')</span><br><span class="line"> .not('.group-picture img, .post-gallery img')</span><br><span class="line"> .each(function () {</span><br><span class="line"> var $image = $(this);</span><br><span class="line"> /*var imageTitle = $image.attr('title');原始*/</span><br><span class="line"> var $imageWrapLink = $image.parent('a');</span><br><span class="line"></span><br><span class="line"> if ($imageWrapLink.size() < 1) {</span><br><span class="line"> var imageLink = ($image.attr('data-original')) ? 
this.getAttribute('data-original') : this.getAttribute('src');</span><br><span class="line"> $imageWrapLink = $image.wrap('<a href="' + imageLink + '"></a>').parent('a');</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> // $imageWrapLink.addClass('fancybox fancybox.image');</span><br><span class="line"> // $imageWrapLink.attr('rel', 'group');</span><br><span class="line"> //</span><br><span class="line"> // if (imageTitle) {</span><br><span class="line"> // $imageWrapLink.append('<p class="image-caption">' + imageTitle + '</p>');</span><br><span class="line"> //</span><br><span class="line"> // //make sure img title tag will show correctly in fancybox</span><br><span class="line"> // $imageWrapLink.attr('title', imageTitle);</span><br><span class="line"> if (!$imageWrapLink.attr('data-fancybox')) {</span><br><span class="line"> $imageWrapLink.attr('data-fancybox', 'group');</span><br><span class="line"> }</span><br><span class="line"> });</span><br><span class="line"></span><br><span class="line"> // $('.fancybox').fancybox({</span><br><span class="line"> // helpers: {</span><br><span class="line"> // overlay: {</span><br><span class="line"> // locked: false</span><br><span class="line"> // }</span><br><span class="line"> // }</span><br><span class="line"> $('[data-fancybox]').fancybox({</span><br><span class="line"> });</span><br><span class="line">},</span><br></pre></td></tr></table></figure><p>至此修改初步完成,后续如果需要添加,再按照上文给出的链接进行修改,本文到这一步已经够了,后面考虑自动加载云端图片文件!!!<br>🚩🚩🚩</p>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> hexo </tag>
<tag> next </tag>
</tags>
</entry>
<entry>
<title>hexo主题开发经验之谈</title>
<link href="/2018/04/17/hexo%E4%B8%BB%E9%A2%98%E5%BC%80%E5%8F%91%E7%BB%8F%E9%AA%8C%E4%B9%8B%E8%B0%88/"/>
<url>/2018/04/17/hexo%E4%B8%BB%E9%A2%98%E5%BC%80%E5%8F%91%E7%BB%8F%E9%AA%8C%E4%B9%8B%E8%B0%88/</url>
<content type="html"><![CDATA[<p><a href="https://molunerfinn.com/make-a-hexo-theme/#%E9%A1%B5%E9%9D%A2" target="_blank" rel="noopener">hexo主题开发经验之谈</a></p><p><a href="http://chensd.com/2016-06/hexo-theme-guide.html" target="_blank" rel="noopener">hexo主题开发指南</a></p><a id="more"></a>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> hexo </tag>
</tags>
</entry>
<entry>
<title>Intellij_idea激活</title>
<link href="/2018/04/17/Intellij-idea%E6%BF%80%E6%B4%BB/"/>
<url>/2018/04/17/Intellij-idea%E6%BF%80%E6%B4%BB/</url>
<content type="html"><![CDATA[<p>链接在此:<a href="http://idea.lanyus.com/" target="_blank" rel="noopener">IntelliJ IDEA 注册码</a></p><p>学生可免费申请使用:<a href="https://sales.jetbrains.com/hc/zh-cn/articles/207154369-%E5%AD%A6%E7%94%9F%E6%8E%88%E6%9D%83%E7%94%B3%E8%AF%B7%E6%96%B9%E5%BC%8F" target="_blank" rel="noopener">学生授权申请方式</a></p><a id="more"></a>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
<tags>
<tag> Intellij Idea </tag>
</tags>
</entry>
<entry>
<title>hexo主题prince</title>
<link href="/2018/04/16/hexo%E4%B8%BB%E9%A2%98prince/"/>
<url>/2018/04/16/hexo%E4%B8%BB%E9%A2%98prince/</url>
<content type="html"><![CDATA[<p>推荐一个hexo主题</p><p>链接在此:<a href="https://github.com/yiliashaw/hexo-theme-prince" target="_blank" rel="noopener">hexo-theme-prince</a></p><a id="more"></a>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
<tags>
<tag> hexo </tag>
</tags>
</entry>
<entry>
<title>hexo文章推荐</title>
<link href="/2018/04/16/hexo%E6%96%87%E7%AB%A0%E6%8E%A8%E8%8D%90/"/>
<url>/2018/04/16/hexo%E6%96%87%E7%AB%A0%E6%8E%A8%E8%8D%90/</url>
<content type="html"><![CDATA[<p>hexo跨博客文章推荐插件</p><p>链接在此:<a href="https://github.com/huiwang/hexo-recommended-posts" target="_blank" rel="noopener">hexo-recommended-posts</a></p><a id="more"></a>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
<tags>
<tag> hexo </tag>
</tags>
</entry>
<entry>
<title>next豆瓣插件</title>
<link href="/2018/04/16/next%E8%B1%86%E7%93%A3%E6%8F%92%E4%BB%B6/"/>
<url>/2018/04/16/next%E8%B1%86%E7%93%A3%E6%8F%92%E4%BB%B6/</url>
<content type="html"><![CDATA[<p>推荐一个使用插件将豆瓣电影、读书和游戏自动部署到自己的github博客上。</p><p>链接在此:<a href="https://github.com/mythsman/hexo-douban" target="_blank" rel="noopener">hexo-douban</a></p><a id="more"></a>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
<tags>
<tag> hexo </tag>
<tag> next </tag>
</tags>
</entry>
<entry>
<title>windows安装pytorch</title>
<link href="/2018/04/05/windows%E5%AE%89%E8%A3%85pytorch/"/>
<url>/2018/04/05/windows%E5%AE%89%E8%A3%85pytorch/</url>
<content type="html"><![CDATA[<p>江湖传言,tensorflow适合工业,pytorch适合科研,所以,来一波呗</p><a id="more"></a><h4 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>参考资料</h4><ol><li><a href="https://blog.csdn.net/xiangxianghehe/article/details/78736482" target="_blank" rel="noopener">Windows下安装PyTorch0.3.0</a></li></ol>]]></content>
</entry>
<entry>
<title>Love Like Magic</title>
<link href="/2018/03/27/Love-Like-Magic/"/>
<url>/2018/03/27/Love-Like-Magic/</url>
<content type="html"><![CDATA[<blockquote><p>翻到这么首歌,还不错!!!</p></blockquote><a id="more"></a><div class="video-container"><iframe src="http://player.youku.com/embed/XMjI5MDc3Mzk2" frameborder="0" allowfullscreen></iframe></div>]]></content>
<categories>
<category> 音乐之声 </category>
</categories>
<tags>
<tag> 张国荣 </tag>
</tags>
</entry>
<entry>
<title>朝鲜战争基本常识问答</title>
<link href="/2018/03/26/%E6%9C%9D%E9%B2%9C%E6%88%98%E4%BA%89%E5%9F%BA%E6%9C%AC%E5%B8%B8%E8%AF%86%E9%97%AE%E7%AD%94/"/>
<url>/2018/03/26/%E6%9C%9D%E9%B2%9C%E6%88%98%E4%BA%89%E5%9F%BA%E6%9C%AC%E5%B8%B8%E8%AF%86%E9%97%AE%E7%AD%94/</url>
<content type="html"><![CDATA[<h4 id="一、三八线是条什么线?它是“国际公认”的朝鲜半岛南北政权的政治分治线吗?"><a href="#一、三八线是条什么线?它是“国际公认”的朝鲜半岛南北政权的政治分治线吗?" class="headerlink" title="一、三八线是条什么线?它是“国际公认”的朝鲜半岛南北政权的政治分治线吗?"></a>一、三八线是条什么线?它是“国际公认”的朝鲜半岛南北政权的政治分治线吗?</h4><p>答:非也!三八线是1945年8月由美国提出,以朝鲜国土北纬三十八度线作为美、苏两国对日军事行动和受降范围的临时分界线,三八线以北为苏军接受日军投降区,以南为美军受降区。三八线是一条受降临时分界线,仅此而已。说这条线是“国际公认”的朝鲜南北的政治分治线,没有任何国际法依据,而且不为朝鲜半岛南北双方政权所承认。</p><p>换言之,三八线具有约束苏美军队的意义,却没有约束朝鲜半岛南北双方的意义。中国领导人决策出兵参战的前提是“美国军队越过三八线”,而不是“韩国军队越过三八线”,这是原因之一。而美国军队率先越过三八线,是打破这个约束的始作俑者。</p><a id="more"></a><h4 id="二、朝鲜战争是北方侵略了南方吗?"><a href="#二、朝鲜战争是北方侵略了南方吗?" class="headerlink" title="二、朝鲜战争是北方侵略了南方吗?"></a>二、朝鲜战争是北方侵略了南方吗?</h4><p>答:朝鲜战争是一场以民族统一为目的的内战,没有“侵略”不“侵略”之分。正如中国国共内战既不能说国民党侵略了共产党,也不能说共产党侵略了国民党,美国内战既不能说南方侵略了北方,也不能说北方侵略了南方一样。美国军队直接介入朝鲜内战,才是名符其实的“侵略”行为。</p><h4 id="三、联合国是否有权利干预朝鲜内战?"><a href="#三、联合国是否有权利干预朝鲜内战?" class="headerlink" title="三、联合国是否有权利干预朝鲜内战?"></a>三、联合国是否有权利干预朝鲜内战?</h4><p>答:没有权利。联合国是一个国际组织,不是“世界政府”。与一个国家的政府有着本质的区别,它没有干预一个国家内部事务权利和依据。实际上,《联合国宪章》就明确规定:“不得干涉本质上属于任何国家内部管辖之事件”。联合国安理会关于武装干预朝鲜的决议是在安理会常任理事国有缺席,而美国军队已经先斩后奏率先介入的情况下作出的,开了一个毫无道理的先例,是一个非法决议。所以从朝鲜战争以后,再也没有“联合国军”名义的军事行动,就是这种方式已为历史进程所否定的反证。</p><p>尤其需要特别说明的是,朝鲜南北方政权都没有加入联合国,都不是联合国成员国,联合国出兵干预,不伦不类,无根无基,没有任何法理依据,本身就是对《联合国宪章》的粗暴践踏。</p><h4 id="四、“联合国军”是一支维持和平部队吗?"><a href="#四、“联合国军”是一支维持和平部队吗?" class="headerlink" title="四、“联合国军”是一支维持和平部队吗?"></a>四、“联合国军”是一支维持和平部队吗?</h4><p>答:不是!“联合国维持和平行动”的概念产生于朝鲜战争之后,而且有其特定涵义和规范。联合国成立之时,就设有军事观察员,并逐步形成了维和部队,在1956年建立第一支联合国维和部队时,联合国秘书长哈马舍尔德曾经提出了著名的维和三原则:</p><p>第一,维和行动不得妨碍有关当事国之权利、要求和立场,需保持中立,不得偏袒冲突中的任何一方;</p><p>第二,维和行动必须征得有关各方的一致同意才能实施;</p><p>第三,维和部队只携带轻武器,只有自卫时方可使用武力。</p><p>人们把这三项原则概括为中立的原则、同意的原则和自卫的原则,并称之为哈马舍尔德三原则。哈马合尔德三原则是联合国传统维和行动的基本准则。80年代未期以前的维和行动,都是以哈马舍尔德三原则为基本依据的。哈马吉尔德三原则对联合国近四十年的维和行动具有重要的指导意义。秘书长啥马舍尔德之所以就联合国维和行动提出这三项基本原则,主要是因为联合国建立时制定的《联合国宪章》没有关于维和行动的规定。以哈马舍尔德原则为基础,传统维和行动大体遵循以下一些基本原则:</p><p>1.维和行动由联合国安理会授权和组织.特殊情况下由联合国大会组织,具体由秘书长控制和指挥。</p><p>2.维和行动必须征得冲突各方政府以及直接有关的各方的同意。具体讲,维和部队的规模、进驻的起始和结束时间、进驻的地域等部必须征得有关各方的一致同意特别是进驻国的同意。</p><p>3.维和部队的军事人员由会员国自愿提供。军事观察员不携带武器、维和部队携带轻型防御性武器。</p><p>4.维和部队除自卫外,不得使用武力。</p><p>5.严守中立。不能支待一方反对另一方。</p><p>6.不得干涉驻在国内部事务,不能介入内部冲突。</p><p>“联合国军”的决策和行动不符合其中任何一条。</p><h4 id="五、美国政府只有解决朝鲜问题而没有染指中国的意图吗?"><a href="#五、美国政府只有解决朝鲜问题而没有染指中国的意图吗?" class="headerlink" title="五、美国政府只有解决朝鲜问题而没有染指中国的意图吗?"></a>五、美国政府只有解决朝鲜问题而没有染指中国的意图吗?</h4><p>答:中国有句老话:察其言,观其行。美国军队事实上已经侵犯了中国领土(台湾),介入了中国内战(出兵台湾即介入中国内战),而且其地面武装力量已经越过三八线直趋中国国门,“联合国军”总司令已经提出:“无论如何,如果我们不去利用鸭绿江的自然防御功能,那么这种西部低洼,东中部崎岖的地形是不适于我们的防御体系的。这条江是整个朝鲜绝无仅有的天然屏障,但如果仅仅依赖于此作为唯一的天然防线,则无论是军事还是政治的防御能力都不足以维护韩国的领土完整。……只是占领鸭绿江以南地区旋即停止推进,我们根本就不可能找到一个可有效控制所有通向北朝鲜的路径的位置”。</p><p>而同时期,美国空军的炸弹已经落到中国的城市和乡村。这已经不是意图而是实实在在的事实,这在任何一个国家包括美利坚合众国自己,都绝不会视为一种友好表示而只能认为是不怀好意的侵略行动。</p><p>美国前国务卿亨利·基辛格先生也曾在其著作《大外交》中指出:“毛泽东有理由认为,如果他不在朝鲜阻挡美国,他或许会在中国领土上与美军交战。最起码,他没有理由去作出相反的结论。”</p><p>1989年5月5日,美国军事历史学家约翰·托兰(著有《漫长的战斗》)在中国人民解放军军事科学院与他的中国同行们交流时说:“中国出兵朝鲜是出于国家利益的考虑,是不得已的。如果苏联打到墨西哥,那么美国在5分钟之内就会决定出兵。”</p><h4 id="六、中国军队出兵援朝的决策真正原因是什么?"><a href="#六、中国军队出兵援朝的决策真正原因是什么?" 
class="headerlink" title="六、中国军队出兵援朝的决策真正原因是什么?"></a>六、中国军队出兵援朝的决策真正原因是什么?</h4><p>答:保家卫国!中国人民志愿军入朝参战,是在中国领土主权受到侵犯,“联合国军”打到鸭绿江边,战火已烧到中国边境城市的情况下发生的,是侵略凶焰已经直接威胁到我们的国家安全环境的情况下发生的,是在中国政府再三警告言之有预有理有节先礼后兵而侵略者仍然置若罔闻肆无忌惮得寸进尺一意孤行的情况下发生的。严肃一点的美国学者或军人──既或是与中国军队交过手的美国军人,都不否认中国军队出兵援朝的合理性。</p><p>其次,对盟友和战友危难之际履行一个社会主义大国的国际主义义务也是一个重要原因,弱者对付强者最有效的武器就是自身的团结与互助,一个负责任的社会主义大国首先应该对自己的战友和盟友负责!受人涓埃之恩,必当涌泉相报,这是中华民族代代传承的道义火矩和优良传统,中朝两国人民在过去反对帝国主义侵略的共同斗争曾经相濡以沫相互支援共挽民族危亡,中国人民革命斗争的旗帜也浸染着朝鲜志士的鲜血,共同的境遇共同的命运使中朝两大民族同病相怜,共同的利益共同的愿望使中朝两国人民生死相依。可以说,没有任何两个毗邻民族能够比中朝两大民族更能深刻体验和感受唇亡齿寒之迫,户破堂危之急。勿需对历史作太久远的回顾,灭亡了朝鲜的日本得寸进尺染指中国最终迫使中华民族发出“最后的吼声”,对中国人民就足具史鉴来者之功效。</p><h4 id="七、中国军队出兵援朝延缓了解放台湾吗?"><a href="#七、中国军队出兵援朝延缓了解放台湾吗?" class="headerlink" title="七、中国军队出兵援朝延缓了解放台湾吗?"></a>七、中国军队出兵援朝延缓了解放台湾吗?</h4><p>答:这种说法颠倒了因果关系,中国军队出兵援朝是在美国军队进占台湾之后,阻碍中国人民完成祖国统一大业的是美国军队。这个因果关系应该不难分清。</p><h4 id="八、朝鲜民主主义人民共和国经济状况不佳,战后发展远不如大韩民国,是否证实中国人民志愿军入朝参战是错误的?"><a href="#八、朝鲜民主主义人民共和国经济状况不佳,战后发展远不如大韩民国,是否证实中国人民志愿军入朝参战是错误的?" class="headerlink" title="八、朝鲜民主主义人民共和国经济状况不佳,战后发展远不如大韩民国,是否证实中国人民志愿军入朝参战是错误的?"></a>八、朝鲜民主主义人民共和国经济状况不佳,战后发展远不如大韩民国,是否证实中国人民志愿军入朝参战是错误的?</h4><p>答:没有道理。打个比方,你向银行货款购房,银行是否因此就要对你终生的行为和经济状况负责?你购了房,银行得了利,你后来又把房卖了,银行是否必须为你的卖房行为负责?或者再打个比方,你见义勇为救了一个人,是否意味着你必须对此人此后所有行为负责?更何况,朝鲜民主主义人民共和国在安全环境受到了严重威胁,生存环境受到了严重制约的情况下,取得举世瞩目的建设成就,朝鲜人民既或在经济上遭受了严重困难的日子里仍然享受着令世人羡慕的各种基本福利制度,劳动群众的基本生存权利得到了相当切实有效的保障,这也是不容忽视的事实!</p><h4 id="九、有人说:“中国军队出兵援朝有合理性,但打过三八线就是侵略。”这种说法有无道理?"><a href="#九、有人说:“中国军队出兵援朝有合理性,但打过三八线就是侵略。”这种说法有无道理?" class="headerlink" title="九、有人说:“中国军队出兵援朝有合理性,但打过三八线就是侵略。”这种说法有无道理?"></a>九、有人说:“中国军队出兵援朝有合理性,但打过三八线就是侵略。”这种说法有无道理?</h4><p>答:没有道理。</p><p>第一,来而不往非礼也,寇能往,我亦能往!</p><p>第二,除恶务尽,第二次世界大战中,苏美英军队直捣柏林为中国军队作出了极好的榜样。至于没有达到这个目的,那是中国军队本事不够,家伙也不行,与该不该打过去没有关系!</p><p> 第三,三八线的本质涵义是只有约束美苏的意义,而无约束其它人的意义。</p><h4 id="十、抗美援朝延缓了中国的国际交往,延缓了经济建设和对外开放。"><a href="#十、抗美援朝延缓了中国的国际交往,延缓了经济建设和对外开放。" class="headerlink" title="十、抗美援朝延缓了中国的国际交往,延缓了经济建设和对外开放。"></a>十、抗美援朝延缓了中国的国际交往,延缓了经济建设和对外开放。</h4><p>答:此问与第七问一样,属因果颠倒。再者,战争胜利鼓舞了人民斗志,在抗美援朝战争期间,中国完成了国民经济恢复,在近代史上,第一次将军费降到了国家财政支出的一半以下,同时还完成了清匪反霸,消灭百万国民党残余部队,进军西藏完成祖国大陆统一的壮举。应该说抗美援朝促进了新中国的建设。至于对外开放交流,抗美援朝战争为三十年后的改革开放奠定了安全环境。没有志愿军将士用枪炮与霸权实现的交流,就没有后来在平等基础上的和平对话。没有“打”开路,就没有“和”临头!对强权者,敢战,方能言和!</p><h4 id="十一、有人将德国分治与朝鲜半岛分裂相提并论,有无法理依据?"><a href="#十一、有人将德国分治与朝鲜半岛分裂相提并论,有无法理依据?" class="headerlink" title="十一、有人将德国分治与朝鲜半岛分裂相提并论,有无法理依据?"></a>十一、有人将德国分治与朝鲜半岛分裂相提并论,有无法理依据?</h4><p>答:没有!德国是第二次世界大战的战败国,不光要享受盟国分区占领的待遇,还要接受盟国的强行管制。1945年6月5日,苏美英法在柏林签署了击败德国、对德分区占领和管制德国的三个宣言,决定了德国彻底的非武装化和非军事化的问题,明确了盟国有权在德国任何部分或全部驻扎军队及设置民事机构,行使最高权力。同是也明确了盟国可以采取他们认为对于和平与安全所需要的步骤。</p><p>而朝鲜是日本帝国主义的殖民地,是帝国主义侵略战争的受害者而不是加害者,朝鲜人民在反法西斯战争胜利后理应获得独立自由和解放——这也是开罗宣言中包括美国在内的各大国为之作出的承诺,而不是占领、约束和强行管制,更不是再次受到侵略战争的戗害!</p><h4 id="十二、有人称,朝鲜战争中苏联占了大便宜,因而中国出兵参战是错误的,此话似乎有理?"><a href="#十二、有人称,朝鲜战争中苏联占了大便宜,因而中国出兵参战是错误的,此话似乎有理?" class="headerlink" title="十二、有人称,朝鲜战争中苏联占了大便宜,因而中国出兵参战是错误的,此话似乎有理?"></a>十二、有人称,朝鲜战争中苏联占了大便宜,因而中国出兵参战是错误的,此话似乎有理?</h4><p>答:这是一个低智商问题,与当今时髦的市场经济理念格格不入——这与做生意的道理一样,大本钱挣大钱,小本钱挣小钱,不能因为有大本钱的挣了大钱,只有小本钱的连小钱也不挣了——更何况挣来的还未必是小钱。比如第二次世界大战中国出了大力,占的便宜不大,甚至还被人出卖权益,而美国人却占了大便宜,那么是否可以认为中国抗战也是错误的?</p><h4 id="十三、美国即然出兵占领了中国台湾,为什么中国不出兵台湾而出兵朝鲜?"><a href="#十三、美国即然出兵占领了中国台湾,为什么中国不出兵台湾而出兵朝鲜?" class="headerlink" title="十三、美国即然出兵占领了中国台湾,为什么中国不出兵台湾而出兵朝鲜?"></a>十三、美国即然出兵占领了中国台湾,为什么中国不出兵台湾而出兵朝鲜?</h4><p>答:又是一个低智商问题。谁规定了别人打我头我也只能打他头的道理?德国轰炸英国的脑袋伦敦,邱吉尔却在打量人家“柔软的下腹部”。同理,美国人卡中国人脖子,中国人就朝踢美国人的裤裆狠命一脚——如此而已!</p><h4 id="十四、有人占了中国的外蒙古,中国为什么不出兵?"><a href="#十四、有人占了中国的外蒙古,中国为什么不出兵?" 
class="headerlink" title="十四、有人占了中国的外蒙古,中国为什么不出兵?"></a>十四、有人占了中国的外蒙古,中国为什么不出兵?</h4><p>答:新中国接过的是国民党反动政府的破产家业,同时也承担了国民党反动政府留下的历史债务,而且也尽其可能清理得足够干净了。新中国必须为已经取得的国家利益以及能够争取到的国家权益承担全责,世人没有理由要求他们能够清偿所有的历史债务——尤其是扔下这个破产家业再也不承但任何实际责任的前朝败家子!</p><h4 id="十五、中国军队在朝鲜战争中伤亡大于美军,所以美军是胜利者!"><a href="#十五、中国军队在朝鲜战争中伤亡大于美军,所以美军是胜利者!" class="headerlink" title="十五、中国军队在朝鲜战争中伤亡大于美军,所以美军是胜利者!"></a>十五、中国军队在朝鲜战争中伤亡大于美军,所以美军是胜利者!</h4><p>答:不胜其理!即或此说前提成立,推论仍属荒谬。评价战争胜负的首要前提是战争的目的达到与否及达到的程度,而不仅仅是人头账。苏德战争德军伤亡低于苏军,是否可以认为德军是胜利者?越南战争越南军民伤亡200~300万,美军伤亡30余万,美国人是否敢说自己是胜利者?</p><p>另外,中朝军队面对的是整个“联合国军”和韩军,做算术题时忽略这些被加数,是一种难以原谅的选择性遗忘!</p><h4 id="十六、毛泽东送儿子上前线是镀金。"><a href="#十六、毛泽东送儿子上前线是镀金。" class="headerlink" title="十六、毛泽东送儿子上前线是镀金。"></a>十六、毛泽东送儿子上前线是镀金。</h4><p>答:既然这是个天大的好事儿,将来再有战争或抗洪救灾之类的好事情时,建议首先安排出此语者自己或其儿女到炮火下或洪水中去镀它一金,或烈火焚身,或洪水没顶?新中国决定出兵入朝参战时,连许多身经百战的将帅都没有把握一定胜利,新中国领导人甚至还准备应付美国军队进入中国——“就当中国革命晚胜利几年”!如果有人硬要说毛泽东此时送子上前线是去“镀金”,那不是卯足了劲儿在夸毛泽东料事如神,硬把毛泽东再往神坛上推么?</p><p>再者,如果毛泽东不送儿子上前线呢?你是否能够接受而不再赘言?你又会不会诅咒毛泽东让别人的孩子当炮灰,自家儿子在家躲清闲?横竖毛泽东都是一肚子私欲?而毛岸英就因为有毛泽东这个老子,横竖都该死,——哪怕他是为国捐躯?</p><p>这还有理可讲么?不是天赋人权么?不是上帝面前人人平等么?哪儿去啦?</p><h4 id="十七、为什么要用志愿军名义,是因为中国人胆小不敢向美国宣战!"><a href="#十七、为什么要用志愿军名义,是因为中国人胆小不敢向美国宣战!" class="headerlink" title="十七、为什么要用志愿军名义,是因为中国人胆小不敢向美国宣战!"></a>十七、为什么要用志愿军名义,是因为中国人胆小不敢向美国宣战!</h4><p>答:这是幼儿智力问题且有睁眼瞎之嫌!美利坚合众国正规军劈头盖脑挨了一顶臭揍,明知出招者乃货真价实训练有素的中国正规八路,却仍然忍气吞声不敢堂而皇之宣战接招,不光是胆小,且已气短。至于中国人为何使用志愿军名义,那是中国人民高兴中国人民愿意,中国人民乐意在没有官方名义的前提下充分表达自己的“自由意志”。</p><h4 id="十八、中国军队有苏联撑腰,胜之不武!"><a href="#十八、中国军队有苏联撑腰,胜之不武!" class="headerlink" title="十八、中国军队有苏联撑腰,胜之不武!"></a>十八、中国军队有苏联撑腰,胜之不武!</h4><p>答:中国军队将美国军队从鸭绿江赶回三八线,基本上凭的是手中的“万国牌武器”。苏式武器是运动战后期四五次战役才开始陆续装备部队,苏联空军只掩护清川江以北部分交通线,且大规模参战是在五一年夏季以后,而此时战场大格局已经奠定。</p><p>另外,国民党军队有美国家伙撑腰还有力量优势,仍然败到了台湾?是不是败之很武?</p><p>顺便说一句,美国军队有联合国旗号壮胆,却被迫与人议和,与之对等议和者还是一个根本不被联合国承认的国家,实在是和之无脸!</p><h4 id="十九、中国军队打人海战术,胜之不武!"><a href="#十九、中国军队打人海战术,胜之不武!" class="headerlink" title="十九、中国军队打人海战术,胜之不武!"></a>十九、中国军队打人海战术,胜之不武!</h4><p>答:战争是一种资源较量。各打各的资源,穷人的资源是人,富人的资源是钱──钱能买来“火海战术”。中国军队在“火海战术”下还能集中和机动优势兵力打歼灭战,是战争指导艺术高超的体现。中国军队战略上是“人海战术”,战术上是“小兵群战术”,对此,前美第八集团军司令官马克斯韦尔·泰勒将军对中国军队有极高评价。</p><h4 id="二十、在今天这个和平发展的新时代应该多讲如何避免战争,而不应津津乐道于过去的战争。"><a href="#二十、在今天这个和平发展的新时代应该多讲如何避免战争,而不应津津乐道于过去的战争。" class="headerlink" title="二十、在今天这个和平发展的新时代应该多讲如何避免战争,而不应津津乐道于过去的战争。"></a>二十、在今天这个和平发展的新时代应该多讲如何避免战争,而不应津津乐道于过去的战争。</h4><p>答:同意!所以说“好战必亡”的道理应该多讲给战争能力极其强大而自身受战争戗害极少的国家听。“忘战必倾”的道理应该多讲给战争能力不够强大且自身受战争戗害极多的国家听。具体地说,军事机器最强大而自身受战争戗害极少的美利坚合众国不应津津乐道过去的战争,而要多听听“好战必亡”的道理;军事机器不够强大且自身受战争戗害太多的中华人民共和国需要多多回顾过去的战争,且须多念念“忘战必倾”的道理。这样才有可能避免战争再起。 </p><h4 id="名词解释:"><a href="#名词解释:" class="headerlink" title="名词解释:"></a>名词解释:</h4><p>朝鲜战争:1950年6月—1953年7月,是<a href="http://baike.baidu.com/view/119146.htm" target="_blank" rel="noopener">朝鲜半岛</a>上的朝韩之间的民族内战。</p><p>抗美援朝:1950年10月—1953年7月,是中国人民支援朝鲜人民抗击美国侵略的群众性运动。</p>]]></content>
<categories>
<category> 闲话桑麻 </category>
</categories>
<tags>
<tag> 转载 </tag>
</tags>
</entry>
<entry>
<title>Talking</title>
<link href="/2018/03/25/%E8%AF%B4%E8%AF%9D/"/>
<url>/2018/03/25/%E8%AF%B4%E8%AF%9D/</url>
<content type="html"><![CDATA[<link href="/style.css" rel="stylesheet" type="text/css"><script src="/crypto-js.js"></script><script src="/mcommon.js"></script><script src="//cdn.bootcss.com/jquery/1.11.3/jquery.min.js"></script> <div id="security"> <div> <div class="input-container"> <input type="password" class="form-control" id="pass" placeholder=" Welcome to my blog, enter password to read. "/> <label for="pass"> Welcome to my blog, enter password to read. </label> <div class="bottom-line"></div> </div> </div> </div> <div id="encrypt-blog" style="display:none"> U2FsdGVkX1+hL67qqXxkahbbNZ9TspMNXZbVzHgytn+rp+qAQRnsBnngu4AD7sW85jlNXtvsPa5j00ABWIBosXv6y+c9xL0+vLYlmB79Kni6cC6/G4cK95+p3huSwUuMFkLaLXTqsiF2FbiwUzsZ/vxReZaLEiqfsGS0LqCuNV7yblx3rIfRLtlOgG8GWcZz/oMuv+dix9f0WTKWuh7M/jQJo03cDDbREkb+mGyrFjsbY7XLk7lzLDnYOA1BDh8yrHOo/fjSizcWTfOrxmhQ8qwSUEiE3ERG6JjA/yueCznldgGQ3pAbXBDeFcYEawyyXt4qK/Zfi31p4N5rVaiH3HGNutP/Hj7gbQspqfG9fc46SAGE6kt0IuLV28dJqZJXgsLwcTrqEM4ciuH67PK7G+ISec5Qw98sdbEYvyYQwGeHbAQCgJJ0/Zs064kIckxuOHiYP+haGqjNlADPUfTDCRKk9jHbmeUkU7k8Cyjm3hU8syHRlZzZEKIVWeagtdbMU6o/CXla8EmadVV9WGvKqrq+0+jlontKBMvb1Wety1O6k750ycxYyIB3OTHKAdtvFmz7JhfS4Qr4TwapCpJYNMuQy+cfSF7XD6oblpYOKhLv4YocefzNjDhge1lj8RS8ese8GDkbxjJyRsYS8kjkRQvDXcl4Y3PK36DMhCHWWzDRDnMMZ7HLP0myQAbSlxQLGGNo2eBuz4xqos/OhVv2csS5PiUESdpW20PFaLwLZmmKEedtwMOY1cQdV2FdgWeYy15UMxMJUs3CEzXYgPvxYYBnK0fiZkK9hbYyxZ48GRaILeYT3E7uXmm/xLqzqeF8viuR8dRdq02rRbCELzeEVot71Z/3yUDurvPzjkuFVS0hZdcspONR6uox3dmxmRQfOMIfLdq4kr++ugEyXg+NWE5z3p5+s74wsErMeGi4zpWLN5DVPGzc+RCEQ/e3l6tQel8eE/FNQgdWOWttS74HJMJALf5bYujMt0z3vuEhm1y/GyNkpJosnnQc4CcVRUcCuMbEnMDbi2HzmNXbHOHG9mG40918VupTybcPyyB9Q5TIEk1NSPDj1Gm/N7isqcb7d2OpF8l9dG5zcIIcc3YOmMOOvGeO49xaiowX76Rz3V80VK1u9+0TgMZWT+xxFfiDftX41hvAaH+3TgS6UelAd6zdWObUZnyvwuneckgkjbGq0Ci9ItA+P0HDVmQ+IMvrAOAc/bgv1TfavLV9UkHixjqIlJUvspjUWhMw18BFC6s3ooLpLxHSGGbNpg31QwrKLA2ED4EG6FAbw+JBagXpkf+z5KCVKsqHplN4tPV7IGGXzZWvLFK1AYN9Ectyfn/ys5tito1Tsv+llIIY2pYyuXbpsh2tLkRIy8A6UBC/lsMKHVjIBOoZBtuJsAt6dz8ZCWu3bQHeeHLS8A45zGpRuc23U1yFXawXoY3HSqMc46LkJ3g0knQmhbJd5xqZk3uklUpAvPGFAUlbSMqzRWo5xERX9l44tCTluNL4xnmW3g1oPG6TkvD9fCb5W2uYwXd95B2o19kEifK0n06qDXIPLmAgJhpsnYPCyqY2W6rg3sHIpKsJLfjbmXfioZytqa57gp+lryfbGFiUc9bOqyN+8B8ZFunz+rklKxLABoSnl/fbMjkGfD+RcDVsq32WPYy35b9Cp4lZ0yyakeJFmDluvBeIy5pKCjT12FeiaOsh0DZVzRT1r1wdDI5rbXNnTHovUL5sa8dJNHUyD/jCqbR6pmk1quvqdL5oRLB1naFIbVSInNp5gcje8yDISBqxVuV2Bi/b0paFyTJRkDNyCv0Idlg1DtybrczcUJ8W3FrvGeJmxeeyD/A2Bjv2UzBTsPuy6+31TvY+JSBdiJyRKXwMlHAB/tOuYiMgWGqA4gVBgW/XoI4vbBWCSKPRcUV20TNSXRqPqNnCacAM0kZqRoHC8Y3Hx5LL+GQUsgwD+5P2kkToyz8aBGABoIA1Kky9uxqaZ6YHCUlwFXHn3LTDpbNPrIcXjCGTAmMvL72KlSTxwJtWOtrclYcrA8bzQUcG0awO8ZmW2fryhBewBUGo8/uVGEqxRakTJVUikHhlWy0aWwhxmvGJH9JWqVkU9sATK1Yr6R6hGCdQhzQhp8HfGGzaWhqnb1AssHGeh0QNwMEf9SPaWTv/xk1m2UBT4IH8hhQ63WFXACZDYGb64SUHScqeLEeGcZwOQWs4iJAs7N2e02aJ+pU3uaVPP3OBpJPJVC8Bo2NJzKC09TwU7qdMAzg9A/6be4omP514rUwrzLlIp/OdtQNojYkY+5J686o6dw4QuScXLsWEX8d4KV28rRyg2LKHCz5sKCBs85vN3P4HpD8yYJLG8u2VcGulTiwKI/enuqaznYqYjca2D88Ty7FdrIUwEXvRv1b8Z+/xZW01TQ7v0LZZnV6d6M/gExoHjDNtP/nF1738E3yMJMxveTtMNzfRiFu/sVqYE7MY8cJEAfEK6dKkqFM1L+1sGSrp4yo/4MoC3zUOD7KCpq4zsfaNFxvs8pTkrFOBoY2Sq/94O8468mNhYIntgUF7oZCFTaKg4kxgmUqhW1j4LbarUW9ZChQP5ORzIKyn2Zout4kgHH1UtcZzsE1+VS6FKM4ykbGvOYhCrqmFzpV//msL/Q+/8SwdtLKqHU1L5l7iBUSFXE6ajo88paFFcnbEIm5WQRCozR1PKohQkTp6BphDjYxEz43c48oly4H9Pkw4wM6d18RqMATmzWyBcU9VWfWIBO3qNW+Qbmdyw3WepoSxd9yxZ+yeYJxQ8h/AWlGbej0CgqlXtnfLu2P8hhVcNyKEFPkSKIi7ElCyExxcwOa0Y3Yt6zZmrcnGoKsbP0ZwF79sGF+KTfUpHlEIL3xUgPrrg5U4a/LbsZJZWwmX03nUBxEPGaYNleluR/Ze4jMzXP6sxKcS9EyvaoTu3Wk6EVB8n+++7QE6uDfozLY4ZOUEclde3HgJGalfELWxZgCGqML1
5ZqgYroryKY4lJEOjwAmUCGRGZyXi49AuF/PFXjXhOEYssUY1gbWnrlmmujgiqO3PV7RqMl6retlVhzT5ljMgmUCNEpUMmF2dU2NKYdRydXk7ctdtixjrES4urlqaZsBAPCLrG5Foa4mmL2dnyiQfxXv4rXJzZrChfaGtW8ATxBxS5liGxsS32J4chL9Ffc8JwcuxeCO0DSE/9ZPkqxQRI9AqGOOsT15UmPdtlg7RI3QTba1Ci0Bz3wJSnRFGixzeFFXvtj6yh+QOX3jqCqrNpQ9Ke0EUM9OvGMzlCmrQ/gU0b/Opb3g486IfJkCBC7dy7X+BEvYvdRgZUroCozRv5LOLnPeGB8lLdPBvtg08EAsSNVpuMOaqznSmJZXLvbgQA44SFcJq0z12nmVMPK0kG2IQXi99sjJbsJtGwKatbOfnflsP6xsLG1ITM+3PON7fj1k43TaRopBof+Z9ZfwGsSOTLhm1O+H7cY7jJ7MSmDUslYNTZs5JIVe0q0/DkUNpvNe52jqIaEi2/uuXdsCMRQ/YaoGkFXiW6aMQ4pEyMRa06COKF66GlYwWr4AcPUDDJ68deFBq1UJAGQeUiXIf5rCmj0GKzk6NmsJ78jVSRnSYFPj61kcHD//wAhWpv2FNfsKNI0hF8jwGK3iBq7PIYAkH0HZd17fIrvSFmdsyal2B7ISZwyafu+4yhGl1qFPpwY= </div>]]></content>
<categories>
<category> 闲话桑麻 </category>
</categories>
</entry>
<entry>
<title>强制删除工具Geek Uninstaller</title>
<link href="/2018/03/22/%E5%BC%BA%E5%88%B6%E5%88%A0%E9%99%A4%E5%B7%A5%E5%85%B7Geek-Uninstaller/"/>
<url>/2018/03/22/%E5%BC%BA%E5%88%B6%E5%88%A0%E9%99%A4%E5%B7%A5%E5%85%B7Geek-Uninstaller/</url>
<content type="html"><![CDATA[<p>介绍一个强力删除windows软件的小工具,只有2M大小。<a href="https://geekuninstaller.com/" target="_blank" rel="noopener">官方网址在此</a></p><p>经测试,QQ拼音输入法还是不能完全删除,这锅得QQ来背。太流氓!!!不过这款软件还是很好用!</p><h4 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>参考资料</h4><ol><li><a href="https://zhuanlan.zhihu.com/p/31299448" target="_blank" rel="noopener">2M强力“卸载神器”,从此对流氓软件说“不”</a></li></ol><a id="more"></a>]]></content>
<categories>
<category> 软件工具 </category>
</categories>
</entry>
<entry>
<title>Jackie Chan 成龙</title>
<link href="/2018/03/19/Jackie-Chan-%E6%88%90%E9%BE%99/"/>
<url>/2018/03/19/Jackie-Chan-%E6%88%90%E9%BE%99/</url>
<content type="html"><![CDATA[<p>想开篇来聊一聊成龙。</p><p>读了这么多书,总想着写东西首先得规划怎么写,采用什么框架写,才不至于流水账形式。</p><p>对于成龙来说,看着他的电影长大,喜欢李连杰的飘逸,可惜李连杰老了,再不复当年之勇,光环慢慢退却;而成龙,还时常活跃在视线中,他还是那个能打能给人带来欢乐的影人,虽然偶尔也发现他也是个六十多岁的老头了,但对他的欣赏仍旧没有改变,反倒随着时间的推移,越来越爱。为了了解他,特地去知乎上搜了:如何评价成龙?从众网友的文字中不难发现,大家都很爱成龙,他超越了同时代的其他影人,成为了旗帜,成了龙。</p><a id="more"></a><iframe frameborder="no" border="0" marginwidth="0" marginheight="0" width="100%" height="86" src="//music.163.com/outchain/player?type=2&id=64266&auto=0&height=66"></iframe><p>这首歌,是我比较喜欢的大哥的一首歌,歌词平淡,很生活化。其实在众多华语音乐人来说,成龙的标签很明显,这跟他的从师经历有很大的关系。师从京剧名家,京剧表演已经深深烙印在他的灵魂里,无论拍戏还是唱歌,都能发现京剧对他的影响,他很好的把传统与现代结合,独树一帜。</p><!--<iframe width="480" height="320" src="https://static.hdslb.com/miniloader.swf?aid=103632&p=1" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>--><div class="video-container"><iframe src="http://player.youku.com/embed/XNjQwOTM0ODIw" frameborder="0" allowfullscreen></iframe></div><blockquote><p><del>这个B站链接有问题,可以直接点进去观看。</del>终于可以了,用swf格式,就可以内嵌播放了。</p></blockquote><p>这部电影揭示了成家班的特效制作,作为影人,成龙将成家班做到了专业化,这也是他走出去回来给自己带来的变化。</p>]]></content>
<categories>
<category> 闲话桑麻 </category>
</categories>
<tags>
<tag> 成龙 </tag>
</tags>
</entry>
<entry>
<title>Downloading Torrent Content with a File's HASH Signature</title>
<link href="/2018/03/17/%E6%95%99%E4%BD%A0%E7%94%A8%E6%96%87%E4%BB%B6HASH%E7%89%B9%E5%BE%81%E7%A0%81%E4%B8%8B%E8%BD%BDTorrent%E7%A7%8D%E5%AD%90%E6%96%87%E4%BB%B6/"/>
<url>/2018/03/17/%E6%95%99%E4%BD%A0%E7%94%A8%E6%96%87%E4%BB%B6HASH%E7%89%B9%E5%BE%81%E7%A0%81%E4%B8%8B%E8%BD%BDTorrent%E7%A7%8D%E5%AD%90%E6%96%87%E4%BB%B6/</url>
<content type="html"><![CDATA[<a id="more"></a><ol><li><p>你需要一个HASH(特征码) 比如:8242fb388f8e56a0b6b405ba369c61cfe8c5bc42</p></li><li><p>在你HASH值前加上磁力链接前缀:magnet:?xt=urn:btih:</p></li><li><p>得到这样的Magnet link(磁力链接):magnet:?xt=urn:btih:8242fb388f8e56a0b6b405ba369c61cfe8c5bc42</p></li></ol><p>简单吧,你只需要有个HASH值就可以下载任何文件!</p><p>复制到迅雷、BT等下载工具里下载吧。</p>]]></content>
<categories>
<category> 技术堆栈 </category>
</categories>
<tags>
<tag> Torrent </tag>
</tags>
</entry>
<entry>
<title>一人之下</title>
<link href="/2018/03/16/%E4%B8%80%E4%BA%BA%E4%B9%8B%E4%B8%8B/"/>
<url>/2018/03/16/%E4%B8%80%E4%BA%BA%E4%B9%8B%E4%B8%8B/</url>
<content type="html"><![CDATA[<p>国漫崛起时,最近被一部国漫《一人之下》实力圈粉,尤其喜欢剧内的各种方言配音,当然少不了各种人物角色歌曲,下面即是网易云的链接:</p><a id="more"></a><h3 id="诸葛青"><a href="#诸葛青" class="headerlink" title="诸葛青"></a>诸葛青</h3><iframe frameborder="no" border="0" marginwidth="0" marginheight="0" width="330" height="86" src="//music.163.com/outchain/player?type=2&id=537196363&auto=0&height=66"></iframe>]]></content>
<categories>
<category> 音乐之声 </category>