Commit e75c8a4 ("update", parent b8a4549)

4 files changed: 35 additions & 15 deletions


docs/AI/AI深度学习.md

Lines changed: 7 additions & 4 deletions

@@ -9,17 +9,20 @@ docker run --gpus all --shm-size 32g -p 30000:30000 -v ~/.cache/huggingface:/roo
 ```

 ```bash
-Vector Search
+Vector Search
 vector databases — academic research and productization
 mongodb

 # what is deep search

-embedding — end-to-end and workflow
-transform
+embedding — end-to-end and workflow

+three frameworks
+transformer, tensorflow and mindspore

 paddle paddle ocr (Baidu)

 ```
+| | | |

-<span id="busuanzi_container_page_pv">This article has been viewed <span id="busuanzi_value_page_pv"></span> times</span>
+
+
+<span id="busuanzi_container_page_pv">This article has been viewed <span id="busuanzi_value_page_pv"></span> times</span>
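The vector-search notes above can be made concrete with a minimal sketch: rank stored embeddings by cosine similarity to a query embedding. The document ids and toy vectors below are made up for illustration; real embeddings would come from a model, and storage from a vector database such as the mongodb mentioned in the notes.

```python
# Minimal vector-search sketch: rank documents by cosine similarity
# to a query embedding. Vectors and ids here are illustrative only.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, index, top_k=2):
    """Return the top_k (doc_id, score) pairs, best match first."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top_k]

# toy "index": doc id -> embedding
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
results = search([1.0, 0.05, 0.0], index)
```

A production system would replace the linear scan in `search` with an approximate nearest-neighbor index, but the ranking criterion is the same.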

docs/English/六级.md

Lines changed: 21 additions & 0 deletions

@@ -3,3 +3,24 @@
 batch — batch processing

 batch size — size
+
+# arxiv abstract
+
+> 1
+
+Large Language Models (LLMs) are increasingly used for various tasks with graph structures. Though LLMs can process graph information in a textual format, they overlook the rich vision modality, which is an intuitive way for humans to comprehend structural information and conduct general graph reasoning. The potential benefits and capabilities of representing graph structures as visual images (i.e., visual graphs) are still unexplored. To fill the gap, we innovatively propose an end-to-end framework, called Graph to vIsual and Textual IntegrAtion (GITA), which firstly incorporates visual graphs into general graph reasoning. Besides, we establish the Graph-based Vision-Language Question Answering (GVLQA) dataset from existing graph data, which is the first vision-language dataset for general graph reasoning purposes. Extensive experiments on the GVLQA dataset and five real-world datasets show that GITA outperforms mainstream LLMs in terms of general graph reasoning capabilities. Moreover, we highlight the effectiveness of layout augmentation on visual graphs and of pretraining on the GVLQA dataset.
+
+```
+Translation: Large language models (LLMs) are increasingly applied to tasks involving graph structures. Although LLMs can process graph information in textual form, they ignore the rich visual modality, an intuitive way for humans to understand structural information and perform general graph reasoning. The potential of representing graph structures as visual images (visual graphs) remains unexplored. To fill this gap, the authors propose an end-to-end framework, Graph to vIsual and Textual IntegrAtion (GITA), the first to incorporate visual graphs into general graph reasoning, and build the Graph-based Vision-Language Question Answering (GVLQA) dataset from existing graph data, the first vision-language dataset for general graph reasoning. Experiments on GVLQA and five real-world datasets show GITA outperforming mainstream LLMs in general graph reasoning; layout augmentation on visual graphs and pretraining on GVLQA are both shown to be effective.
+```
+
+
+> 2
+
+We propose a new variant of the Adam optimizer called MicroAdam that specifically minimizes memory overheads, while maintaining theoretical convergence guarantees. We achieve this by compressing the gradient information before it is fed into the optimizer state, thereby reducing its memory footprint significantly. We control the resulting compression error via a novel instance of the classical *error feedback* mechanism from distributed optimization in which *the error correction information is itself compressed* to allow for practical memory gains. We prove that the resulting approach maintains theoretical convergence guarantees competitive to those of AMSGrad, while providing good practical performance. Specifically, we show that MicroAdam can be implemented efficiently on GPUs: on both million-scale (BERT) and billion-scale (LLaMA) models, MicroAdam provides practical convergence competitive to that of the uncompressed Adam baseline, with lower memory usage and similar running time. Our code is available at https://github.com/IST-DASLab/MicroAdam.
+
+```
+Translation: We propose MicroAdam, a new variant of the Adam optimizer that specifically minimizes memory overhead while maintaining theoretical convergence guarantees. The gradient information is compressed before it enters the optimizer state, significantly reducing the memory footprint, and the resulting compression error is controlled by a novel instance of the classical error-feedback mechanism from distributed optimization, in which the error-correction information is itself compressed to allow practical memory gains. The approach maintains convergence guarantees competitive with those of AMSGrad while performing well in practice: on both million-scale (BERT) and billion-scale (LLaMA) models, MicroAdam converges comparably to the uncompressed Adam baseline, with lower memory usage and similar running time. Code: https://github.com/IST-DASLab/MicroAdam.
+```
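The error-feedback idea in the MicroAdam abstract can be sketched in a few lines. This is a hedged toy illustration of gradient compression with error feedback, not the paper's implementation: sparsify the gradient with top-k before the optimizer sees it, and carry the discarded mass forward in an error buffer so that no information is permanently lost.

```python
# Toy error-feedback sketch (illustrative, not the MicroAdam code):
# compress gradients by top-k sparsification, buffer what was dropped.

def topk_compress(vec, k):
    """Keep the k largest-magnitude entries, zero out the rest."""
    keep = sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k]
    out = [0.0] * len(vec)
    for i in keep:
        out[i] = vec[i]
    return out

def compress_with_error_feedback(grad, error, k):
    """Add the carried error, compress, and buffer what was dropped."""
    corrected = [g + e for g, e in zip(grad, error)]
    compressed = topk_compress(corrected, k)
    new_error = [c - q for c, q in zip(corrected, compressed)]
    return compressed, new_error

error = [0.0] * 4
g1 = [0.5, -0.1, 0.05, 0.3]          # made-up gradient for illustration
c1, error = compress_with_error_feedback(g1, error, k=2)
# the two small entries are deferred to the error buffer, not lost
```

On the next step the buffered error is added back before compressing again, so a coordinate that is persistently small eventually accumulates enough magnitude to be transmitted.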

docs/algorithm/algorithm_path.md

Lines changed: 0 additions & 4 deletions

@@ -4,8 +4,4 @@

 LeetCode tutorial notes <[Preface | LeetCode Cookbook](https://books.halfrost.com/leetcode/)>

-
-
-
-
 <span id="busuanzi_container_page_pv">This article has been viewed <span id="busuanzi_value_page_pv"></span> times</span>

docs/program_language/python/python小组作业/AI组第四次作业.md

Lines changed: 7 additions & 7 deletions
@@ -47,7 +47,7 @@ print(my_list[6]) # list out of range
 # <class 'list'>
 ```

-0 1 2 3 4
+0 1 2 3 4

 Subscript index: the element at the corresponding relative position

@@ -114,8 +114,8 @@ for element in mylist:
 print(f"iterating: {element}", end=" ")

 """
-iterating: hello iterating: python iterating: test
-iterating: hello iterating: python iterating: test
+iterating: hello iterating: python iterating: test
+iterating: hello iterating: python iterating: test
 """
 ```

@@ -141,7 +141,7 @@ find_double()

 tuple: a read-only list, to prevent tampering

-* use ( )
+* use ( )

 * ```python
   t1 = (1, "hello", "name")
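The read-only behavior noted above can be checked directly: assigning to a tuple element raises `TypeError`. A small sketch using the `t1` tuple from the notes:

```python
# tuples are read-only: item assignment raises TypeError
t1 = (1, "hello", "name")
try:
    t1[0] = 99              # attempt to modify a tuple element
    modified = True         # never reached: the line above raises
except TypeError:
    modified = False        # the tuple rejected the assignment
```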
@@ -374,7 +374,7 @@ print(score_dict["张三"]["Chinese"])
 ##### Common dictionary operations

 ```python
-score_dict["张三"]["English"] = 100
+score_dict["张三"]["English"] = 100
 score = score_dict.pop("张三")
 print(f"updated dict: {score_dict}; all of 张三's scores: {score}")
 ```
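A runnable sketch of the nested-update-then-pop pattern above. The initial score values are assumed for illustration; the key `"张三"` is taken from the notes:

```python
# update a nested value, then pop the whole entry (keeping its value)
score_dict = {"张三": {"Chinese": 90, "Math": 80}}
score_dict["张三"]["English"] = 100   # add/update a nested value
score = score_dict.pop("张三")        # removes the key, returns its value
```

`pop` both removes the key from the outer dict and hands back the inner dict, which is why `score` still holds all three subject scores afterward.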
@@ -390,7 +390,7 @@ dict_keys(['lili', 'libai', 'xiaoming'])
 for key in keys:
     print(f"the key in the dict is: {key}")
     print(f"the value in the dict is: {my_dict[key]}")
-
+
 for key in my_dict:
     print(f"the key in the dict is: {key}")
     print(f"the value in the dict is: {my_dict[key]}")
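Both loops above visit the same keys, since iterating a dict directly is equivalent to iterating `.keys()`. A sketch with the keys from the notes (the values are assumed): `.items()` yields each key and value together, avoiding the second lookup `my_dict[key]`:

```python
# iterate key and value in one step with .items()
my_dict = {"lili": 90, "libai": 85, "xiaoming": 70}
pairs = []
for key, value in my_dict.items():
    pairs.append((key, value))
```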
@@ -422,4 +422,4 @@ list tuple str set dict

 Assignment 4, Problem 5

-![image-20241129145744254](C:\Users\han\AppData\Roaming\Typora\typora-user-images\image-20241129145744254.png)
+![image-20241129145744254](C:\Users\han\AppData\Roaming\Typora\typora-user-images\image-20241129145744254.png)
