Translate beginner_source/deep_learning_nlp_tutorial.rst (#887)
* Translate beginner_source/deep_learning_nlp_tutorial.rst
oh5221 authored Oct 15, 2024
1 parent 2f5c892 commit 9cb5af4
Showing 1 changed file with 30 additions and 29 deletions.
59 changes: 30 additions & 29 deletions beginner_source/deep_learning_nlp_tutorial.rst
@@ -1,28 +1,28 @@
Deep Learning for NLP with Pytorch
Deep Learning for NLP with PyTorch
**********************************
**Author**: `Robert Guthrie <https://github.com/rguthrie3/DeepLearningForNLPInPytorch>`_

This tutorial will walk you through the key ideas of deep learning
programming using Pytorch. Many of the concepts (such as the computation
graph abstraction and autograd) are not unique to Pytorch and are
relevant to any deep learning toolkit out there.

I am writing this tutorial to focus specifically on NLP for people who
have never written code in any deep learning framework (e.g, TensorFlow,
Theano, Keras, Dynet). It assumes working knowledge of core NLP
problems: part-of-speech tagging, language modeling, etc. It also
assumes familiarity with neural networks at the level of an intro AI
class (such as one from the Russel and Norvig book). Usually, these
courses cover the basic backpropagation algorithm on feed-forward neural
networks, and make the point that they are chains of compositions of
linearities and non-linearities. This tutorial aims to get you started
writing deep learning code, given you have this prerequisite knowledge.

Note this is about *models*, not data. For all of the models, I just
create a few test examples with small dimensionality so you can see how
the weights change as it trains. If you have some real data you want to
try, you should be able to rip out any of the models from this notebook
and use them on it.
**μ €μž**: `Robert Guthrie <https://github.com/rguthrie3/DeepLearningForNLPInPytorch>`_
**λ²ˆμ—­**: `μ˜€μˆ˜μ—° <github.com/oh5221>`_

이 νŠœν† λ¦¬μ–Όμ€ PyTorchλ₯Ό μ‚¬μš©ν•œ λ”₯λŸ¬λ‹ ν”„λ‘œκ·Έλž¨μ˜ μ£Όμš” 아이디어에 λŒ€ν•΄
μ°¨κ·Όμ°¨κ·Ό μ‚΄νŽ΄λ³Ό κ²ƒμž…λ‹ˆλ‹€. λ§Žμ€ κ°œλ…λ“€(계산 κ·Έλž˜ν”„ 좔상화 및
autograd)은 PyTorchμ—μ„œλ§Œ μ œκ³΅ν•˜λŠ” 것이 μ•„λ‹ˆλ©°, 이미 곡개된
λ”₯λŸ¬λ‹ toolkitκ³Ό 관련이 μžˆμŠ΅λ‹ˆλ‹€.

이 νŠœν† λ¦¬μ–Όμ€ λ”₯λŸ¬λ‹ ν”„λ ˆμž„μ›Œν¬(예: Tensorflow, Theano, Keras,
Dynet)μ—μ„œ μ–΄λ–€ μ½”λ“œλ„ μž‘μ„±ν•΄ λ³Έ 적이 μ—†λŠ” μ‚¬λžŒλ“€μ„
μœ„ν•œ NLP에 νŠΉλ³„νžˆ μ΄ˆμ μ„ λ§žμΆ”μ–΄ μž‘μ„±ν•˜μ˜€μŠ΅λ‹ˆλ‹€. νŠœν† λ¦¬μ–Όμ„ μœ„ν•΄ NLP λΆ„μ•Όμ˜
핡심 λ¬Έμ œμ— λŒ€ν•œ 싀무 기초 지식이 ν•„μš”ν•©λ‹ˆλ‹€. μ˜ˆμ‹œ: ν’ˆμ‚¬ νƒœκΉ…, μ–Έμ–΄ λͺ¨λΈλ§ λ“±. λ˜ν•œ
AI μž…λ¬Έ μˆ˜μ—… μˆ˜μ€€ (Russelκ³Ό Norvig 책에 λ‚˜μ˜€λŠ” 것 같은) 신경망 μΉœμˆ™λ„κ°€ ν•„μš”ν•©λ‹ˆλ‹€. 일반적으둜,
feed-forward 신경망에 λŒ€ν•œ 기본적인 μ—­μ „νŒŒ μ•Œκ³ λ¦¬μ¦˜μ„
닀루고, μ„ ν˜•μ„±κ³Ό λΉ„μ„ ν˜•μ„±μ˜ 연쇄적인 κ΅¬μ„±μ΄λΌλŠ” 점을
κ°•μ‘°ν•©λ‹ˆλ‹€. 이 νŠœν† λ¦¬μ–Όμ€ 이런 ν•„μˆ˜μ μΈ 지식이 μžˆλŠ” μƒνƒœμ—μ„œ
λ”₯λŸ¬λ‹ μ½”λ“œ μž‘μ„±μ„ μ‹œμž‘ν•˜λŠ” 것을 λͺ©ν‘œλ‘œ ν•©λ‹ˆλ‹€.

이 νŠœν† λ¦¬μ–Όμ΄ 데이터가 μ•„λ‹ˆλΌ *λͺ¨λΈ* 에 κ΄€ν•œ κ²ƒμž„μ— μ£Όμ˜ν•΄μ•Ό ν•©λ‹ˆλ‹€. λͺ¨λ“ 
λͺ¨λΈμ— μžˆμ–΄, 단지 μž‘μ€ 차원을 가진 λͺ‡ 가지 μ˜ˆμ œλ§Œμ„ λ§Œλ“€μ–΄ ν›ˆλ ¨ μ‹œ
κ°€μ€‘μΉ˜ λ³€ν™”λ₯Ό λ³Ό 수 있게 ν•©λ‹ˆλ‹€. λ§Œμ•½ μ‹€μ œ 데이터λ₯Ό κ°–κ³  μžˆλ‹€λ©΄,
이 λ…ΈνŠΈλΆμ˜ λͺ¨λΈ 쀑 ν•˜λ‚˜λ₯Ό κ°€μ Έλ‹€κ°€
μ‚¬μš©ν•΄ λ³Ό 수 μžˆμ„ κ²ƒμž…λ‹ˆλ‹€.


.. toctree::
@@ -36,19 +36,20 @@ and use them on it.


.. galleryitem:: /beginner/nlp/pytorch_tutorial.py
:intro: All of deep learning is computations on tensors, which are generalizations of a matrix that can be
:intro: All of deep learning is computation on tensors, which are a generalization of matrices.
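
For instance, a minimal sketch of that idea, assuming only the standard ``torch`` API (the variable names are illustrative, not from the tutorial file):

.. code-block:: python

    import torch

    # A vector, a matrix, and a 3-D tensor: tensors generalize
    # matrices to an arbitrary number of axes.
    v = torch.tensor([1.0, 2.0, 3.0])   # shape (3,)
    M = torch.randn(2, 3)               # shape (2, 3)
    T = torch.randn(2, 3, 4)            # shape (2, 3, 4)

    # Deep learning computations compose tensor operations like these.
    print(torch.matmul(M, v))  # matrix-vector product -> shape (2,)
    print(T.sum(dim=-1))       # reduce the last axis -> shape (2, 3)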

.. galleryitem:: /beginner/nlp/deep_learning_tutorial.py
:intro: Deep learning consists of composing linearities with non-linearities in clever ways. The introduction of non-linearities allows
:intro: Deep learning consists of composing linearities with non-linearities in clever ways. The introduction of non-linearities allows
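
As a rough sketch of that composition, using standard ``torch.nn`` building blocks (the layer sizes here are made up for illustration):

.. code-block:: python

    import torch
    import torch.nn as nn

    # Two affine maps with a non-linearity between them. Without the
    # ReLU, the two Linear layers would collapse into one linear map.
    net = nn.Sequential(
        nn.Linear(10, 5),
        nn.ReLU(),
        nn.Linear(5, 2),
    )
    print(net(torch.randn(3, 10)).shape)  # batch of 3 -> torch.Size([3, 2])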

.. galleryitem:: /beginner/nlp/word_embeddings_tutorial.py
:intro: Word embeddings are dense vectors of real numbers, one per word in your vocabulary. In NLP, it is almost always the case that your features are
:intro: Word embeddings are dense vectors of real numbers, one per word in your vocabulary. In NLP, it is almost always the case that your features are
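
A minimal lookup sketch, assuming ``torch.nn.Embedding`` and a toy two-word vocabulary (the words and dimensions are illustrative only):

.. code-block:: python

    import torch
    import torch.nn as nn

    # One dense 4-dimensional vector per word in the vocabulary.
    word_to_ix = {"hello": 0, "world": 1}
    embeds = nn.Embedding(num_embeddings=2, embedding_dim=4)

    idx = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
    print(embeds(idx))  # shape (1, 4): the (learned) vector for "hello"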

.. galleryitem:: /beginner/nlp/sequence_models_tutorial.py
:intro: At this point, we have seen various feed-forward networks. That is, there is no state maintained by the network at all.
:intro: At this point, we have seen various feed-forward networks. That is, there is no state maintained by the network at all.
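
By contrast, a recurrent layer does keep state; a small sketch with ``torch.nn.LSTM``, with shapes chosen arbitrarily for illustration:

.. code-block:: python

    import torch
    import torch.nn as nn

    # An LSTM threads a (hidden, cell) state through the sequence,
    # unlike a feed-forward network, which keeps no state at all.
    lstm = nn.LSTM(input_size=3, hidden_size=3)

    seq = torch.randn(5, 1, 3)           # (seq_len, batch, features)
    out, (h, c) = lstm(seq)
    print(out.shape, h.shape, c.shape)   # (5,1,3), (1,1,3), (1,1,3)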


.. galleryitem:: /beginner/nlp/advanced_tutorial.py
:intro: Dynamic versus Static Deep Learning Toolkits. Pytorch is a *dynamic* neural network kit.
:intro: Dynamic versus static deep learning toolkits. PyTorch is a *dynamic* neural network kit.
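
"Dynamic" here means the graph is rebuilt on every forward pass, so ordinary Python control flow can depend on the data itself. A hypothetical sketch (this module and its looping rule are invented for illustration):

.. code-block:: python

    import torch
    import torch.nn as nn

    class DynamicNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(4, 4)

        def forward(self, x):
            # The number of layer applications depends on the input
            # itself -- legal because the graph is built on the fly.
            for _ in range(int(x.abs().sum()) % 3 + 1):
                x = torch.relu(self.layer(x))
            return x

    print(DynamicNet()(torch.randn(2, 4)).shape)  # torch.Size([2, 4])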


.. raw:: html