prototype_source/torchscript_freezing.py translation (#788)
jih0-kim committed Nov 26, 2023
1 parent 4658d55 commit 98f0344
Showing 1 changed file with 25 additions and 28 deletions.
53 changes: 25 additions & 28 deletions prototype_source/torchscript_freezing.py
@@ -1,23 +1,24 @@
"""
Model Freezing in TorchScript
=============================
Translated by: `Jiho Kim <https://github.com/jiho3004/>`_
In this tutorial, we introduce the syntax for *model freezing* in TorchScript.
Freezing is the process of inlining PyTorch module parameter and attribute
values into the TorchScript internal representation. Parameter and attribute
values are treated as final values, and they cannot be modified in the
resulting frozen module.

Basic Syntax
------------
Model freezing can be invoked using the API below:

``torch.jit.freeze(mod : ScriptModule, names : str[]) -> ScriptModule``

Note that the input module can be the result of either scripting or tracing; see the
`Introduction to TorchScript tutorial <https://tutorials.pytorch.kr/beginner/Intro_to_TorchScript_tutorial.html>`_.
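
For illustration, a minimal sketch of the call might look like this (the small
``MyModule`` class and the random input below are illustrative only and are not
part of this tutorial's example)::

    import torch

    class MyModule(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(4, 4)

        def forward(self, x):
            return self.linear(x)

    # freezing expects a module in eval mode
    scripted = torch.jit.script(MyModule().eval())  # or torch.jit.trace(...)
    frozen = torch.jit.freeze(scripted)  # parameter values are now inlined
    print(frozen(torch.randn(2, 4)))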

Next, we demonstrate how freezing works using an example:
"""

import torch, time
@@ -58,17 +59,15 @@ def version(self):

try:
    print(fnet.conv1.bias)
    # without exception handling, this prints:
    # RuntimeError: __torch__.z.___torch_mangle_3.Net does not have a field
    # with name 'conv1'
except RuntimeError:
    print("field 'conv1' is inlined. It does not exist in 'fnet'")

try:
    fnet.version()
    # without exception handling, this prints:
    # RuntimeError: __torch__.z.___torch_mangle_3.Net does not have a field
    # with name 'version'
except RuntimeError:
    print("method 'version' is not deleted in fnet. Only 'forward' is preserved")

@@ -108,27 +107,25 @@ def version(self):
print("Frozen - Inference time: {0:5.2f}".format(end-start), flush =True)

###############################################################
# On my machine, I measured the following times:
#
# * Scripted - Warm up time: 0.0107
# * Frozen - Warm up time: 0.0048
# * Scripted - Inference: 1.35
# * Frozen - Inference time: 1.17

###############################################################
# In our example, warm up time measures the first two runs. The frozen model
# is 50% faster than the scripted model. On some more complex models, we
# observed even higher speedups in warm up time. Freezing achieves this speedup
# because it does some of the work that TorchScript otherwise has to do when
# the first couple of runs are initiated.
#
# Inference time measures inference execution time after the model is warmed
# up. Although we observed significant variation in execution time, the frozen
# model is often about 15% faster than the scripted model. When the input is
# larger, we observe a smaller speedup because the execution is dominated by
# tensor operations.
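
###############################################################
# A rough sketch of how such timings can be collected is shown below. This
# helper is illustrative only and is not the measurement code used above.

def benchmark(model, x, warmup_runs=2, iters=100):
    # warm up: the first runs include the one-time work TorchScript performs
    start = time.time()
    for _ in range(warmup_runs):
        model(x)
    warmup_time = time.time() - start
    # steady-state inference time
    start = time.time()
    for _ in range(iters):
        model(x)
    inference_time = time.time() - start
    return warmup_time, inference_time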

###############################################################
# Conclusion
# -----------
# In this tutorial, we learned about model freezing. Freezing is a useful
# technique for optimizing models for inference, and it can also significantly
# reduce TorchScript warm up time.
