From bd9a25d70f5c5f4527043f5a9eecf85f92cbb137 Mon Sep 17 00:00:00 2001
From: "M. Eren Akbiyik"
Date: Mon, 25 Sep 2023 13:29:04 +0300
Subject: [PATCH] doc: clarify the use case

---
 README.md             | 2 ++
 docs/source/index.rst | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/README.md b/README.md
index 9f987c8..fe414cf 100644
--- a/README.md
+++ b/README.md
@@ -4,6 +4,8 @@
 
 Effortlessly cache PyTorch module outputs on-the-fly with `torchcache`.
 
+Particularly useful for caching and serving the outputs of large, computationally expensive pre-trained PyTorch modules, such as vision transformers. Note that gradients will not flow through the cached outputs.
+
 - [Features](#features)
 - [Installation](#installation)
 - [Basic usage](#basic-usage)
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 7b11961..dc1fd47 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -5,6 +5,8 @@ Welcome to torchcache!
 
 `torchcache` offers an effortless way to cache PyTorch module outputs on-the-fly. By caching the outputs of a module, you can save time and resources when running the same pre-trained model on the same inputs multiple times.
 
+Note that gradients will not flow through the cached outputs.
+
 .. toctree::
    :maxdepth: 2
    :caption: Table of Contents:
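
For reference, a minimal sketch of the use case this patch documents, assuming the decorator-based API shown in the torchcache README; the `persistent` argument used below is an assumption, not something confirmed by this patch.

    # A minimal sketch of the use case documented above, assuming the
    # decorator-based torchcache API; `persistent=True` is an assumed flag.
    import torch
    import torch.nn as nn

    from torchcache import torchcache


    @torchcache(persistent=True)  # assumed: persist cached outputs across runs
    class ExpensiveBackbone(nn.Module):
        """Stand-in for a large, pre-trained module such as a vision transformer."""

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Placeholder for an expensive forward pass.
            return x * 2


    backbone = ExpensiveBackbone().eval()
    x = torch.ones(8, 3)

    with torch.no_grad():  # cached outputs carry no gradients anyway
        y1 = backbone(x)   # first call: computed and cached
        y2 = backbone(x)   # repeated call: served from the cache

Wrapping the calls in torch.no_grad() makes the no-gradient behavior explicit, since the cached outputs are detached from any computation graph either way.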