
Commit

fix: readthedocs build
meakbiyik committed Aug 29, 2023
1 parent ec40635 commit f3afd11
Showing 4 changed files with 9 additions and 5 deletions.
1 change: 0 additions & 1 deletion docs/source/api.rst
@@ -3,7 +3,6 @@ API documentation
 
 .. autosummary::
    :toctree: _autosummary
-   :recursive:
 
    torchcache.torchcache
    torchcache.torchcache._TorchCache
9 changes: 7 additions & 2 deletions docs/source/conf.py
@@ -5,7 +5,8 @@
 import os
 import sys
 
-sys.path.insert(0, os.path.abspath(".."))  # Source code dir relative to this file
+# Source code dir relative to this file
+sys.path.insert(0, os.path.abspath(".." + os.sep + ".."))
 
 # -- Project information -----------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
@@ -26,7 +27,11 @@
     "sphinx.ext.autosummary",
     "numpydoc",
 ]
-autosummary_generate = True  # Turn on sphinx.ext.autosummary
+if os.environ.get("READTHEDOCS") == "True":
+    autosummary_generate = False  # Turn it off on readthedocs, otherwise it will fail
+else:
+    autosummary_generate = True
 add_module_names = False
 numpydoc_show_class_members = False
 
 templates_path = ["_templates"]
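For context: Read the Docs sets the READTHEDOCS environment variable to the string "True" on its builders, which is exactly what the new branch in conf.py above checks. Below is a small, hypothetical sketch of reproducing that environment locally; the output directory `docs/_build/html` is an assumption, not taken from this repository.

    # Hypothetical local check, not part of this commit: mimic the Read the Docs
    # builder by setting READTHEDOCS before invoking Sphinx, so that conf.py
    # takes the autosummary_generate = False branch.
    import os

    from sphinx.cmd.build import build_main

    os.environ["READTHEDOCS"] = "True"  # Read the Docs sets this on its builders
    exit_code = build_main(["-b", "html", "docs/source", "docs/_build/html"])
    print("sphinx-build exit code:", exit_code)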
2 changes: 1 addition & 1 deletion docs/source/how_it_works.rst
@@ -11,7 +11,7 @@ Automatic cache management
 1. The decorated module (including its source code obtained through `inspect.getsource`) and its args/kwargs.
 2. The inputs provided to the module's forward method.
 
-This hash serves as the cache key for the forward method's output per item in a batch. When our MRU (most-recently-used) cache fills up for the given session, the system continues running the forward method and dismisses the oldest output. This MRU strategy streamlines cache invalidation, aligning with the iterative nature of neural network training, without requiring any additional record-keeping.
+This hash serves as the cache key for the forward method's output per item in a batch. When our MRU (most-recently-used) cache fills up for the given session, the system continues running the forward method and dismisses the newest output. This MRU strategy streamlines cache invalidation, aligning with the iterative nature of neural network training, without requiring any additional record-keeping.
 
 .. warning::
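To make the cached behaviour described in the changed paragraph concrete, here is a minimal, illustrative sketch of the idea: hash the module's source together with each input item to form a key, and once the cache is full, simply skip storing the newest outputs. The class and method names below are invented for illustration and are not torchcache's internal API.

    from __future__ import annotations

    import hashlib

    import torch


    class MRUStyleCache:
        """Illustrative per-item cache with a "dismiss the newest" policy."""

        def __init__(self, max_items: int) -> None:
            self.max_items = max_items
            self._store: dict[bytes, torch.Tensor] = {}

        def key(self, module_source: str, item: torch.Tensor) -> bytes:
            # Hash the module definition together with one input item of the batch.
            digest = hashlib.blake2b(digest_size=16)
            digest.update(module_source.encode())
            digest.update(item.detach().cpu().numpy().tobytes())
            return digest.digest()

        def get(self, key: bytes) -> torch.Tensor | None:
            return self._store.get(key)

        def put(self, key: bytes, value: torch.Tensor) -> None:
            # Once the cache is full, the newest outputs are simply not stored,
            # so no eviction bookkeeping is needed.
            if len(self._store) < self.max_items:
                self._store[key] = value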
2 changes: 1 addition & 1 deletion torchcache/torchcache.py
@@ -41,7 +41,7 @@ class CachedModule(nn.Module):
     .. code-block:: python
 
-        @torchcache(peristent=True)
+        @torchcache(persistent=True)
         class CachedModule(nn.Module):
             def __init__(self, cache_dir: str | Path):
                 self.torchcache_persistent_cache_dir = cache_dir
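Because the docstring example above is only an excerpt, here is a hedged, more complete usage sketch. The `persistent=True` argument and the `torchcache_persistent_cache_dir` attribute come from the diff above; the import path, module body, layer sizes, and cache directory are assumptions for illustration.

    from __future__ import annotations

    from pathlib import Path

    import torch
    import torch.nn as nn

    from torchcache import torchcache  # assumed public import path


    @torchcache(persistent=True)
    class CachedBackbone(nn.Module):
        def __init__(self, cache_dir: str | Path) -> None:
            super().__init__()
            # Attribute read by torchcache to locate the on-disk cache (see diff above).
            self.torchcache_persistent_cache_dir = cache_dir
            self.proj = nn.Linear(128, 64)  # illustrative layer

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # The per-item outputs of this forward pass are what gets cached.
            return self.proj(x)


    model = CachedBackbone(cache_dir=Path("/tmp/torchcache"))
    features = model(torch.randn(8, 128))  # a repeated call with the same inputs is served from the cache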
