Commit

Modularized tile_inference and Updated Documentation for 1.1.3 Release (#83)

* Modularized `tile_inference` for all layer types.
* Added CPU and GPU bindings for `tiled_inference`.
* Updated ReadTheDocs documentation.
coreylammie committed Aug 10, 2021
1 parent 305eb84 commit bb8836e
Showing 36 changed files with 864 additions and 488 deletions.
29 changes: 14 additions & 15 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -1,23 +1,22 @@
## Added

1. C++ and CUDA bindings for `memtorch.bh.crossbar.Tile.tile_matmul`.

Using an NVIDIA GeForce GTX 1080, a tile shape of (25, 25), and two tensors of size (500, 500), the runtime of `tile_matmul` without quantization support is reduced by 2.45x and 5.48x, for CPU-bound and GPU-bound operation, respectively. With an ADC resolution of 4 bits and an overflow rate of 0.0, the runtime of `tile_matmul` with quantization support is reduced by 2.30x and 105.27x, for CPU-bound and GPU-bound operation, respectively.

| Implementation | Runtime Without Quantization Support (s) | Runtime With Quantization Support (s) |
| ---------------------- | ---------------------------------------- | ------------------------------------- |
| Pure Python (Previous) | 6.917784 | 27.099764 |
| C++ (CPU-bound) | 2.822265 | 11.736974 |
| CUDA (GPU-bound) | 1.262861 | 0.2574267 |
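For reference, the tiling scheme behind `tile_matmul` can be sketched in plain NumPy. This is an illustrative sketch only — MemTorch's bindings operate on crossbar tiles in C++/CUDA, and `tiled_matmul` here is a local name, not a MemTorch API:

```python
import numpy as np

def tiled_matmul(a, b, tile_shape=(25, 25)):
    """Multiply a @ b by accumulating partial products of sub-tiles,
    mirroring how a large matrix is mapped onto fixed-size crossbar tiles."""
    (m, k), (_, n) = a.shape, b.shape
    th, tw = tile_shape
    out = np.zeros((m, n))
    for i in range(0, m, th):          # tile rows of a / rows of out
        for j in range(0, n, tw):      # tile columns of b / columns of out
            for p in range(0, k, tw):  # shared (inner) dimension
                out[i:i + th, j:j + tw] += a[i:i + th, p:p + tw] @ b[p:p + tw, j:j + tw]
    return out
```

NumPy handles ragged edge tiles automatically via slicing, so the matrix dimensions need not be multiples of the tile shape.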

3. `Eigen` integration with C++ and CUDA bindings.
4. Additional unit tests.
1. Added another version of the Data Driven Model defined using `memtorch.bh.memristor.Data_Driven2021`.
2. Added CPU- and GPU-bound C++ bindings for `gen_tiles`.
3. Exposed `use_bindings`.
4. Added unit tests for `use_bindings`.
5. Added `exemptAssignees` tag to `scale.yml`.
6. Created `memtorch.map.Input` to encapsulate customizable input scaling methods.
7. Added the `force_scale` input argument to the default scaling method to specify whether inputs are force scaled if they do not exceed `max_input_voltage`.
8. Added CPU and GPU bindings for `tiled_inference`.

## Enhanced

1. Modularized C++ and CUDA `quantize` bindings.
2. Enhanced functionality of `naive_program` and added additional input arguments to dictate logic for stuck devices.
1. Modularized input scaling logic for all layer types.
2. Modularized `tile_inference` for all layer types.
3. Updated ReadTheDocs documentation.

## Fixed

1. Removed debugging code from `naive_program`.
1. Fixed GitHub Action Workflows for external pull requests.
2. Fixed error raised by `memtorch.map.Parameter` when `p_l` is defined.
3. Fixed semantic error in `memtorch.cpp.gen_tiles`.
4 changes: 3 additions & 1 deletion docs/conf.py
@@ -21,7 +21,7 @@
author = "Corey Lammie"

# The full version, including alpha/beta/rc tags
release = "1.1.2"
release = "1.1.3"
autodoc_inherit_docstrings = False

# -- General configuration ---------------------------------------------------
Expand Down Expand Up @@ -72,3 +72,5 @@
html_css_files = [
"my_theme.css",
]

pygments_style = "autumn"
13 changes: 11 additions & 2 deletions docs/index.rst
@@ -5,12 +5,21 @@
:github_url: https://github.com/coreylammie/MemTorch

MemTorch documentation
MemTorch
====================================
`MemTorch <https://github.com/coreylammie/MemTorch>`_ is a simulation framework for memristive deep learning systems that integrates directly with the well-known PyTorch Machine Learning (ML) library.
MemTorch is formally described in *MemTorch: An Open-source Simulation Framework for Memristive Deep Learning Systems*, which is openly accessible `here <https://arxiv.org/abs/2004.10971>`_.

.. image:: https://raw.githubusercontent.com/coreylammie/MemTorch/master/overview.svg?raw=True

The best place to get started is `here <https://colab.research.google.com/github/coreylammie/MemTorch/blob/master/memtorch/examples/Tutorial.ipynb>`__.

Documentation
====================================
We provide documentation in the form of a complete Python API reference and numerous interactive tutorials. In addition, a Gitter chatroom is available for discussions:

.. toctree::
:maxdepth: 4
:maxdepth: 3

memtorch
tutorials
13 changes: 12 additions & 1 deletion docs/memtorch.bh.memristor.rst
@@ -1,6 +1,6 @@
memtorch.bh.memristor
=====================
Submodule containing various behavioral memristor models, that extend :ref:`base-class-label`.
Submodule containing various behavioral memristor models that extend :class:`memtorch.bh.memristor.Memristor`.

.. automodule:: memtorch.bh.memristor.window
:members:
@@ -9,12 +9,15 @@ Submodule containing various behavioral memristor models, that extend :ref:`base-class-label`

memtorch.bh.memristor.Memristor
-------------------------------
Base class used to model memristive device behavior.

.. automodule:: memtorch.bh.memristor.Memristor
:members:
:undoc-members:
:show-inheritance:

Currently supported memristor models are listed below:

memtorch.bh.memristor.LinearIonDrift
------------------------------------

@@ -39,6 +42,14 @@ memtorch.bh.memristor.Data_Driven
:undoc-members:
:show-inheritance:

memtorch.bh.memristor.Data_Driven2021
-------------------------------------

.. automodule:: memtorch.bh.memristor.Data_Driven2021
:members:
:undoc-members:
:show-inheritance:

memtorch.bh.memristor.Stanford_PKU
----------------------------------

57 changes: 51 additions & 6 deletions docs/memtorch.bh.nonideality.rst
@@ -1,12 +1,40 @@
memtorch.bh.nonideality
=======================
Submodule containing various models, which can be used to introduce various non-ideal device characteristics using `memtorch.bh.nonideality.NonIdeality.apply_nonidealities`.

.. toctree::
memtorch.bh.nonideality.endurance_retention_models
Submodule containing models that can be used to introduce various non-ideal device characteristics using :class:`memtorch.bh.nonideality.NonIdeality.apply_nonidealities`.

memtorch.bh.nonideality.NonIdeality
-----------------------------------
Class used to introduce/model non-ideal device and circuit characteristics. :class:`patched_model.apply_nonidealities` is commonly used to introduce such characteristics, as demonstrated by the following example:

.. code-block:: python

    import copy
    import torch
    import memtorch
    from memtorch.mn.Module import patch_model
    from memtorch.map.Parameter import naive_map
    from memtorch.map.Input import naive_scale
    from Net import Net  # user-defined torch.nn.Module

    model = Net()
    reference_memristor = memtorch.bh.memristor.VTEAM
    patched_model = patch_model(copy.deepcopy(model),
                                memristor_model=reference_memristor,
                                memristor_model_params={},
                                module_parameters_to_patch=[torch.nn.Linear, torch.nn.Conv2d],
                                mapping_routine=naive_map,
                                transistor=True,
                                programming_routine=None,
                                tile_shape=(128, 128),
                                max_input_voltage=0.3,
                                scaling_routine=naive_scale,
                                ADC_resolution=8,
                                ADC_overflow_rate=0.,
                                quant_method='linear')
    # Example usage of memtorch.bh.nonideality.NonIdeality.DeviceFaults
    patched_model = patched_model.apply_nonidealities(patched_model,
                                                      non_idealities=[memtorch.bh.nonideality.NonIdeality.DeviceFaults],
                                                      lrs_proportion=0.25,
                                                      hrs_proportion=0.10,
                                                      electroform_proportion=0)

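The idea behind `DeviceFaults` can be illustrated with a plain-NumPy sketch, in which a proportion of devices is pinned to its LRS/HRS conductance. This is a hedged sketch under assumed semantics — `apply_device_faults` is a local helper, not MemTorch's implementation:

```python
import numpy as np

def apply_device_faults(g, g_on, g_off, lrs_proportion, hrs_proportion, rng):
    """Randomly pin a proportion of devices to their LRS/HRS conductances."""
    faults = rng.random(g.shape)
    g = g.copy()
    g[faults < lrs_proportion] = g_on  # stuck at the low-resistance state
    hrs_mask = (faults >= lrs_proportion) & (faults < lrs_proportion + hrs_proportion)
    g[hrs_mask] = g_off                # stuck at the high-resistance state
    return g
```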
.. automodule:: memtorch.bh.nonideality.NonIdeality
:members:
@@ -15,7 +43,6 @@ memtorch.bh.nonideality.NonIdeality

memtorch.bh.nonideality.FiniteConductanceStates
-----------------------------------------------
Used to model a finite number of conductance states.

.. automodule:: memtorch.bh.nonideality.FiniteConductanceStates
:members:
@@ -24,6 +51,7 @@ Used to model a finite number of conductance states.

memtorch.bh.nonideality.DeviceFaults
------------------------------------
Methods used to model device faults.

.. automodule:: memtorch.bh.nonideality.DeviceFaults
:members:
@@ -52,4 +80,21 @@ memtorch.bh.nonideality.Retention
.. automodule:: memtorch.bh.nonideality.Retention
:members:
:undoc-members:
:show-inheritance:
:show-inheritance:

For both :class:`memtorch.bh.nonideality.Endurance` and :class:`memtorch.bh.nonideality.Retention`, the following internal endurance and retention models are natively supported:

memtorch.bh.nonideality.endurance_retention_models.conductance_drift
--------------------------------------------------------------------
.. automodule:: memtorch.bh.nonideality.endurance_retention_models.conductance_drift
:members:
:undoc-members:
:show-inheritance:
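Conductance drift over time is commonly described by a power law. The following is a minimal sketch of that assumed form, with illustrative default parameters — see the module documented above for MemTorch's exact model:

```python
def conductance_drift(g_0, t, t_0=1.0, nu=0.1):
    """Power-law drift: G(t) = G0 * (t / t0) ** (-nu), so conductance
    decays gradually for nu > 0 and stays constant for nu = 0."""
    return g_0 * (t / t_0) ** (-nu)
```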

memtorch.bh.nonideality.endurance_retention_models.empirical_metal_oxide_RRAM
-----------------------------------------------------------------------------
.. automodule:: memtorch.bh.nonideality.endurance_retention_models.empirical_metal_oxide_RRAM
:members:
:undoc-members:
:show-inheritance:

61 changes: 46 additions & 15 deletions docs/memtorch.bh.rst
@@ -1,41 +1,60 @@
memtorch.bh
===========
Submodule containing various behavioral models.
Submodule containing various memristive device behavioral models and methods to simulate non-ideal device and circuit behavior.

.. toctree::
memtorch.bh.memristor
memtorch.bh.nonideality
memtorch.bh.memristor
---------------------
All memristor models and window functions are encapsulated and documented in :doc:`memtorch.bh.memristor <../memtorch.bh.memristor>`.

memtorch.bh.nonideality
-----------------------
All non-idealities modelled by MemTorch are encapsulated and documented in :doc:`memtorch.bh.nonideality <../memtorch.bh.nonideality>`.

memtorch.bh.crossbar.Crossbar
-----------------------------
Class used to model memristor crossbars.
Class used to model memristor crossbars and to manage modular crossbar tiles.

.. code-block:: python

    import torch
    import memtorch

    crossbar = memtorch.bh.crossbar.Crossbar(memtorch.bh.memristor.VTEAM,
                                             {"r_on": 1e2, "r_off": 1e4},
                                             shape=(100, 100),
                                             tile_shape=(64, 64))
    crossbar.write_conductance_matrix(torch.zeros(100, 100).uniform_(1e-4, 1e-2), transistor=True)
    crossbar.devices[0][0][0].set_conductance(1e-4)
    crossbar.update(from_devices=True, parallelize=True)

.. note::
    **use_bindings** is enabled by default to accelerate operation using C++/CUDA bindings (where supported).

.. automodule:: memtorch.bh.crossbar.Crossbar
:members:
:undoc-members:
:show-inheritance:

memtorch.bh.crossbar.Tile
-------------------------
Class used to create modular crossbar tiles to represent 2D matrices.
memtorch.bh.crossbar.Program
----------------------------
Methods to program (alter) the conductance of devices within a crossbar or modular crossbar tiles.

.. automodule:: memtorch.bh.crossbar.Tile
.. automodule:: memtorch.bh.crossbar.Program
:members:
:undoc-members:
:show-inheritance:

memtorch.bh.crossbar.Program
----------------------------
Methods to program (alter) the conductance devices within a crossbar.
memtorch.bh.crossbar.Tile
-------------------------

.. automodule:: memtorch.bh.crossbar.Program
.. automodule:: memtorch.bh.crossbar.Tile
:members:
:undoc-members:
:show-inheritance:

memtorch.bh.Quantize
--------------------
Wrapper for the pytorch-playground quant.py script.
Wrapper for C++ quantization bindings.
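The behavior the bindings implement can be sketched in NumPy. This is a hedged, pure-Python illustration of linear quantization with an overflow rate — `quantize_linear` is a local name and its exact clipping rule is an assumption, not MemTorch's C++ code:

```python
import numpy as np

def quantize_linear(x, bits, overflow_rate=0.0):
    """Clip the largest `overflow_rate` fraction of magnitudes, then map
    the remaining range onto 2 ** bits uniformly spaced levels."""
    magnitudes = np.sort(np.abs(x).ravel())[::-1]
    clip = magnitudes[int(overflow_rate * (magnitudes.size - 1))]
    x = np.clip(x, -clip, clip)
    n_levels = 2 ** bits
    q = np.round((x + clip) / (2 * clip) * (n_levels - 1))
    return q / (n_levels - 1) * 2 * clip - clip
```

With `overflow_rate=0.0` no values are clipped; increasing it trades clipping error at the extremes for finer resolution over the remaining range.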

.. automodule:: memtorch.bh.Quantize
:members:
@@ -44,7 +63,19 @@ Wrapper for the pytorch-playground quant.py script.

memtorch.bh.StochasticParameter
-------------------------------
Methods to model stochastic parameters.
Methods to model stochastic parameters.

**memtorch.bh.StochasticParameter** is most commonly used to define stochastic parameters when defining behavioral memristor models, as follows:

.. code-block:: python

    import torch
    import memtorch

    crossbar = memtorch.bh.crossbar.Crossbar(memtorch.bh.memristor.VTEAM,
                                             {"r_on": memtorch.bh.StochasticParameter(min=1e2, max=1e3), "r_off": 1e4},
                                             shape=(100, 100),
                                             tile_shape=(64, 64))

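Conceptually, a stochastic parameter is just a sampler that draws a fresh value each time a device is instantiated. A minimal sketch — `stochastic_parameter` is a local helper, and MemTorch's class also supports other distributions:

```python
import random

def stochastic_parameter(low, high, rng=random):
    """Return a sampler that draws a fresh parameter value per device
    from a uniform distribution over [low, high]."""
    return lambda: rng.uniform(low, high)

# Each call simulates instantiating a new device with its own r_on
sample_r_on = stochastic_parameter(1e2, 1e3)
```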
.. automodule:: memtorch.bh.StochasticParameter
:members:
Empty file added docs/memtorch.cpp.rst
Empty file.
Empty file added docs/memtorch.cu.rst
Empty file.
76 changes: 73 additions & 3 deletions docs/memtorch.map.rst
@@ -1,10 +1,54 @@
memtorch.map
============
Submodule containing various mapping algorithms.
Submodule containing various mapping, scaling, and encoding methods.

memtorch.map.Input
-------------------
Encapsulates internal methods to encode (scale) input values as bit-line voltages. Methods can either be specified when converting individual layers:

.. code-block:: python

    import torch
    import memtorch
    from memtorch.map.Input import naive_scale

    m = memtorch.mn.Linear(torch.nn.Linear(10, 10),
                           memtorch.bh.memristor.VTEAM,
                           {},
                           tile_shape=(64, 64),
                           scaling_routine=naive_scale)

or when converting :class:`torch.nn.Module` instances:

.. code-block:: python

    import copy
    import torch
    import memtorch
    from memtorch.mn.Module import patch_model
    from memtorch.map.Input import naive_scale
    from Net import Net  # user-defined torch.nn.Module

    model = Net()
    patched_model = patch_model(copy.deepcopy(model),
                                memtorch.bh.memristor.VTEAM,
                                {},
                                module_parameters_to_patch=[torch.nn.Linear],
                                scaling_routine=naive_scale)

.. automodule:: memtorch.map.Input
:members:
:undoc-members:
:show-inheritance:

.. note::
**force_scale** specifies whether inputs that do not exceed **max_input_voltage** are scaled regardless.
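Assuming the semantics described in the note above, input scaling can be sketched in plain Python — `scale_inputs` is a local illustration, not the `naive_scale` implementation:

```python
def scale_inputs(inputs, max_input_voltage, force_scale=True):
    """Linearly map inputs into [-max_input_voltage, max_input_voltage]."""
    peak = max(abs(x) for x in inputs)
    if peak <= max_input_voltage and not force_scale:
        return list(inputs)  # already within range; leave unscaled
    return [x / peak * max_input_voltage for x in inputs]
```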

memtorch.map.Module
-------------------
Methods to determine relationships between a memristive crossbar and the output for a given memristive module.
Encapsulates internal methods to determine relationships between readout currents of memristive crossbars and desired outputs.

.. warning::
    Currently, only **naive_tune** is supported. In a future release, externally-defined methods will be supported.

.. automodule:: memtorch.map.Module
:members:
@@ -13,7 +57,33 @@ Methods to determine relationships between a memristive crossbar and the output

memtorch.map.Parameter
----------------------
Methods to naively map network parameters to memristive device conductance's.
Encapsulates internal methods to naively map network parameters to memristive device conductance values. Methods can either be specified when converting individual layers:

.. code-block:: python

    import torch
    import memtorch
    from memtorch.map.Parameter import naive_map

    m = memtorch.mn.Linear(torch.nn.Linear(10, 10),
                           memtorch.bh.memristor.VTEAM,
                           {},
                           tile_shape=(64, 64),
                           mapping_routine=naive_map)

or when converting :class:`torch.nn.Module` instances:

.. code-block:: python

    import copy
    import torch
    import memtorch
    from memtorch.mn.Module import patch_model
    from memtorch.map.Parameter import naive_map
    from Net import Net  # user-defined torch.nn.Module

    model = Net()
    patched_model = patch_model(copy.deepcopy(model),
                                memtorch.bh.memristor.VTEAM,
                                {},
                                module_parameters_to_patch=[torch.nn.Linear],
                                mapping_routine=naive_map)

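The naive mapping idea can be sketched in plain NumPy. This is hedged — the dual-crossbar sign handling and the exact normalization here are assumptions for illustration; see `memtorch.map.Parameter.naive_map` for the actual routine:

```python
import numpy as np

def naive_map_sketch(weights, r_on, r_off):
    """Linearly map |weights| onto [g_off, g_on]; signs are handled by
    separate positive/negative crossbars (returned as a pair)."""
    g_on, g_off = 1.0 / r_on, 1.0 / r_off
    w_max = np.abs(weights).max()
    pos = np.where(weights > 0, weights, 0) / w_max * (g_on - g_off) + g_off
    neg = np.where(weights < 0, -weights, 0) / w_max * (g_on - g_off) + g_off
    return pos, neg
```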
.. automodule:: memtorch.map.Parameter
:members: