Add imports and function invocations to sample code (#6)
* Add imports and function invocations to sample code

Copying and pasting the existing samples either fails due to import errors or produces empty profile results, because the function decorated with `@profile` is never invoked. The samples have been updated so they can all be copied, pasted, and run to produce the same results listed in the README.

* Add `future` to requirements.txt to fix the failing Travis build

See https://travis-ci.com/github/Stonesjtu/pytorch_memlab/jobs/299780365 for the corresponding failing job log
Ref: https://stackoverflow.com/questions/27495752/no-module-named-builtins
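For context, a minimal illustration (not part of this commit) of why `future` fixes the error in the linked question: it backports the Python 3 `builtins` module to Python 2.

```python
# On Python 2 this import fails with "No module named builtins" unless the
# `future` package is installed; on Python 3 `builtins` is standard library.
from builtins import range

print(list(range(3)))  # [0, 1, 2] on both interpreters
```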
willprice authored Mar 19, 2020
1 parent 32742f9 commit 68b9f8b
Showing 2 changed files with 21 additions and 1 deletion.
21 changes: 20 additions & 1 deletion README.md
@@ -48,13 +48,15 @@ the memory usage info for each line of code in the specified function/method.
#### Sample:

```python
import torch
from pytorch_memlab import profile
@profile
def work():
    linear = torch.nn.Linear(100, 100).cuda()
    linear2 = torch.nn.Linear(100, 100).cuda()
    linear3 = torch.nn.Linear(100, 100).cuda()


work()
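# the call above is required: without invoking the decorated function,
# the profiler has nothing to record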
```

After the script finishes, or is interrupted by the keyboard, it gives the following
@@ -112,6 +114,7 @@ selection is global, which means you have to remember which gpu you are
profiling on during the whole process:

```python
import torch
from pytorch_memlab import profile, set_target_gpu
@profile
def func():
@@ -120,6 +123,8 @@ def func():
    net2 = torch.nn.Linear(1024, 1024).cuda(1)
    set_target_gpu(0)
    net3 = torch.nn.Linear(1024, 1024).cuda(0)

func()
```
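Because the selection is global, here is a minimal sketch (an illustration, not part of this diff; it assumes two visible CUDA devices) of restoring the default target once you are done:

```python
from pytorch_memlab import set_target_gpu

set_target_gpu(1)
try:
    pass  # build and run the modules you want profiled on GPU 1
finally:
    set_target_gpu(0)  # restore the default so later profiling targets GPU 0
```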


@@ -140,6 +145,8 @@ more low-level memory usage information can be obtained by the *Memory Reporter*.
- A minimal one:

```python
import torch
from pytorch_memlab import MemReporter
linear = torch.nn.Linear(1024, 1024).cuda()
reporter = MemReporter()
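# with no module passed in, the reporter picks up every live tensor it can find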
reporter.report()
@@ -160,6 +167,9 @@ The allocated memory on cuda:0: 4.00M
- You can also pass in a model object for automatic name inference.

```python
import torch
from pytorch_memlab import MemReporter

linear = torch.nn.Linear(1024, 1024).cuda()
inp = torch.Tensor(512, 1024).cuda()
# pass in a model to automatically infer the tensor names
@@ -206,6 +216,9 @@ The allocated memory on cuda:0: 10.01M
- The reporter automatically deals with parameters that share weights:

```python
import torch
from pytorch_memlab import MemReporter

linear = torch.nn.Linear(1024, 1024).cuda()
linear2 = torch.nn.Linear(1024, 1024).cuda()
linear2.weight = linear.weight
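# `linear2` now shares its weight tensor with `linear`,
# so the reporter should not count that memory twice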
@@ -245,6 +258,9 @@ The allocated memory on cuda:0: 10.02M
- You can better understand the memory layout for more complicated modules:

```python
import torch
from pytorch_memlab import MemReporter

lstm = torch.nn.LSTM(1024, 1024).cuda()
reporter = MemReporter(lstm)
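# verbose=True below requests a more detailed, per-tensor breakdown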
reporter.report(verbose=True)
@@ -308,6 +324,9 @@ store both `inp` and `inp + 2`. Unfortunately, Python only knows the existence
of `inp`, so *2M* of memory is lost, which is the same size as Tensor `inp`.

```python
import torch
from pytorch_memlab import MemReporter

linear = torch.nn.Linear(1024, 1024).cuda()
inp = torch.Tensor(512, 1024).cuda()
# pass in a model to automatically infer the tensor names
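The hunk above is truncated. As a self-contained sketch of the point being made (an illustration, not part of this diff; the exact expression is an assumption modelled on the surrounding text):

```python
import torch
from pytorch_memlab import MemReporter

linear = torch.nn.Linear(1024, 1024).cuda()
# requires_grad=True makes autograd save the operands of the multiply,
# including the nameless 2M result of `inp + 2`
inp = torch.randn(512, 1024, device='cuda', requires_grad=True)
out = linear(inp * (inp + 2)).mean()
reporter = MemReporter(linear)
reporter.report()  # reports roughly 2M more than the named tensors explain
out.backward()
```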
1 change: 1 addition & 0 deletions requirements.txt
@@ -1,2 +1,3 @@
calmsize
torch
future
