
Commit

Merge branch 'master' into joss
lyndond authored Apr 2, 2021
2 parents 0a53a08 + 4b92d0c commit 751b7b7
Showing 42 changed files with 51,602 additions and 55,959 deletions.
31 changes: 26 additions & 5 deletions .github/workflows/ci.yml
Original file line number Diff line number Diff line change
@@ -1,6 +1,8 @@
name: build
on:
push:
workflow_dispatch:
schedule:
- cron: "0 0 * * 0" # weekly
pull_request:
branches:
- master
@@ -20,13 +22,32 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
# based on AllenNLP's setup:
# https://medium.com/ai2-blog/python-caching-in-github-actions-e9452698e98d
- name: Cache environment
uses: actions/cache@v2
with:
path: ${{ env.pythonLocation }}
key: ${{ env.pythonLocation }}-${{ hashFiles('setup.py') }}
- name: Install dependencies
run: |
pip install -e .
pip install pytest-cov
sudo apt update
sudo apt install ffmpeg
# using the --upgrade and --upgrade-strategy eager flags ensures that
# pip will always install the latest allowed version of all
# dependencies, to make sure the cache doesn't go stale
pip install --upgrade --upgrade-strategy eager -e .
pip install --upgrade --upgrade-strategy eager pytest-cov
- name: Run tests with pytest
if: ${{ matrix.test_script == 'display' }}
# we have two cores on the linux github action runners:
# https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners
run: |
pip install --upgrade --upgrade-strategy eager pytest-xdist
pytest -n 2 --cov=plenoptic tests/test_${{ matrix.test_script }}.py
- name: Run tests with pytest
if: ${{ matrix.test_script != 'display' }}
# only test_display should parallelize across the cores, the others get
# slowed down by it
run: 'pytest --cov=plenoptic tests/test_${{ matrix.test_script }}.py'
- name: Upload to codecov
run: 'bash <(curl -s https://codecov.io/bash)'
19 changes: 15 additions & 4 deletions .github/workflows/treebeard.yml
@@ -2,9 +2,13 @@
# Run all notebooks on every push and weekly
name: tutorials
on:
push:
workflow_dispatch:
schedule:
- cron: "0 0 * * 0" # weekly
pull_request:
branches:
- master

jobs:
run:
runs-on: ubuntu-latest
@@ -19,11 +23,18 @@ jobs:
- uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
# based on AllenNLP's setup:
# https://medium.com/ai2-blog/python-caching-in-github-actions-e9452698e98d
- name: Cache environment
uses: actions/cache@v2
with:
path: ${{ env.pythonLocation }}
key: ${{ env.pythonLocation }}-${{ hashFiles('setup.py') }}
- name: Setup FFmpeg
uses: FedericoCarboni/setup-ffmpeg@v1
- name: Install dependencies
run: |
pip install -e .
sudo apt update
sudo apt install ffmpeg
pip install --upgrade --upgrade-strategy eager -e .
pip install jupyter
pip install ipywidgets
- uses: treebeardtech/treebeard@master
77 changes: 77 additions & 0 deletions CONTRIBUTING.md
@@ -161,6 +161,83 @@ to `.github/workflows/treebeard.yml` and add the name of the new notebook
`examples/100_awesome_tutorial.ipynb`, you would add `100_awesome_tutorial` as
a new item in the `notebook` list.

### Test parameterizations and fixtures

#### Parametrize

If you have many variants on a test you wish to run, you should probably make
use of pytest's `parametrize` mark. There are many examples throughout our
existing tests (and see official [pytest
docs](https://docs.pytest.org/en/stable/parametrize.html)), but the basic idea
is that you write a function that takes an argument and then use the
`@pytest.mark.parametrize` decorator to show pytest how to iterate over the
arguments. For example, instead of writing:

```python
def test_basic_1():
    assert int('3') == 3

def test_basic_2():
    assert int('5') == 5
```

You could write:

```python
@pytest.mark.parametrize('a', ['3', '5'])
def test_basic(a):
    if a == '3':
        test_val = 3
    elif a == '5':
        test_val = 5
    assert int(a) == test_val
```

(Note that the parametrized values must be the strings `'3'` and `'5'`, so
that the comparisons and the `int()` call inside the test work.)

This starts to become very helpful when you have multiple arguments you wish to
iterate over in this manner.
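For instance, here is a hedged sketch of multi-argument parametrization (the
test names and values below are made up for illustration; they are not tests
from our suite):

```python
import pytest

# A single decorator can parametrize several arguments at once by naming
# them in a comma-separated string and supplying one tuple per test case.
@pytest.mark.parametrize('s,expected', [('3', 3), ('5', 5), ('-2', -2)])
def test_int_parsing(s, expected):
    assert int(s) == expected

# Stacked decorators generate the cartesian product of their values,
# so this test runs 2 * 2 = 4 times.
@pytest.mark.parametrize('base', [2, 10])
@pytest.mark.parametrize('digit', ['0', '1'])
def test_single_digit(digit, base):
    assert int(digit, base) == int(digit)
```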

#### Fixtures

If you are using an object that gets used in multiple tests (such as an image or
model), you should make use of fixtures to avoid having to load or initialize
the object multiple times. Look at `conftest.py` to see those fixtures available
for all tests, or you can write your own (though pay attention to the
[scope](https://docs.pytest.org/en/stable/fixture.html#scope-sharing-fixtures-across-classes-modules-packages-or-session)).
For example, `conftest.py` contains several images that you can use for your
tests, such as `basic_stimuli`, `curie_img`, or `color_img`. To use them, simply
add them as arguments to your function:

```python
def test_img(curie_img):
    img = po.load_images('data/curie.pgm')
    assert torch.allclose(img, curie_img)
```

#### Combining the two

You can combine fixtures and parametrization, which is helpful when you
want to test multiple models with a synthesis method, for example. This is
slightly more complicated and relies on pytest's [indirect
parametrization](https://docs.pytest.org/en/stable/example/parametrize.html#indirect-parametrization)
(and requires `pytest>=5.1.2` to work properly). For example, `conftest.py` has
a fixture, `model`, which accepts a string and returns an instantiated model on
the right device. Use it like so:

```python
@pytest.mark.parametrize('model', ['SPyr', 'LNL'], indirect=True)
def test_synth(curie_img, model):
    met = po.synth.Metamer(curie_img, model)
    met.synthesize()
```

This test will be run twice, once with the steerable pyramid model and once
with the Linear-Nonlinear model. See the `get_model` function in `conftest.py`
for the available strings. Note that unlike in the simple
[parametrize](#parametrize) example, we add the `indirect=True` argument here.
If we did not include that argument, `model` would just be the strings `'SPyr'`
and `'LNL'`!

## Documentation

### Adding documentation
119 changes: 69 additions & 50 deletions README.md
@@ -8,6 +8,7 @@
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3995057.svg)](https://doi.org/10.5281/zenodo.3995057)
[![codecov](https://codecov.io/gh/LabForComputationalVision/plenoptic/branch/master/graph/badge.svg?token=EDtl5kqXKA)](https://codecov.io/gh/LabForComputationalVision/plenoptic)
[![Tutorials Status](https://github.com/LabForComputationalVision/plenoptic/workflows/tutorials/badge.svg)](https://github.com/LabForComputationalVision/plenoptic/actions?query=workflow%3Atutorials)
[![Binder](http://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/LabForComputationalVision/plenoptic/master?filepath=examples)

In recent years, [adversarial
examples](https://openai.com/blog/adversarial-example-research/) have
@@ -46,37 +47,29 @@ Here's a table summarizing this:
| synthesize | {y, θ} | {x} |

`plenoptic` contains the following four synthesis methods (with links
to the papers describing them):

- [metamers](https://www.cns.nyu.edu/pub/eero/portilla99-reprint.pdf):
given a model and an image, synthesize a new image which admits
a model representation identical to that of the original image.
to examples that make use of them):

- [Metamers](http://www.cns.nyu.edu/~lcv/texture/):
given a model and a reference image, stochastically generate a new image whose
model representation is identical to that of the reference image.
- [Eigendistortions](https://www.cns.nyu.edu/~lcv/eigendistortions/):
given a model and a reference image, compute the image perturbation that produces
the smallest and largest changes in the model response space. These correspond to the
minimal/maximal eigenvectors of the Fisher Information matrix of the representation (for deterministic models,
the minimal/maximal singular vectors of the Jacobian).
- [Maximal differentiation (MAD)
competition](https://www.cns.nyu.edu/pub/lcv/wang08-preprint.pdf):
given two models and an image, synthesize two pairs of images: two
that the first model represents identically, while the second model
represents as differently as possible; and two that the first model
represents as differently as possible while the second model represents
identically.
competition](https://ece.uwaterloo.ca/~z70wang/research/mad/):
given two models that measure distance between images and a reference image, generate pairs of
images that optimally differentiate the models. Specifically, synthesize a pair of images
that the first model says are equi-distant from the reference while the second model says they
are maximally/minimally distant from the reference. Synthesize a second pair with the roles of the two models reversed.
- [Geodesics](https://www.cns.nyu.edu/pub/lcv/henaff16b-reprint.pdf):
given a model and two images, synthesize a sequence of images that form
a short path in the model's representation space. That is, a sequence of
images whose representations lie as close as possible to a straight interpolation
line between the two anchor images.
- [Eigendistortions](https://www.cns.nyu.edu/pub/lcv/berardino17c-final.pdf):
given a model and an image, synthesize the most and least noticeable
distortion on the image (with a constant mean-squared error in
pixels). That is, if you can change all pixel values by a total of
100, how does the model think you should do it to make it as obvious
as possible, and how does the model think you should do it to make
it unnoticeable.
given a model and two images, synthesize a sequence of images that lie on
the shortest ("geodesic") path in the model's representation space.

(For all of these, "identical (resp. different) representation" means small
(resp. large) L2 distance in a model's representation space.)
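To make the metamer idea concrete, here is a toy sketch in pure Python. This
is not plenoptic's API: the "model" is a stand-in that just averages adjacent
pixel pairs, so it discards within-pair differences and many images share a
representation.

```python
import random

def model(x):
    # stand-in "model": averages adjacent pixel pairs
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def l2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def synthesize_metamer(reference, steps=500, lr=0.1):
    target = model(reference)
    x = [random.random() for _ in reference]  # random initialization
    for _ in range(steps):
        rep = model(x)
        # gradient of the squared L2 representation distance: pixel i
        # feeds representation element i // 2
        grad = [rep[i // 2] - target[i // 2] for i in range(len(x))]
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

reference = [0.1, 0.9, 0.4, 0.6]
metamer = synthesize_metamer(reference)
# same representation (tiny L2 distance), generally a different image
assert l2(model(metamer), model(reference)) < 1e-3
```

plenoptic does the same thing at scale: gradient descent on the L2 distance
between model representations, with a real model and a real image.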

# Status

This project is currently in alpha, under heavy development. Not all features
This project is currently under heavy development. Not all features
have been implemented, and there will be breaking changes.

# Roadmap
@@ -86,7 +79,7 @@ project](https://github.com/LabForComputationalVision/plenoptic/projects/1)
for a more detailed roadmap, but at the high level:

- Short term:
1. Finalize Portilla-Simoncelli texture statistics
1. Finalize Portilla-Simoncelli texture model
2. Recreate existing `MADCompetition` examples.
- Medium term:
1. Finalize geodesics
@@ -95,9 +88,8 @@ for a more detailed roadmap, but at the high level:
4. Finalize model API, create superclass
5. Add more models
- Long term:
1. Present poster at conference to advertise to users
2. Submit paper to Journal of Open Source Software to get something
for people to cite
1. Present at conference to advertise to users
2. Submit paper to Journal of Open Source Software

# NOTE

@@ -156,9 +148,16 @@ explanation of git, Github, and the associated terminology.

### ffmpeg

Several methods in this package generate videos. In order to save them or
convert them to HTML5 for viewing, you'll need
[ffmpeg](https://ffmpeg.org/download.html) installed on your system as well.
Several methods in this package generate videos. There are several possible
backends for saving the animations to file; see the [matplotlib
documentation](https://matplotlib.org/stable/api/animation_api.html#writer-classes)
for more details. In order to convert them to HTML5 for viewing (and thus, to
view them in a jupyter notebook), you'll need
[ffmpeg](https://ffmpeg.org/download.html) installed and on your path as well.

To change the backend, run `matplotlib.rcParams['animation.writer'] = writer`
before calling any of the animate functions. If you try to set that `rcParam`
to an unsupported string, `matplotlib` will tell you the available choices.
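For example, a sketch of checking and switching the writer (this assumes
matplotlib is installed; `'ffmpeg'` will only appear as available if ffmpeg
itself is on your path):

```python
import matplotlib
import matplotlib.animation as animation

# writers that matplotlib can actually find on this system
print(animation.writers.list())

# fall back to the pure-matplotlib HTML writer if ffmpeg is missing
writer = 'ffmpeg' if animation.writers.is_available('ffmpeg') else 'html'
matplotlib.rcParams['animation.writer'] = writer
```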

## plenoptic

@@ -190,27 +189,47 @@ will install all the requirements necessary for plenoptic to run.

## Jupyter

The one additional thing you will want is to install
[JupyterLab](https://jupyterlab.readthedocs.io/en/stable/),
which we use for tutorial and example notebooks:
If you wish to locally run the notebooks, you will need to install `jupyter` and
`ipywidgets` (you can also run them in the cloud using
[Binder](https://mybinder.org/v2/gh/LabForComputationalVision/plenoptic/master?filepath=examples)).
There are two main ways of getting a local `jupyter` install working with this
package:

1. Install jupyter in the same environment as `plenoptic`. If you followed the
[instructions above](#plenoptic) to create a `conda` environment named
`plenoptic`, do the following:

``` sh
conda activate plenoptic
conda install -c conda-forge jupyterlab ipywidgets
```

This is easy but, if you have multiple conda environments and want to use
Jupyter notebooks in each of them, it will take up a lot of space.

2. Use
[nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels).
Again, if you followed the instructions above:

``` sh
# activate your 'base' environment, the default one created by miniconda
conda activate
# install jupyter lab and nb_conda_kernels in your base environment
conda install -c conda-forge jupyterlab ipywidgets
conda install nb_conda_kernels
# install ipykernel in the plenoptic environment
conda install -n plenoptic ipykernel
```

Note we want to do this within our `plenoptic` environment. If you're
running this section straight through, you won't need to do anything
extra, but if you closed your terminal session after the last section
(for example), you'll need to make sure to activate the correct
environment first: `conda activate plenoptic`.

Note that you may also run into an error in the `02_Eigendistortions` notebook
when creating the `NthLayer` class -- we download the trained VGG model using
`torchvision`, which requires `ipywidgets`. See [official
page](https://ipywidgets.readthedocs.io/en/stable/user_install.html) for help;
in our experience, installing `jupyter` (instead of `jupyterlab`) seems to fix
the problem, but is probably overkill. Installing `ipywidgets` directly (either
via `conda` or `pip`) also seems to work.
This is a bit more complicated, but means you only have one installation of
jupyter lab on your machine.

In either case, to open the notebooks, navigate to the `examples/` directory
in your terminal and activate the environment you installed jupyter into
(`plenoptic` for 1, `base` for 2), then run `jupyter` and open up the
notebooks. If you followed the second method, you should be prompted to select
your kernel the first time you open a notebook: select the one named
"plenoptic".

## Keeping up-to-date

2 changes: 2 additions & 0 deletions docs/conf.py
@@ -47,6 +47,8 @@
'sphinxcontrib.apidoc',
'matplotlib.sphinxext.plot_directive',
'matplotlib.sphinxext.mathmpl',
'sphinx.ext.autodoc',
'sphinx_autodoc_typehints'
]

# Add any paths that contain templates here, relative to this directory.
1 change: 1 addition & 0 deletions docs/environment.yml
@@ -15,3 +15,4 @@ dependencies:
- nbsphinx
- nbsphinx_link
- sphinxcontrib-apidoc
- sphinx-autodoc-typehints