Commit 1fc966a: Generated documentation for cshl-vision-2024/pulls/dylan

jenkins-plenoptic-cshl-vision-2024-dylan-9

flatiron-jenkins committed Oct 9, 2024 (parent: a82e286)

Showing 179 changed files with 32,725 additions and 1 deletion.

16 changes: 16 additions & 0 deletions cshl-vision-2024/pulls/dylan/_sources/glossary.md
@@ -0,0 +1,16 @@
# Glossary

The attached notebook uses some jargon that might be new to you. Hopefully this glossary will help clarify things, but please ask if something is unclear!

- Bandpass: a filter or model that is most sensitive to frequencies in a middle range and is less sensitive or insensitive to high and low frequencies. Much of the early visual system, including retinal ganglion cells, lateral geniculate nucleus neurons, and primary visual cortical neurons, displays bandpass selectivity. Example functional forms include difference-of-Gaussians filters, Gabor filters, Morlet wavelets, and steerable pyramid filters. Compare to highpass and lowpass.
- Eigendistortions: the image distortions that produce the most- and least-noticeable changes in a model's response. They are the eigenvectors of the model's Fisher information matrix, which provides a quadratic approximation to the discriminability of distortions of a given image. In the cases we consider, all models are deterministic, differentiable mappings from images to representations, and thus the Fisher information matrix equals $J^T J$, where $J$ is the model's Jacobian matrix with respect to the target image (a numerical sketch follows this glossary).
- Gain control: also known as divisive normalization, gain control is ubiquitous in the central nervous system and has been proposed as a [canonical neural computation](https://www.nature.com/articles/nrn3136) that allows the brain to maximize sensitivity to relevant stimuli in changing contexts. In its standard form, a neuron's driven response is divided by the pooled activity of its neighbors, e.g., $r_i = x_i^2 / (\sigma^2 + \sum_j x_j^2)$. An example is the way the human eye adapts to different light levels: when entering a dark room from a bright environment, we are initially unable to make out any details, but adaptation allows the eye to change the range of intensities it is sensitive to. Physical processes (e.g., changes in pupil size) account for some of this adaptation; gain control is another mechanism that can implement it.
- Highpass: a filter or model that is most sensitive to high frequencies and is less sensitive or insensitive to middle and low frequencies. Compare to bandpass and lowpass.
- Invariances / invariant: if a model is invariant to an image feature, the presence of that feature does not affect the model output. More strongly, the feature can be randomized without any effect on the model output. Such features are called the model's *invariances*.
- Lowpass: a filter or model that is most sensitive to low frequencies and is less sensitive or insensitive to middle and high frequencies. The classic example is a Gaussian. Compare to bandpass and highpass.
- Metamers: visual inputs that are physically distinct but perceptually identical, such as a scene and an RGB image of that scene. In plenoptic, we synthesize *model metamers*: images with different pixel values that produce identical model outputs (a synthesis sketch follows this glossary).
- Model: a computational model maps an input stimulus to a representation, based on some parameters. Neural networks, Gaussian filters, and the energy model of V1 complex cells are all examples of models. In vision science, we typically use these models to better understand some aspect of a biological visual system, by trying to map the model representation to some aspect of the system being modeled, such as neuronal firing rate, behavioral responses, or the fMRI BOLD signal. The goal of plenoptic is to facilitate understanding and improvement of these models.
- Parameters: values that govern a model's behavior, such as the numbers that make up a convolutional filter in a neural network, the standard deviation of a Gaussian, or the orientation of a Gabor. Most models have multiple parameters (some have a great many!), and typically these parameters are fit via optimization to data observed in experiments. In plenoptic, model parameters are fixed and do not change.
- Representation: the model output. This is often a vector of numbers or a two-dimensional, image-like array. It may be abstract but is often mapped to some aspect of the system being modeled, such as neuronal firing rate, behavioral responses, or the fMRI BOLD signal.
- Stimuli: the model input. In vision science, these are typically images or videos. In plenoptic, we synthesize stimuli that serve a particular goal; see Metamers and Eigendistortions.
- Synthesis: the process by which plenoptic generates stimuli. This is generally, though not always, accomplished via iterative optimization.
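
To make the eigendistortion definition concrete, below is a minimal numerical sketch in plain PyTorch. It is illustrative only: the Gaussian-blur "model" and every name in it are invented for this example, and this is not plenoptic's actual API. We compute the model's Jacobian $J$ at a target image, form the Fisher information matrix $J^T J$, and take its extremal eigenvectors as the least- and most-noticeable distortions.

```python
# Hypothetical sketch: eigendistortions of a toy differentiable model.
import torch

torch.manual_seed(0)

def toy_model(flat_image, size=16, ksize=7, sigma=2.0):
    """Toy lowpass 'model': Gaussian blur of a flattened size-x-size image."""
    x = torch.arange(ksize, dtype=torch.float32) - ksize // 2
    g = torch.exp(-x ** 2 / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    kernel = (kernel / kernel.sum()).reshape(1, 1, ksize, ksize)
    image = flat_image.reshape(1, 1, size, size)
    response = torch.nn.functional.conv2d(image, kernel, padding="same")
    return response.flatten()

image = torch.rand(16 * 16)  # the target image we distort around
# Jacobian of the model at this image: shape (n_outputs, n_inputs).
J = torch.autograd.functional.jacobian(toy_model, image)
fisher = J.T @ J  # Fisher information matrix of a deterministic model
eigenvalues, eigenvectors = torch.linalg.eigh(fisher)  # ascending order
least_noticeable = eigenvectors[:, 0].reshape(16, 16)   # model barely responds
most_noticeable = eigenvectors[:, -1].reshape(16, 16)   # model responds most
```

For this lowpass model, the most-noticeable eigendistortion is low-frequency (the blur passes it through) and the least-noticeable one is high-frequency (the blur nearly removes it). Since this toy model is linear, its Jacobian, and therefore the result, does not depend on the target image; for nonlinear models it does.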
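
In the same spirit, here is a minimal sketch of metamer synthesis by iterative optimization, again with an invented toy model (a random linear projection that discards most of its input) rather than plenoptic's API: we adjust the pixels of a synthesized image by gradient descent until its representation matches that of a target image, even though the pixels themselves remain different.

```python
# Hypothetical sketch: synthesizing a model metamer by gradient descent.
import torch

torch.manual_seed(0)
M = torch.randn(64, 256) / 16.0  # toy linear model: 256 pixels -> 64 numbers

target = torch.rand(256)          # reference image
target_rep = M @ target           # its model representation

synth = torch.rand(256).requires_grad_()  # start from a different random image
optimizer = torch.optim.SGD([synth], lr=0.1)
for _ in range(2000):
    optimizer.zero_grad()
    loss = torch.sum((M @ synth - target_rep) ** 2)  # match representations
    loss.backward()
    optimizer.step()

print(loss.item())                        # ~0: model outputs are identical
print(torch.norm(synth - target).item())  # large: pixel values differ
```

Because the projection maps 256 pixels to 64 numbers, many distinct images share a representation. Gradient descent only moves `synth` within the row space of `M`, so the null-space component of the random initialization survives, which is exactly what makes the result a model metamer rather than a copy of the target.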
80 changes: 80 additions & 0 deletions cshl-vision-2024/pulls/dylan/_sources/index.md
@@ -0,0 +1,80 @@
# Welcome to the plenoptic tutorial, CSHL Vision Course 2024

This site hosts the example notebook used for the plenoptic tutorial given at the Cold Spring Harbor Laboratory Computational Neuroscience: Vision course in July 2024. This one-hour(ish) tutorial uses simple examples to introduce the basics of using plenoptic to better understand computational visual models. We hope to explain not just `plenoptic`'s syntax but also the type of reasoning that it facilitates.

The presentation I gave at the beginning of this session can be found [here](https://labforcomputationalvision.github.io/plenoptic_presentations/2024-07-12_CSHL/slides.html).

This website contains two versions of the notebook we'll use today: [with](introduction.md) and [without](introduction-stripped.md) explanatory text. Today, you'll run the version without explanatory text, which contains only code cells, while listening to my description. If you wish to revisit this material later, the version with explanatory text should help.

You may also find the [glossary](glossary.md) useful as you go through the notebook.

You can also [follow the setup instructions here](#setup) to download these notebooks and run them locally, but to avoid potential installation issues in the limited time we have, we'll use Binder instead. Click on the `launch binder` badge in the upper left sidebar, which will prompt you to log in. Use the Google account you gave to the class organizers; if you get a 403 Forbidden error or would like to use a different account, let me know so that I can grant it permission. The Binder instance provides a GPU and the environment necessary to run the notebook. See [the section below](#binder) for more details, including some important usage notes.

## Setup

:::{note}
If you would just like to install `plenoptic` for local use, follow [our installation instructions](https://plenoptic.readthedocs.io/en/latest/install.html). The environment for this tutorial includes some extra packages specific to this build.
:::

While we'll use Binder during this tutorial, if you'd like to run the notebooks locally, you'll need to set up a local environment. To do so:

0. Make sure you have `git` installed. It is installed by default on most Mac and Linux machines, but you may need to install it if you are on Windows. [These instructions](https://github.com/git-guides/install-git) should help.
1. Clone the GitHub repo for this tutorial:
```shell
git clone https://github.com/plenoptic-org/plenoptic-cshl-vision-2024.git
```
2. Create a new Python 3.11 virtual environment. If you do not have a preferred way of managing your Python virtual environments, we recommend [miniconda](https://docs.anaconda.com/free/miniconda/). After installing it (if you have not done so already), run
```shell
conda create --name cshl2024 pip python=3.11
```
3. Activate your new environment:
```shell
conda activate cshl2024
```
4. Navigate to the cloned GitHub repo and install the required dependencies:
```shell
cd plenoptic-cshl-vision-2024
pip install -r requirements.txt
```

:::{important}
You will also need `ffmpeg` installed in order to view the videos in the notebook. This is likely installed on your system already if you are on Linux or Mac (run `ffmpeg` in your command line to check). If not, you can install it via conda: `conda install -c conda-forge ffmpeg` or see their [install instructions](https://ffmpeg.org/download.html).

If you have `ffmpeg` installed and are still having issues, try running `conda update ffmpeg`.

:::

5. Run the setup script to prepare the notebook:
```shell
python scripts/setup.py
```

:::{important}
It's possible this step will fail (especially if you are on Windows). If so, go to the [notebook on this site](introduction-stripped.md) and download it manually.
:::
6. Open up JupyterLab, then double-click on the `introduction-stripped.ipynb` notebook:
```shell
jupyter lab
```
## Binder

Some usage notes:
- You are only allowed to have a single Binder instance running at a time, so if you get the "already have an instance running" error, go to the [binderhub page](https://binder.flatironinstitute.org/hub/hub/home) (or click on "check your currently running servers" on the right of the page) to join your running instance.
- If you lose your connection halfway through the workshop, go to the [binderhub page](https://binder.flatironinstitute.org/hub/hub/home) to join your running instance rather than restarting the image.
  - This is important because, if you restart the image, **you will lose all data and progress**.
- The Binder will be shut down automatically after 1 day of inactivity or 7 days of total usage. Data will not persist after the Binder instance shuts down, so **please download any notebooks** you want to keep.
- This instance will remain available for 2 weeks, so that you can use it to play around during the course; after that, I will destroy it. You can download your notebooks to keep them afterward. If you do so, see the [setup instructions](#setup) for how to create the environment for running them locally, and let me know if you have any problems!
## Contents

See the description above for an explanation of the difference between these two notebooks.
```{toctree}
glossary.md
introduction.md
introduction-stripped.md
```