24 changes: 17 additions & 7 deletions .github/workflows/cache.yml
@@ -1,21 +1,31 @@
name: Build Cache [using jupyter-book]
on:
push:
branches:
- main
schedule:
# Execute cache weekly at 3am on Monday
- cron: '0 3 * * 1'
workflow_dispatch:
jobs:
cache:
runs-on: quantecon-gpu
container:
image: ghcr.io/quantecon/lecture-python-container:cuda-12.6.0-anaconda-2024-10-py312-b
options: --gpus all
runs-on: "runs-on=${{ github.run_id }}/family=g4dn.2xlarge/image=quantecon_ubuntu2404/disk=large"
steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Setup Anaconda
uses: conda-incubator/setup-miniconda@v3
with:
auto-update-conda: true
auto-activate-base: true
miniconda-version: 'latest'
python-version: "3.12"
environment-file: environment.yml
activate-environment: quantecon
- name: Install JAX, Numpyro
shell: bash -l {0}
run: |
pip install --upgrade "jax[cuda12-local]"
pip install numpyro
python scripts/test-jax-install.py
- name: Check nvidia drivers
shell: bash -l {0}
run: |
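Note: the install step above ends by running `scripts/test-jax-install.py`, which is not shown in this diff. A minimal sketch of what such a check might contain (hypothetical contents, for illustration only):

```python
# Hypothetical sketch of scripts/test-jax-install.py; the real script is not
# shown in this diff. It checks that JAX imports and can see a GPU device.
import jax
import jax.numpy as jnp

devices = jax.devices()
print(f"JAX {jax.__version__} devices: {devices}")

# Fail the CI step if no GPU backend is visible to JAX.
assert any(d.platform == "gpu" for d in devices), "No GPU device visible to JAX"

# Run a tiny computation so the CUDA libraries are actually exercised.
x = jnp.arange(10.0)
print("sum of arange(10):", float(jnp.sum(x)))
```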
44 changes: 19 additions & 25 deletions .github/workflows/ci.yml
@@ -1,36 +1,30 @@
name: Build Project [using jupyter-book]
on: [pull_request]
jobs:
deploy-runner:
runs-on: ubuntu-latest
steps:
- uses: iterative/setup-cml@v3
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Deploy runner on EC2
env:
REPO_TOKEN: ${{ secrets.QUANTECON_SERVICES_PAT }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
run: |
cml runner launch \
--cloud=aws \
--cloud-region=us-west-2 \
--cloud-type=p3.2xlarge \
--labels=cml-gpu \
--cloud-hdd-size=50
preview:
needs: deploy-runner
runs-on: [self-hosted, cml-gpu]
container:
image: docker://mmcky/quantecon-lecture-python:cuda-12.3.1-anaconda-2024-02-py311
options: --gpus all
runs-on: "runs-on=${{ github.run_id }}/family=g4dn.2xlarge/image=quantecon_ubuntu2404/disk=large"
steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
# Check nvidia drivers
- name: Setup Anaconda
uses: conda-incubator/setup-miniconda@v3
with:
auto-update-conda: true
auto-activate-base: true
miniconda-version: 'latest'
python-version: "3.12"
environment-file: environment.yml
activate-environment: quantecon
- name: Install JAX, Numpyro
shell: bash -l {0}
run: |
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
pip install pyro-ppl
pip install --upgrade "jax[cuda12-local]"
pip install numpyro
python scripts/test-jax-install.py
# Check nvidia drivers
- name: nvidia Drivers
shell: bash -l {0}
run: nvidia-smi
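The preview job installs a nightly CUDA 12.8 build of PyTorch plus Pyro alongside the CUDA-enabled JAX stack. A quick sanity check that both frameworks can see the GPU, not part of the workflow itself, might look like this:

```python
# Sanity check (not part of the workflow): confirm both frameworks installed
# above can see the GPU before the lectures are executed.
import torch
import jax

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("torch device:", torch.cuda.get_device_name(0))

print("jax backend:", jax.default_backend())  # "gpu" when the CUDA build is active
```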
56 changes: 56 additions & 0 deletions .github/workflows/collab.yml
@@ -0,0 +1,56 @@
name: Build Project on Google Colab (Execution)
on: [pull_request]
jobs:
execution-checks:
runs-on: "runs-on=${{ github.run_id }}/family=g4dn.2xlarge/image=ubuntu24-gpu-x64/disk=large"
container:
image: docker://us-docker.pkg.dev/colab-images/public/runtime
options: --gpus all
steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Check nvidia drivers
shell: bash -l {0}
run: |
nvidia-smi
- name: Check python version
shell: bash -l {0}
run: |
python --version
- name: Display Pip Versions
shell: bash -l {0}
run: pip list
- name: Download "build" folder (cache)
uses: dawidd6/action-download-artifact@v3
with:
workflow: cache.yml
branch: main
name: build-cache
path: _build
# Install build software
- name: Install Build Software
shell: bash -l {0}
run: |
pip install jupyter-book==1.0.3 quantecon-book-theme==0.8.2 sphinx-tojupyter==0.3.0 sphinxext-rediraffe==0.2.7 sphinxcontrib-youtube==1.3.0 sphinx-togglebutton==0.3.2 arviz sphinx-proof sphinx-exercise sphinx-reredirects
# Build of HTML (Execution Testing)
- name: Build HTML
shell: bash -l {0}
run: |
jb build lectures --path-output ./ -n -W --keep-going
- name: Upload Execution Reports
uses: actions/upload-artifact@v4
if: failure()
with:
name: execution-reports
path: _build/html/reports
- name: Preview Deploy to Netlify
uses: nwtgck/actions-netlify@v2
with:
publish-dir: '_build/html/'
production-branch: main
github-token: ${{ secrets.GITHUB_TOKEN }}
deploy-message: "Preview Deploy from GitHub Actions"
env:
NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
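Because this execution test runs inside Google's public Colab runtime container, the lectures are exercised against whatever that image preinstalls. A small pre-build inspection, sketched under that assumption (the module list is a guess about the image's contents):

```python
# Sketch (not part of the workflow): inspect the Colab runtime image before the
# build; the module list is an assumption about what that image preinstalls.
import sys
import importlib.util

print("Python:", sys.version)
for mod in ("numpy", "matplotlib", "pandas", "jax"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'preinstalled' if found else 'missing'}")
```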
4 changes: 2 additions & 2 deletions .github/workflows/linkcheck.yml
@@ -13,7 +13,7 @@ jobs:
fail-fast: false
matrix:
os: ["ubuntu-latest"]
python-version: ["3.11"]
python-version: ["3.12"]
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -23,7 +23,7 @@
auto-update-conda: true
auto-activate-base: true
miniconda-version: 'latest'
python-version: '3.11'
python-version: '3.12'
environment-file: environment.yml
activate-environment: quantecon
- name: Download "build" folder (cache)
41 changes: 15 additions & 26 deletions .github/workflows/publish.yml
@@ -4,38 +4,27 @@ on:
tags:
- 'publish*'
jobs:
deploy-runner:
runs-on: ubuntu-latest
steps:
- uses: iterative/setup-cml@v3
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Deploy runner on EC2
env:
REPO_TOKEN: ${{ secrets.QUANTECON_SERVICES_PAT }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
run: |
cml runner launch \
--cloud=aws \
--cloud-region=us-west-2 \
--cloud-type=p3.2xlarge \
--labels=cml-gpu \
--cloud-hdd-size=50
publish:
if: github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags')
needs: deploy-runner
runs-on: [self-hosted, cml-gpu]
container:
image: docker://mmcky/quantecon-lecture-python:cuda-12.3.1-anaconda-2024-02-py311
options: --gpus all
runs-on: "runs-on=${{ github.run_id }}/family=g4dn.2xlarge/image=quantecon_ubuntu2404/disk=large"
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Git (required to commit notebooks)
- name: Setup Anaconda
uses: conda-incubator/setup-miniconda@v3
with:
auto-update-conda: true
auto-activate-base: true
miniconda-version: 'latest'
python-version: "3.12"
environment-file: environment.yml
activate-environment: quantecon
- name: Install JAX, Numpyro
shell: bash -l {0}
run: apt-get install -y git
run: |
pip install --upgrade "jax[cuda12-local]"
pip install numpyro
python scripts/test-jax-install.py
- name: Check nvidia drivers
shell: bash -l {0}
run: |
22 changes: 9 additions & 13 deletions environment.yml
@@ -2,21 +2,17 @@ name: quantecon
channels:
- default
dependencies:
- python=3.11
- anaconda=2024.02
- python=3.12
- anaconda=2024.10
- pip
- pip:
- jupyter-book==0.15.1
- docutils==0.17.1
- quantecon-book-theme==0.7.1
- sphinx-reredirects==0.1.3
- jupyter-book==1.0.3
- quantecon-book-theme==0.7.6
- sphinx-tojupyter==0.3.0
- sphinxext-rediraffe==0.2.7
- sphinx-exercise==0.4.1
- ghp-import==2.1.0
- sphinxcontrib-youtube==1.2.0
- sphinx-reredirects==0.1.4
- sphinx-exercise==1.0.1
- sphinx-proof==0.2.0
- ghp-import==1.1.0
- sphinxcontrib-youtube==1.3.0 # Version 1.3.0 is required as quantecon-book-theme is only compatible with sphinx<=5
- sphinx-togglebutton==0.3.2
- arviz==0.13.0
- kaleido
# Docker Requirements
- pytz
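The pins move the environment to Python 3.12 / Anaconda 2024.10 and newer documentation tooling. One way to confirm a freshly created environment resolved to the pinned versions, sketched against the pins above:

```python
# Sketch: verify the resolved environment matches the pins above.
# Version strings mirror environment.yml at the time of this change.
from importlib.metadata import PackageNotFoundError, version

pins = {
    "jupyter-book": "1.0.3",
    "quantecon-book-theme": "0.7.6",
    "sphinx-tojupyter": "0.3.0",
    "sphinx-exercise": "1.0.1",
}

for pkg, expected in pins.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        installed = "not installed"
    status = "OK" if installed == expected else f"expected {expected}"
    print(f"{pkg}: {installed} ({status})")
```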
9 changes: 9 additions & 0 deletions lectures/_admonition/gpu.md
@@ -0,0 +1,9 @@
```{admonition} GPU
:class: warning

This lecture was built using a machine with the latest CUDA and cuDNN frameworks installed and with access to a GPU.

To run this lecture on [Google Colab](https://colab.research.google.com/), click on the "play" icon at the top right, select Colab, and set the runtime environment to include a GPU.

To run this lecture on your own machine, you will need to install the software listed below this notice.
```
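The quickest way for a reader to confirm the admonition's requirement is met is to ask JAX which backend it picked, as the lectures in this PR do; a minimal sketch:

```python
# Minimal check that the runtime actually exposes a GPU to JAX
# (mirrors the check this PR adds to the lectures themselves).
import jax

print(f"JAX is using the {jax.default_backend()} backend")  # "gpu" or "cpu"
print("devices:", jax.devices())
```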
14 changes: 12 additions & 2 deletions lectures/ar1_bayes.md
@@ -13,15 +13,25 @@ kernelspec:

# Posterior Distributions for AR(1) Parameters

We'll begin with some Python imports.
```{include} _admonition/gpu.md
```

```{code-cell} ipython3
:tags: [hide-output]

!pip install numpyro jax
```

In addition to what's included in base Anaconda, we need to install the following packages

```{code-cell} ipython3
:tags: [hide-output]

!pip install arviz pymc numpyro jax
!pip install arviz pymc
```

We'll begin with some Python imports.

```{code-cell} ipython3

import arviz as az
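With `numpyro` and `jax` now installed up front, the sampler can be pointed at the GPU explicitly; a one-line sketch (not part of the lecture), assuming the CUDA-enabled JAX install from the cell above:

```python
# Sketch (not from the lecture): route NumPyro's samplers to the GPU,
# assuming the CUDA-enabled JAX install from the cell above.
import numpyro

numpyro.set_platform("gpu")  # JAX will raise at first use if no GPU is visible
```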
34 changes: 20 additions & 14 deletions lectures/back_prop.md
@@ -4,19 +4,36 @@ jupytext:
extension: .md
format_name: myst
format_version: 0.13
jupytext_version: 1.11.5
jupytext_version: 1.16.7
kernelspec:
display_name: Python 3
display_name: Python 3 (ipykernel)
language: python
name: python3
---

# Introduction to Artificial Neural Networks

```{include} _admonition/gpu.md
```

```{code-cell} ipython3
:tags: [skip-execution]

!pip install --upgrade jax
```

```{code-cell} ipython3
import jax
# Check that a GPU backend is active in this environment
print(f"JAX backend: {jax.devices()[0].platform}")
```

In addition to what's included in base Anaconda, we need to install the following packages

```{code-cell} ipython3
:tags: [hide-output]

!pip install --upgrade jax jaxlib kaleido
!pip install kaleido
!conda install -y -c plotly plotly plotly-orca retrying
```

@@ -593,15 +610,4 @@ Image(fig.to_image(format="png"))
# notebook locally
```

```{code-cell} ipython3
## to check that gpu is activated in environment

from jax.lib import xla_bridge
print(xla_bridge.get_backend().platform)
```

```{note}
**Cloud Environment:** This lecture site is built in a server environment that doesn't have access to a `gpu`.
If you run this lecture locally, this check tells you where your code is being executed: on the `cpu` or the `gpu`.
```
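The removed cell used `jax.lib.xla_bridge.get_backend().platform`, which is deprecated in recent JAX releases; the replacement added near the top of the lecture relies on `jax.devices()` instead. Both current-style checks side by side, as a sketch:

```python
# Equivalent backend checks in current JAX; jax.lib.xla_bridge is deprecated
# in recent releases, which is presumably why the check was rewritten above.
import jax

print(jax.default_backend())      # "gpu" or "cpu"
print(jax.devices()[0].platform)  # same information via the device list
```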