Zygote #249

Open · wants to merge 7 commits into base: zygote
1 change: 1 addition & 0 deletions .gitignore
@@ -1 +1,2 @@
notebooks
*.checkpoints
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
3 changes: 3 additions & 0 deletions vision/Auto Encoder/README.md
@@ -0,0 +1,3 @@
# Simple Auto Encoder

- Encoder-decoder architecture
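
A minimal sketch of the architecture (mirroring the notebook and script in this directory; the 32-dimensional encoding size is taken from that code):

```julia
using Flux

encoding_size = 32                               # size of the compressed code
encoder = Dense(28^2, encoding_size, leakyrelu)  # 784 -> 32
decoder = Dense(encoding_size, 28^2, leakyrelu)  # 32 -> 784
model   = Chain(encoder, decoder)                # reconstruct the input from its code
loss(x) = Flux.mse(model(x), x)                  # reconstruction error
```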
184 changes: 184 additions & 0 deletions vision/Auto Encoder/autoencoder.ipynb
@@ -0,0 +1,184 @@
{
"metadata": {
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": 3
},
"orig_nbformat": 2
},
"nbformat": 4,
"nbformat_minor": 2,
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"using Flux, Flux.Data.MNIST\n",
"using Flux: @epochs, onehotbatch, mse, throttle\n",
"using Base.Iterators\n",
"using CuArrays\n",
"using Images"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Encode MNIST images as compressed vectors that can later be decoded back into images.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"imgs = MNIST.images()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Partition into batches of size 1000"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = [float(hcat(vec.(imgs)...)) for imgs in partition(imgs, 1000)]\n",
"data = gpu.(data)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"N = 32 # Size of the encoding"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- You can try to make the encoder/decoder network larger\n",
"- Also, the output of encoder is a coding of the given input.\n",
"- In this case, the input dimension is 28^2 and the output dimension of\n",
"- encoder is 32. This implies that the coding is a compressed representation.\n",
"- We can make lossy compression via this `encoder`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"encoder = Dense(28^2, N, leakyrelu) |> gpu\n",
"decoder = Dense(N, 28^2, leakyrelu) |> gpu"
]
},
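{
"cell_type": "markdown",
"metadata": {},
"source": [
"Illustrative aside: a minimal sketch (assuming the `encoder`, `decoder` and `imgs` defined above) that encodes one digit to its 32-dimensional coding and decodes it back, which is the lossy compression described earlier."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = gpu(float(vec(imgs[1])))   # one 28x28 digit flattened to a 784-element vector\n",
"code = encoder(x)              # 32-dimensional coding (the compressed representation)\n",
"recon = decoder(code)          # lossy reconstruction back to 784 values\n",
"length(x), length(code)        # (784, 32): roughly 24x fewer numbers"
]
},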
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"m = Chain(encoder, decoder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loss(x) = mse(m(x), x)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"evalcb = throttle(() -> @show(loss(data[1])), 5)\n",
"opt = ADAM()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@epochs 10 Flux.train!(loss, params(m), zip(data), opt, cb = evalcb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Sample output"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"img(x::Vector) = Gray.(reshape(clamp.(x, 0, 1), 28, 28))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"function sample()\n",
" # 20 random digits\n",
" before = [imgs[i] for i in rand(1:length(imgs), 20)]\n",
" # Before and after images\n",
" after = img.(map(x -> cpu(m)(float(vec(x))).data, before))\n",
" # Stack them all together\n",
" hcat(vcat.(before, after)...)\n",
"end"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"save(\"sample.png\", sample())"
]
}
]
}
81 changes: 81 additions & 0 deletions vision/Auto Encoder/autoencoder.jl
@@ -0,0 +1,81 @@
# To add a new cell, type '# %%'
# To add a new markdown cell, type '# %% [markdown]'
# %%
using Flux, Flux.Data.MNIST
using Flux: @epochs, onehotbatch, mse, throttle
using Base.Iterators
using CuArrays
using Images

# %% [markdown]
# ## Encode MNIST images as compressed vectors that can later be decoded back into images.
#

# %%

imgs = MNIST.images()

# %% [markdown]
# # Partition into batches of size 1000

# %%
data = [float(hcat(vec.(imgs)...)) for imgs in partition(imgs, 1000)]
data = gpu.(data)


# %%
N = 32 # Size of the encoding

# %% [markdown]
# - You can try making the encoder/decoder network larger.
# - The output of the encoder is a coding of the given input.
# - Here the input dimension is 28^2 = 784 and the encoder's output dimension is 32,
#   so the coding is a compressed representation (roughly 24x fewer numbers).
# - This `encoder` can therefore be used for lossy compression; the snippet after the
#   encoder/decoder definitions below shows this for a single digit.

# %%
encoder = Dense(28^2, N, leakyrelu) |> gpu
decoder = Dense(N, 28^2, leakyrelu) |> gpu
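
# %% [markdown]
# Illustrative aside: a minimal sketch of using the encoder on its own, assuming the
# `encoder`, `decoder` and `imgs` defined above. It encodes one digit to its
# 32-dimensional coding and decodes it back, which is the lossy compression mentioned earlier.

# %%
x = gpu(float(vec(imgs[1])))   # one 28x28 digit flattened to a 784-element vector
code = encoder(x)              # 32-dimensional coding (the compressed representation)
recon = decoder(code)          # lossy reconstruction back to 784 values
length(x), length(code)        # (784, 32): roughly 24x fewer numbers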


# %%
m = Chain(encoder, decoder)


# %%
loss(x) = mse(m(x), x)


# %%

evalcb = throttle(() -> @show(loss(data[1])), 5)
opt = ADAM()

# %% [markdown]
# ## Train

# %%
@epochs 10 Flux.train!(loss, params(m), zip(data), opt, cb = evalcb)

# %% [markdown]
# # Sample output

# %%
img(x::Vector) = Gray.(reshape(clamp.(x, 0, 1), 28, 28))


# %%
function sample()
# 20 random digits
before = [imgs[i] for i in rand(1:length(imgs), 20)]
# Before and after images
after = img.(map(x -> cpu(m)(float(vec(x))), before))
# Stack them all together
hcat(vcat.(before, after)...)
end


# %%
save("sample.png", sample())


File renamed without changes.
File renamed without changes.
5 changes: 5 additions & 0 deletions vision/Compositional Pattern Network/README.md
@@ -0,0 +1,5 @@
# Generating Abstract Patterns with Compositional Pattern Producing Network

- [Link to post](https://blog.otoro.net/2016/03/25/generating-abstract-patterns-with-tensorflow/)
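
As a rough, illustrative sketch of the idea described in the post (the names, layer sizes, and `pattern` helper here are assumptions, not the code in this directory): a CPPN maps each pixel's coordinates and its distance from the centre, together with a shared latent vector, through a small network to an intensity; sampling a new latent vector gives a new abstract pattern.

```julia
using Flux, Images

latent_dim = 8
cppn = Chain(Dense(3 + latent_dim, 32, tanh),   # per-pixel inputs: x, y, r, plus the latent z
             Dense(32, 32, tanh),
             Dense(32, 1, σ))                   # intensity in (0, 1)

function pattern(side = 256)
    z  = randn(Float32, latent_dim)             # one latent vector shared by every pixel
    xs = range(-1f0, stop = 1f0, length = side)
    shade(x, y) = cppn(vcat(Float32(x), Float32(y), Float32(sqrt(x^2 + y^2)), z))[1]
    Gray.([shade(x, y) for y in xs, x in xs])
end

save("cppn_sample.png", pattern())
```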

