v0.0.9 documentation updates (#417)
* Move assets

* Add graphics to README

* Add setup docs

* Revert test code

* Add to setup

* Add texture projection instructions

* Document inpaint/outpaint

* Update upscaling and history docs

* Update option docs

* Update render pass docs

* Bump version to 0.0.9
carson-katri committed Dec 15, 2022
1 parent f352cca commit 395141a
Showing 43 changed files with 281 additions and 65 deletions.
28 changes: 23 additions & 5 deletions README.md
@@ -1,4 +1,4 @@
![Dream Textures, subtitle: Stable Diffusion built-in to Blender](readme_assets/banner.png)
![Dream Textures, subtitle: Stable Diffusion built-in to Blender](docs/assets/banner.png)

[![Latest Release](https://flat.badgen.net/github/release/carson-katri/dream-textures)](https://github.com/carson-katri/dream-textures/releases/latest)
[![Join the Discord](https://flat.badgen.net/badge/icon/discord?icon=discord&label)](https://discord.gg/EmDJ8CaWZ7)
@@ -7,7 +7,7 @@

* Create textures, concept art, background assets, and more with a simple text prompt
* Use the 'Seamless' option to create textures that tile perfectly with no visible seam
* Quickly create variations on an existing texture
* Texture entire scenes with 'Project Dream Texture' and depth to image
* Re-style animations with the Cycles render pass
* Run the models on your machine to iterate without slowdowns from a service

@@ -23,17 +23,35 @@ If you want a visual guide to installation, see this video tutorial from Ashlee

Here are a few quick guides:

## [Setting Up](docs/SETUP.md)
Setup instructions for various platforms and configurations.

## [Image Generation](docs/IMAGE_GENERATION.md)
Create textures, concept art, and more with text prompts. Learn how to use the various configuration options to get exactly what you're looking for.

## [Inpainting](docs/INPAINTING.md)
Fix up images and convert existing textures into seamless ones automatically.
![A graphic showing each step of the image generation process](docs/assets/image_generation.png)

## [Texture Projection](docs/TEXTURE_PROJECT.md)
Texture entire models and scenes with depth to image.

![A graphic showing each step of the texture projection process](docs/assets/texture_projection.png)

## [Inpaint/Outpaint](docs/INPAINT_OUTPAINT.md)
Inpaint to fix up images and convert existing textures into seamless ones automatically.

Outpaint to increase the size of an image by extending it in any direction.

![A graphic showing each step of the outpainting process](docs/assets/inpaint_outpaint.png)

## [Render Pass](docs/RENDER_PASS.md)
Perform style transfer and create novel animations with Stable Diffusion as a post processing step.

![A graphic showing each frame of a render pass, split with the original and generated result](docs/assets/render_pass.png)

## [AI Upscaling](docs/AI_UPSCALING.md)
Convert your low-res generations to 2K, 4K, and higher with Real-ESRGAN built-in.
Upscale your low-res generations 4x.

![A graphic showing each step of the upscaling process](docs/assets/upscale.png)

## [History](docs/HISTORY.md)
Recall, export, and import history entries for later use.
2 changes: 1 addition & 1 deletion __init__.py
@@ -16,7 +16,7 @@
"author": "Dream Textures contributors",
"description": "Use Stable Diffusion to generate unique textures straight from the shader editor.",
"blender": (3, 0, 0),
"version": (0, 0, 8),
"version": (0, 0, 9),
"location": "Image Editor -> Sidebar -> Dream",
"category": "Paint"
}
24 changes: 17 additions & 7 deletions docs/AI_UPSCALING.md
@@ -1,14 +1,24 @@
# AI Upscaling
Real-ESRGAN is built-in to the addon to upscale any generated image 2-4x the original size.
Use the Stable Diffusion upscaler to increase images to 4x their original size while retaining detail. You can guide the upscaler with a text prompt.

> You must setup the Real-ESRGAN weights separately from the Stable Diffusion weights before upscaling. The *AI Upscaling* panel contains instructions for downloading them.
> Upscaling uses the model `stabilityai/stable-diffusion-x4-upscaler`. This model will be downloaded automatically the first time the operator runs.
1. Open the image to upscale an *Image Editor* space
Use the AI Upscaling panel to access this tool.

1. Open the image to upscale in an *Image Editor* space
2. Expand the *AI Upscaling* panel, located in the *Dream* sidebar tab
3. Choose a target size and click *Upscale*
3. Type a prompt to subtly influence the generation.
4. Optionally configure the tile size, blend, and other advanced options.

![](assets/ai_upscaling/panel.png)

The upscaled image will be opened in the *Image Editor*. The image will be named `Source Image Name (Upscaled)`.
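Outside of Blender, the same upscaler can be driven with the `diffusers` library. This is a minimal sketch, not the add-on's actual code; the file names and prompt are placeholders.

```python
# Minimal sketch: 4x upscaling with diffusers (not the add-on's actual code).
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("texture.png").convert("RGB")  # placeholder file name
result = pipe(prompt="a brick wall texture", image=low_res).images[0]
result.save("texture_upscaled.png")
```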

## Tile Size
Due to the large VRAM consumption of the `stabilityai/stable-diffusion-x4-upscaler` model, the input image is split into tiles; each tile is upscaled independently, then the results are stitched back together.

> Some GPUs will require Full Precision to be enabled.
The default tile size is 128x128; each tile upscales to a 512x512 image, and these 512x512 images are stitched back together to form the final image.

![A screenshot of the AI Upscaling panel set to 2 times target size and full precision enabled](../readme_assets/upscaling.png)
You can increase or decrease the tile size depending on your GPU's capabilities.

The upscaled image will be opened in the *Image Editor*. The image will be named `Source Image Name (Upscaled)`.
The *Blend* parameter controls how much overlap is included in the tiles to help reduce visible seams.
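
To picture how the tile size and blend interact, here is an illustrative sketch of overlapped tiling. The function name and stitching details are assumptions, not the add-on's implementation.

```python
# Illustrative overlapped tiling (assumed logic, not the add-on's code).
from PIL import Image

def iter_tiles(image: Image.Image, tile: int = 128, blend: int = 32):
    """Yield crop boxes that cover `image`, overlapping by `blend` pixels."""
    step = tile - blend
    for top in range(0, image.height, step):
        for left in range(0, image.width, step):
            yield (left, top,
                   min(left + tile, image.width),
                   min(top + tile, image.height))

# Each cropped tile would be upscaled 4x, then pasted into the final
# canvas with the overlapping regions feathered to hide the seams.
```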
4 changes: 2 additions & 2 deletions docs/HISTORY.md
@@ -9,11 +9,11 @@ You can also export the selected prompt to JSON for later import. This is a more
2. Click the export icon button
3. Save the JSON file to your computer

![A screenshot of the History panel with the Export icon button highlighted](../readme_assets/history-export.png)
![A screenshot of the History panel with the Export icon button highlighted](assets/history/history-export.png)

### Import
1. Select the import icon button in the header of the *Dream Texture* panel
2. Open a valid prompt JSON file
3. Every configuration option will be loaded in

![A screenshot of the Dream Texture panel with the Import icon button highlighted](../readme_assets/history-import.png)
![A screenshot of the Dream Texture panel with the Import icon button highlighted](assets/history/history-import.png)
62 changes: 44 additions & 18 deletions docs/IMAGE_GENERATION.md
@@ -1,14 +1,26 @@
# Image Generation
1. To open Dream Textures, go to an Image Editor or Shader Editor
1. Ensure the sidebar is visible by pressing *N* or checking *View* > *Sidebar*
2. Select the 'Dream' panel to open the interface
2. Select the *Dream* panel to open the interface

![A screenshot showing the 'Dream' panel in an Image Editor space](../readme_assets/opening-ui.png)
![A screenshot showing the 'Dream' panel in an Image Editor space](assets/image_generation/opening-ui.png)

Enter a prompt then click *Generate*. It can take anywhere from a few seconds to a few minutes to generate, depending on your graphics card.
Enter a prompt then click *Generate*. It can take anywhere from a few seconds to a few minutes to generate, depending on your GPU.

## Options

### Pipeline
Two options are currently available:
* Stable Diffusion - for local generation
* DreamStudio - for cloud processing

Which options appear depends on the version you installed and the keys provided in the add-on preferences.

### Model
Choose from any installed model. Some options require specific kinds of model.

For example, depth to image requires a depth model such as `stabilityai/stable-diffusion-2-depth`, and inpainting requires an inpainting model such as `stabilityai/stable-diffusion-2-inpainting`.

### Prompt

A few presets are available to help you create great prompts. They work by asking you to fill in a few simple fields, then generate a full prompt string that is passed to Stable Diffusion.
@@ -18,6 +30,8 @@ The default preset is *Texture*. It asks for a subject, and adds the word `textu
### Seamless
Checking seamless will use a circular convolution to create a perfectly seamless image, which works great for textures.

You can also specify which axes should be seamless.
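
One common way to implement this, sketched below under the assumption that generation runs through PyTorch convolutions (the add-on's internals may differ), is to switch every convolution to wrap-around padding:

```python
# Sketch: circular (wrap-around) padding makes convolution outputs tile
# seamlessly at the borders. Assumed approach, not the add-on's exact code.
import torch.nn as nn

def make_seamless(model: nn.Module) -> None:
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # Conv2d consults padding_mode at forward time, so this patch
            # takes effect immediately. Per-axis seamlessness would need
            # asymmetric padding and is omitted here.
            module.padding_mode = "circular"
```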

### Negative
Enabling negative prompts gives you finer control over your image. For example, if you asked for a `cloud city` but wanted to remove the buildings it added, you could enter the negative prompt `building`. This tells Stable Diffusion to avoid drawing buildings. You can add as much content as you want to the negative prompt, and it will avoid everything entered.
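
For reference, the same idea expressed through the `diffusers` API looks roughly like this (the model id and prompts are placeholders):

```python
# Rough diffusers equivalent of a negative prompt (illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16  # placeholder model
).to("cuda")

image = pipe(
    prompt="cloud city",
    negative_prompt="building",  # steer the sampler away from buildings
).images[0]
```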

@@ -28,28 +42,40 @@ Most graphics cards with 4+GB of VRAM should be able to generate 512x512 images.

> Stable Diffusion was trained on 512x512 images, so you will get the best results at this size (or at least when leaving one dimension at 512).
### Inpaint Open Image
See [Inpainting](INPAINTING.md) for more information.
### Source Image
Choose an image from a specific *File*, or use the currently *Open Image*.

### Init Image
Specifies an image to mix with the latent noise. Open any image, and Stable Diffusion will match the style, composition, etc. from it.
Three actions are available that work on a source image.

#### Modify
Mixes the image with noise at the ratio specified by *Noise Strength*. This makes Stable Diffusion match its style, composition, etc.

Strength specifies how much latent noise to mix with the image. A higher strength means more latent noise, and more deviation from the init image. If you want the result to stick closer to the image, decrease the strength.

> Depending on the strength value, some steps will be skipped. For example, if you specified `10` steps and set strength to `0.5`, only `5` steps would be used.
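
A small sketch of that arithmetic (this mirrors how diffusers-style img2img skips the start of the schedule; the function name is an assumption):

```python
# Illustrative: img2img only runs the tail of the step schedule.
def effective_steps(num_steps: int, strength: float) -> int:
    return min(int(num_steps * strength), num_steps)

assert effective_steps(10, 0.5) == 5  # the example above
```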
Fit to width/height will ensure the image is contained within the configured size.

The *Image Type* setting has a few options:
1. Color - Mixes the image with noise

> The following options require a depth model to be selected, such as `stabilityai/stable-diffusion-2-depth`. Follow the instructions to [download a model](SETUP.md#download-a-model).
2. Color and Generated Depth - Uses MiDaS to infer the depth of the initial image and includes it in the conditioning. Can give results that more closely match the composition of the source image.
3. Color and Depth Map - Specify a secondary image to use as the depth map, instead of generating one with MiDaS.
4. Depth - Treats the initial image as a depth map, and ignores any color. The generated image will match the composition but not the colors of the original.
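
As a point of reference, these modes map onto the `diffusers` depth-to-image pipeline roughly as follows (a sketch; file names and prompt are placeholders):

```python
# Rough diffusers equivalent of the depth-aware image types (illustrative).
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init = Image.open("room.png").convert("RGB")  # placeholder file name
# depth_map=None -> depth is inferred from `init` with MiDaS
# ("Color and Generated Depth"); pass depth_map=... for "Color and Depth Map".
result = pipe(prompt="a cozy cabin interior", image=init, strength=0.75).images[0]
```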

### Advanced
You can have more control over the generation by trying different values for these parameters:
* Precision - the math precision
* Automatic - chooses the best option for your GPU
* Full Precision - uses 32-bit floats, required on some GPUs
* Half Precision - uses 16-bit floats, faster
* Autocast - uses the correct precision for each PyTorch operation
* Random Seed - when enabled, a seed will be selected for you
* Seed - the value used to seed RNG, if text is input instead of a number its hash will be used
* Steps - number of sampler steps, higher steps will give the sampler more time to converge and clear up artifacts
* CFG Scale - how strongly the prompt influences the output
* Sampler - the sampling method to use, all samplers (except for KEULER_A and KDPM_2A) will produce the same image if given enough steps
* Show Steps - whether to show each step in the Image Editor, can slow down generation significantly

* Random Seed - When enabled, a seed will be selected for you
* Seed - The value used to seed RNG, if text is input instead of a number its hash will be used
* Steps - Number of sampler steps, higher steps will give the sampler more time to converge and clear up artifacts
* CFG Scale - How strongly the prompt influences the output
* Scheduler - Some schedulers take fewer steps to produce a good result than others. Try each one and see what you prefer.
* Step Preview - Whether to show each step in the image editor. Defaults to 'Fast', which samples the latents without using the VAE. 'Accurate' will run the latents through the VAE at each step and slow generation significantly.
* Speed Optimizations - Various optimizations to increase generation speed, some at the cost of VRAM. Recommended default is *Half Precision*.
* Memory Optimizations - Various optimizations to reduce VRAM consumption, some at the cost of speed. Recommended default is *Attention Slicing* with *Automatic* slice size.
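
Two of these are easy to picture in code. The seed-hashing sketch below is an assumption about behaviour, not the add-on's exact hash; the attention-slicing call is the standard diffusers API:

```python
# Illustrative seed handling: numeric input is used directly, anything
# else is hashed to an integer (the add-on's exact hash may differ).
import hashlib

def resolve_seed(seed_text: str) -> int:
    if seed_text.isdigit():
        return int(seed_text)
    return int.from_bytes(hashlib.sha256(seed_text.encode()).digest()[:4], "big")

# Memory optimization: attention slicing via the standard diffusers call,
# e.g. pipe.enable_attention_slicing("auto")  # "Automatic" slice size
```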

### Iterations
How many images to generate. This is mainly useful when *Random Seed* is enabled.
22 changes: 0 additions & 22 deletions docs/INPAINTING.md

This file was deleted.

67 changes: 67 additions & 0 deletions docs/INPAINT_OUTPAINT.md
@@ -0,0 +1,67 @@
# Inpaint/Outpaint

This guide shows how to use both [inpainting](#inpainting) and [outpainting](#outpainting).

> For both inpainting and outpainting you *must* use a model fine-tuned for inpainting, such as `stabilityai/stable-diffusion-2-inpainting`. Follow the instructions to [download a model](SETUP.md#download-a-model).
# Inpainting
Inpainting refers to filling in or replacing parts of an image. It can also be used to [make existing textures seamless](#making-textures-seamless).

The quickest way to inpaint is with the *Mark Inpaint Area* brush.

1. Use the *Mark Inpaint Area* brush to erase the parts of the image you want to replace
2. Enter a prompt for what should fill the erased area
3. Enable *Source Image*, select the *Open Image* source and the *Inpaint* action
4. Choose the *Alpha Channel* mask source
5. Click *Generate*

![](assets/inpaint_outpaint/inpaint.png)
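
The same workflow expressed with `diffusers` looks roughly like this (a sketch; file names and prompt are placeholders, and the mask is derived from the alpha channel as in step 4):

```python
# Rough diffusers equivalent of alpha-channel inpainting (illustrative).
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

img = Image.open("texture.png")  # placeholder; RGBA with erased (alpha=0) areas
# White pixels in the mask are regenerated, matching the erased areas.
mask = img.split()[-1].point(lambda a: 255 if a == 0 else 0)
result = pipe(prompt="a mossy stone wall",
              image=img.convert("RGB"), mask_image=mask).images[0]
```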

## Making Textures Seamless
Inpainting can also be used to make an existing texture seamless.

1. Use the *Mark Inpaint Area* brush to remove the edges of the image
2. Enter a prompt that describes the texture, and check *Seamless*
3. Enable *Source Image*, select the *Open Image* source and the *Inpaint* action
4. Click *Generate*

![](assets/inpaint_outpaint/seamless_inpaint.png)

# Outpainting
Outpainting refers to extending an image beyond its original size. Use an inpainting model such as `stabilityai/stable-diffusion-2-inpainting` for outpainting as well.

1. Select an image to outpaint and open it in an Image Editor
2. Choose a size; this is how large the outpaint will be
3. Enable *Source Image*, select the *Open Image* source and the *Outpaint* action
4. Set the origin of the outpaint. See [Choosing an Origin](#choosing-an-origin) for more info.

### Choosing an Origin
The top left corner of the image is (0, 0), with the bottom right corner being (width, height).

You should always include overlap or the outpaint will be completely unrelated to the original. The add-on will warn you if you do not include any.

Take the image below for example. We want to outpaint the bottom right side. Let's figure out the correct origin.

Here's what we know:
1. We know our image is 512x960. You can find this in the sidebar on the *Image* tab.
2. We set the size of the outpaint to 512x512 in the *Dream* tab

With this information we can calculate:
1. The X origin will be the width of the image minus some overlap. The width is 512px, and we want 64px of overlap. So the X origin will be set to `512 - 64` or `448`.
2. The Y origin will be the height of the image minus the height of the outpaint size. The height of the image is 960px, and the height of the outpaint is 512px. So the Y origin will be set to `960 - 512` or `448`.

> Tip: You can enter math expressions into any Blender input field.
![](assets/inpaint_outpaint/outpaint_origin.png)

After selecting this origin, we can outpaint the bottom right side.

![](assets/inpaint_outpaint/outpaint.gif)

Here are other values we could have used for other parts of the image:

* Bottom Left: `(-512 + 64, 960 - 512)` or `(-448, 448)`
* Top Right: `(512 - 64, 0)` or `(448, 0)`
* Top Left: `(-512 + 64, 0)` or `(-448, 0)`
* Top: `(0, -512 + 64)` or `(0, -448)`
* Bottom: `(0, 960 - 64)` or `(0, 896)`
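
The arithmetic above generalizes into a small helper (illustrative; it assumes 64px of overlap and a 512x960 source like the example):

```python
# Illustrative helper reproducing the origin arithmetic above.
def outpaint_origin(image_w, image_h, out_w, out_h, side, overlap=64):
    return {
        "bottom_right": (image_w - overlap, image_h - out_h),
        "bottom_left":  (-out_w + overlap,  image_h - out_h),
        "top_right":    (image_w - overlap, 0),
        "top_left":     (-out_w + overlap,  0),
        "top":          (0, -out_h + overlap),
        "bottom":       (0, image_h - overlap),
    }[side]

print(outpaint_origin(512, 960, 512, 512, "bottom_right"))  # (448, 448)
```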
