30 changes: 15 additions & 15 deletions README.md
@@ -34,7 +34,7 @@
![ComfyUI Screenshot](https://github.com/user-attachments/assets/7ccaf2c1-9b72-41ae-9a89-5688c94b7abe)
</div>

ComfyUI lets you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. Available on Windows, Linux, and macOS.
ComfyUI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface. Available on Windows, Linux, and macOS.

## Get Started

@@ -84,7 +84,7 @@ See what ComfyUI can do with the [example workflows](https://comfyanonymous.gith
- [ACE Step](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
- 3D Models
- [Hunyuan3D 2.0](https://docs.comfy.org/tutorials/3d/hunyuan3D-2)
- Asynchronous Queue system
- Asynchronous queue system
- Many optimizations: only re-executes the parts of the workflow that change between executions.
- Smart memory management: can automatically run large models on GPUs with as little as 1GB of VRAM, using smart offloading.
- Works even if you don't have a GPU, with ```--cpu``` (slow)
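
The partial re-execution optimization above can be sketched as memoizing each node's output under a hash of its inputs. This is an illustrative toy (the function and cache names here are invented, not ComfyUI's actual caching code):

```python
import hashlib
import json

# Toy cache of node outputs, keyed by a hash of (node id, inputs).
_node_cache = {}

def run_node(node_id, inputs, compute):
    """Re-run `compute` only when this node's inputs changed since the last run."""
    key = hashlib.sha256(
        json.dumps({"id": node_id, "inputs": inputs}, sort_keys=True).encode()
    ).hexdigest()
    if key not in _node_cache:
        _node_cache[key] = compute(inputs)  # the expensive step, e.g. sampling
    return _node_cache[key]
```

Re-running a workflow with identical inputs then serves cached results, and only nodes whose input hash changed are recomputed.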
@@ -168,7 +168,7 @@ ComfyUI follows a weekly release cycle targeting Friday but this regularly changes

## Windows Portable

There is a portable standalone build for Windows that should work for running on Nvidia GPUs or for running on your CPU only on the [releases page](https://github.com/comfyanonymous/ComfyUI/releases).
There is a portable standalone build for Windows that should work for running on NVIDIA GPUs or for running on your CPU only on the [releases page](https://github.com/comfyanonymous/ComfyUI/releases).

### [Direct link to download](https://github.com/comfyanonymous/ComfyUI/releases/latest/download/ComfyUI_windows_portable_nvidia.7z)

@@ -201,7 +201,7 @@ Put your VAE in: models/vae


### AMD GPUs (Linux only)
AMD users can install rocm and pytorch with pip if you don't have it already installed, this is the command to install the stable version:
AMD users can install ROCm and PyTorch with pip if you don't already have them installed. This is the command to install the stable version:

```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.4```

@@ -217,21 +217,21 @@ This is the command to install the nightly with ROCm 6.4 which might have some p

```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu```

This is the command to install the Pytorch xpu nightly which might have some performance improvements:
This is the command to install the PyTorch XPU nightly which might have some performance improvements:

```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu```

(Option 2) Alternatively, Intel GPUs supported by Intel Extension for PyTorch (IPEX) can leverage IPEX for improved performance.

1. visit [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu) for more information.
1. Visit [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu) for more information.

### NVIDIA

Nvidia users should install stable pytorch using this command:
NVIDIA users should install stable PyTorch using this command:

```pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu129```

This is the command to install pytorch nightly instead which might have performance improvements.
This is the command to install PyTorch nightly instead which might have performance improvements.

```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu129```

@@ -255,9 +255,9 @@ After this you should have everything installed and can proceed to running Comfy

#### Apple Mac silicon

You can install ComfyUI in Apple Mac silicon (M1 or M2) with any recent macOS version.
You can install ComfyUI on Apple silicon (M1 or M2) with any recent macOS version.

1. Install pytorch nightly. For instructions, read the [Accelerated PyTorch training on Mac](https://developer.apple.com/metal/pytorch/) Apple Developer guide (make sure to install the latest pytorch nightly).
1. Install PyTorch nightly. For instructions, read the [Accelerated PyTorch training on Mac](https://developer.apple.com/metal/pytorch/) Apple Developer guide (make sure to install the latest PyTorch nightly).
1. Follow the [ComfyUI manual installation](#manual-install-windows-linux) instructions for Windows and Linux.
1. Install the ComfyUI [dependencies](#dependencies). If you have another Stable Diffusion UI [you might be able to reuse the dependencies](#i-already-have-another-ui-for-stable-diffusion-installed-do-i-really-have-to-install-all-of-these-dependencies).
1. Launch ComfyUI by running `python main.py`
@@ -266,7 +266,7 @@ You can install ComfyUI in Apple Mac silicon (M1 or M2) with any recent macOS ve

#### DirectML (AMD Cards on Windows)

This is very badly supported and is not recommended. There are some unofficial builds of pytorch ROCm on windows that exist that will give you a much better experience than this. This readme will be updated once official pytorch ROCm builds for windows come out.
This is poorly supported and not recommended. There are some unofficial builds of PyTorch ROCm on Windows that will give you a much better experience than this. This README will be updated once official PyTorch ROCm builds for Windows come out.

```pip install torch-directml``` Then you can launch ComfyUI with: ```python main.py --directml```

@@ -308,11 +308,11 @@ For AMD 7600 and maybe other RDNA3 cards: ```HSA_OVERRIDE_GFX_VERSION=11.0.0 pyt

### AMD ROCm Tips

You can enable experimental memory efficient attention on recent pytorch in ComfyUI on some AMD GPUs using this command, it should already be enabled by default on RDNA3. If this improves speed for you on latest pytorch on your GPU please report it so that I can enable it by default.
You can enable experimental memory-efficient attention on recent PyTorch in ComfyUI on some AMD GPUs using this command; it should already be enabled by default on RDNA3. If this improves speed for you on the latest PyTorch on your GPU, please report it so that I can enable it by default.

```TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention```

You can also try setting this env variable `PYTORCH_TUNABLEOP_ENABLED=1` which might speed things up at the cost of a very slow initial run.
You can also try setting this environment variable `PYTORCH_TUNABLEOP_ENABLED=1`, which might speed things up at the cost of a very slow initial run.

# Notes

@@ -328,7 +328,7 @@ You can use {day|night} for wildcard/dynamic prompts. With this syntax "{wild|c

Dynamic prompts also support C-style comments, like `// comment` or `/* comment */`.
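
A minimal sketch of how such dynamic prompts could be resolved — a hypothetical helper, not ComfyUI's actual implementation: strip the C-style comments first, then replace each `{a|b}` group with one randomly chosen alternative:

```python
import random
import re

def resolve_dynamic_prompt(prompt, rng=None):
    """Strip C-style comments, then replace each {a|b|c} group with one
    randomly chosen alternative (innermost groups first, so nesting works)."""
    rng = rng or random.Random()
    prompt = re.sub(r"/\*.*?\*/", "", prompt, flags=re.DOTALL)  # /* block */ comments
    prompt = re.sub(r"//[^\n]*", "", prompt)                    # // line comments
    pattern = re.compile(r"\{([^{}]*)\}")
    while (match := pattern.search(prompt)):
        choice = rng.choice(match.group(1).split("|"))
        prompt = prompt[: match.start()] + choice + prompt[match.end():]
    return " ".join(prompt.split())  # collapse whitespace left by stripped comments
```

For example, `resolve_dynamic_prompt("a {day|night} scene // lighting note")` yields either `a day scene` or `a night scene`.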

To use a textual inversion concepts/embeddings in a text prompt put them in the models/embeddings directory and use them in the CLIPTextEncode node like this (you can omit the .pt extension):
To use textual inversion concepts/embeddings in a text prompt, put them in the models/embeddings directory and use them in the CLIPTextEncode node like this (you can omit the .pt extension):

```embedding:embedding_filename.pt```

@@ -351,7 +351,7 @@ Use `--tls-keyfile key.pem --tls-certfile cert.pem` to enable TLS/SSL, the app w
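
For local testing, a self-signed key/cert pair matching those flag values can be generated with OpenSSL (the subject name is just an example; browsers will warn about self-signed certificates):

```shell
# Generate a self-signed key/cert pair valid for 10 years (local testing only).
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
  -sha256 -days 3650 -nodes -subj "/CN=localhost"
```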

[Discord](https://comfy.org/discord): Try the #help or #feedback channels.

[Matrix space: #comfyui_space:matrix.org](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) (it's like discord but open source).
[Matrix space: #comfyui_space:matrix.org](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) (it's like Discord but open-source).

See also: [https://www.comfy.org/](https://www.comfy.org/)

2 changes: 1 addition & 1 deletion api_server/routes/internal/README.md
@@ -1,3 +1,3 @@
# ComfyUI Internal Routes

All routes under the `/internal` path are designated for **internal use by ComfyUI only**. These routes are not intended for use by external applications may change at any time without notice.
All routes under the `/internal` path are designated for **internal use by ComfyUI only**. These routes are not intended for use by external applications and may change at any time without notice.
14 changes: 7 additions & 7 deletions comfy_api_nodes/README.md
@@ -2,7 +2,7 @@

## Introduction

Below are a collection of nodes that work by calling external APIs. More information available in our [docs](https://docs.comfy.org/tutorials/api-nodes/overview).
Below is a collection of nodes that work by calling external APIs. More information is available in our [docs](https://docs.comfy.org/tutorials/api-nodes/overview).

## Development

@@ -12,13 +12,13 @@ While developing, you should be testing against the Staging environment. To test

Follow the instructions [here](https://github.com/Comfy-Org/ComfyUI_frontend) to start the frontend server. By default, it will connect to Staging authentication.

> **Hint:** If you use --front-end-version argument for ComfyUI, it will use production authentication.
> **Hint:** If you use the `--front-end-version` argument for ComfyUI, it will use production authentication.

```bash
python run main.py --comfy-api-base https://stagingapi.comfy.org
python main.py --comfy-api-base https://stagingapi.comfy.org
```

To authenticate to staging, please login and then ask one of Comfy Org team to whitelist you for access to staging.
To authenticate to staging, please log in and then ask a member of the Comfy Org team to grant you access to staging.

API stubs are generated through automatic codegen tools from OpenAPI definitions. Since the Comfy Org OpenAPI definition also contains many things from the Comfy Registry, we use redocly/cli to keep only the paths relevant to API nodes.

@@ -37,7 +37,7 @@ curl -o openapi.yaml https://stagingapi.comfy.org/openapi
npm install -g @redocly/cli
redocly bundle openapi.yaml --output filtered-openapi.yaml --config comfy_api_nodes/redocly-dev.yaml --remove-unused-components

# Generate the pydantic datamodels for validation.
# Generate the Pydantic data models for validation.
datamodel-codegen --use-subclass-enum --field-constraints --strict-types bytes --input filtered-openapi.yaml --output comfy_api_nodes/apis/__init__.py --output-model-type pydantic_v2.BaseModel

```
@@ -47,7 +47,7 @@

Before merging to comfyanonymous/ComfyUI master, follow these steps:

1. Add the "Released" tag to the ComfyUI OpenAPI yaml file for each endpoint you are using in the nodes.
1. Add the "Released" tag to the ComfyUI OpenAPI YAML file for each endpoint you are using in the nodes.
1. Make sure the ComfyUI API is deployed to prod with your changes.
1. Run the code generation again with `redocly.yaml` and the production OpenAPI YAML file.

@@ -59,7 +59,7 @@ curl -o openapi.yaml https://api.comfy.org/openapi
npm install -g @redocly/cli
redocly bundle openapi.yaml --output filtered-openapi.yaml --config comfy_api_nodes/redocly.yaml --remove-unused-components

# Generate the pydantic datamodels for validation.
# Generate the Pydantic data models for validation.
datamodel-codegen --use-subclass-enum --field-constraints --strict-types bytes --input filtered-openapi.yaml --output comfy_api_nodes/apis/__init__.py --output-model-type pydantic_v2.BaseModel

```
6 changes: 3 additions & 3 deletions tests/README.md
@@ -6,8 +6,8 @@ Additional requirements for running tests:
```
pip install pytest
pip install websocket-client==1.6.1
opencv-python==4.6.0.66
scikit-image==0.21.0
pip install opencv-python==4.6.0.66
pip install scikit-image==0.21.0
```
Run inference tests:
```
@@ -26,4 +26,4 @@ Compares images in 2 directories to ensure they are the same
3) Run inference and quality comparison tests
```
pytest
```
```