Merged
5 changes: 4 additions & 1 deletion .gitignore
@@ -242,4 +242,7 @@ proxy_config.txt
gradio_outputs/
acestep/third_parts/vllm/
test_lora_scale_fix.py
lokr_output/
lokr_output/

# macOS
.DS_Store
5 changes: 5 additions & 0 deletions acestep/ui/streamlit/.gitignore
@@ -0,0 +1,5 @@
__pycache__/
*.pyc
.cache/
projects/
streamlit.log
22 changes: 22 additions & 0 deletions acestep/ui/streamlit/.streamlit/config.toml
@@ -0,0 +1,22 @@
[theme]
primaryColor = "#FF6B9D"
backgroundColor = "#0F1419"
secondaryBackgroundColor = "#262730"
textColor = "#FAFAFA"
font = "sans serif"

[client]
toolbarMode = "minimal"
showErrorDetails = true

[browser]
gatherUsageStats = false

[logger]
level = "info"

[server]
maxUploadSize = 200
enableXsrfProtection = true
port = 8501
headless = true
1 change: 1 addition & 0 deletions acestep/ui/streamlit/.streamlit/secrets.toml
@@ -0,0 +1 @@
# Empty secrets file - prevents email prompt
152 changes: 152 additions & 0 deletions acestep/ui/streamlit/INSTALL.md
@@ -0,0 +1,152 @@
"""
ACE Studio Streamlit - Installation & Setup Guide
"""

# Installation Instructions

## Prerequisites

- Python 3.8+ (tested with 3.11)
- ACE-Step main project installed (parent directory)
- pip or uv for package management

## Step 1: Install Dependencies

From the `acestep/ui/streamlit` directory:

```bash
pip install -r requirements.txt
```

Or with uv (faster):

```bash
uv pip install -r requirements.txt
```

## Step 2: Configure (Optional)

Edit `config.py` to customize:
- Default generation parameters
- UI appearance
- Storage paths
- Audio formats

## Step 3: Run the App

```bash
streamlit run main.py
```

The app will open at `http://localhost:8501`

## System Requirements

### Minimum
- 4GB VRAM (CPU only)
- Intel i5 or equivalent
- 2GB RAM
Comment on lines +44 to +48
⚠️ Potential issue | 🟡 Minor

Contradictory "VRAM (CPU only)" specification.

"4GB VRAM" refers to GPU memory, which is unavailable in a CPU-only setup. The minimum row should either drop the VRAM column or replace it with something like "No discrete GPU required."

✏️ Suggested fix

```diff
 ### Minimum
-- 4GB VRAM (CPU only)
+- No discrete GPU required (CPU mode)
 - Intel i5 or equivalent
 - 2GB RAM
```


### Recommended
- 8GB+ VRAM (GPU)
- RTX 3060 or equivalent
- 8GB+ RAM

### Optimal
- 16GB+ VRAM
- RTX 4090 or A100
- 16GB+ RAM

## GPU Support

### CUDA (NVIDIA)
Preinstalled CUDA 12.1+

### ROCm (AMD)
Set environment variable:
```bash
export PYTORCH_HIP_ALLOC_CONF=":256:8"
```
Comment on lines +65 to +69
⚠️ Potential issue | 🟠 Major


Wrong environment variable name/value for ROCm memory configuration.

The snippet mixes up two distinct environment variables:

  • HIPBLAS_WORKSPACE_CONFIG takes the :[SIZE]:[COUNT] format — e.g. HIPBLAS_WORKSPACE_CONFIG=:4096:2:16:8.
  • PYTORCH_HIP_ALLOC_CONF takes comma-separated key=value pairs such as expandable_segments:True to avoid memory fragmentation.

The current line PYTORCH_HIP_ALLOC_CONF=":256:8" uses the hipBLAS format with the wrong variable name, so it will be silently ignored. Replace it with the intended variable:

✏️ Suggested fix (pick one based on intent)

````diff
 ### ROCm (AMD)
 Set environment variable:
 ```bash
-export PYTORCH_HIP_ALLOC_CONF=":256:8"
+# To reduce memory fragmentation:
+export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True
+# Or to control hipBLAS workspace size:
+export HIPBLAS_WORKSPACE_CONFIG=:4096:2
````
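To confirm the corrected variable actually reaches the process, a trivial sketch (`expandable_segments` is one of the documented allocator keys):

```shell
export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True
# The shell passes the string through untouched; PyTorch parses it at import time.
python3 -c 'import os; print(os.environ["PYTORCH_HIP_ALLOC_CONF"])'
# prints: expandable_segments:True
```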


### MPS (Apple Silicon)
Automatic detection and use

### CPU
Works but slow; set device to CPU in Settings

## Troubleshooting Installation

### Module not found errors
```bash
# Reinstall ACE-Step dependencies
cd .. # Go to main ACE-Step dir
pip install -e .
```
Comment on lines +79 to +84
⚠️ Potential issue | 🟠 Major

Troubleshooting path stops short of the project root for the editable install.

From acestep/ui/streamlit, cd .. lands at acestep/ui, so pip install -e . at Line 83 will not target the ACE-Step project root. The root is three levels up: use cd ../../.. (or pip install -e ../../..) so pip sees the project's packaging metadata.
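The directory arithmetic can be sanity-checked with a throwaway tree (directory names are hypothetical, mirroring the assumed repo layout):

```shell
# Recreate the assumed layout in a temp dir, then walk up to the root.
tmp=$(mktemp -d)
mkdir -p "$tmp/ACE-Step/acestep/ui/streamlit"
cd "$tmp/ACE-Step/acestep/ui/streamlit"

cd ../../..        # streamlit -> ui -> acestep -> repo root
basename "$(pwd)"  # prints: ACE-Step
```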



### Streamlit port already in use
```bash
streamlit run main.py --server.port 8502
```

### Clear cache and restart
```bash
streamlit cache clear
streamlit run main.py
```

## Docker Deployment (Optional)

Create `Dockerfile`:
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "main.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

Build and run:
```bash
docker build -t ace-studio .
docker run -p 8501:8501 -v $(pwd)/projects:/app/projects ace-studio
```
Comment on lines +99 to +114
⚠️ Potential issue | 🟠 Major

Docker snippet will fail: the parent acestep package is not copied into the image.

The Dockerfile copies only ace_studio_streamlit/, but the app imports from the parent ACE-Step package (acestep). The CMD will crash with ModuleNotFoundError on first run. The Docker instructions need to either include the parent project or document how to mount/install it.



## Environment Variables

Optional `.env` file:

```env
# GPU Configuration
DEVICE=cuda
OFFLOAD_CPU=1
FLASHATTN=1

# Model Configuration
DIT_MODEL=acestep-v15-turbo
LLM_MODEL=1.7B

# UI Configuration
MAX_BATCH_SIZE=4
DEFAULT_DURATION=120
DEFAULT_BPM=120

# Storage
PROJECTS_DIR=./projects
CACHE_DIR=./.cache
```
Comment on lines +116 to +138
⚠️ Potential issue | 🟡 Minor

.env file usage is undocumented — how is it loaded?

The section shows a .env template but does not mention the loader (e.g., python-dotenv / dotenv_values). Without that step, the variables have no effect and users will be confused. Either document the loader command or note that variables must be set in the shell environment directly.
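For the shell-environment route, a minimal sketch (assumes plain `KEY=value` lines like the template above; comment lines are fine, but quoting edge cases are not handled):

```shell
# Minimal sketch: source a .env file so every assignment is exported.
# The sample file below mirrors part of the template from the diff.
cat > .env <<'EOF'
# GPU Configuration
DEVICE=cuda
DEFAULT_BPM=120
EOF

set -a        # auto-export all variables assigned while sourcing
. ./.env
set +a

echo "$DEVICE $DEFAULT_BPM"   # prints: cuda 120
```

Run this in the same shell before `streamlit run main.py`; the Python-side alternative would be a loader such as python-dotenv's `load_dotenv()` called at app startup.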



## Next Steps

1. Go to **Dashboard** for quick start
2. Try **Generate** to create first song
3. Explore **Edit** features
4. Check **Settings** for optimal configuration

## Getting Help

- 📖 See README.md for usage guide
- 🐛 Report issues on GitHub
- 💬 Ask in Discord community
- 📚 Check ACE-Step documentation