
Commit 116b05f
[CI] Compile with pytorch 2.4.0.dev20240514
1 parent: da11d1b

File tree: 3 files changed, +4 −4 lines

.github/workflows/publish.yml (1 addition, 1 deletion)

@@ -44,7 +44,7 @@ jobs:
       # manylinux docker image, but I haven't figured out how to install CUDA on manylinux.
       os: [ubuntu-20.04]
       python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
-      torch-version: ['2.0.1', '2.1.2', '2.2.2', '2.3.1', '2.4.0.dev20240512']
+      torch-version: ['2.0.1', '2.1.2', '2.2.2', '2.3.1', '2.4.0.dev20240514']
       cuda-version: ['11.8.0', '12.2.2']
       # We need separate wheels that either uses C++11 ABI (-D_GLIBCXX_USE_CXX11_ABI) or not.
       # Pytorch wheels currently don't use it, but nvcr images have Pytorch compiled with C++11 ABI.
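The matrix above fans out into one wheel-build job per combination of its axes. A minimal sketch of that fan-out (assumption: no `exclude:` entries elsewhere in the workflow trim the matrix):

```python
# Sketch of the GitHub Actions matrix expansion for the publish workflow:
# one job per (os, python, torch, cuda) combination.
from itertools import product

os_list = ["ubuntu-20.04"]
python_versions = ["3.8", "3.9", "3.10", "3.11", "3.12"]
torch_versions = ["2.0.1", "2.1.2", "2.2.2", "2.3.1", "2.4.0.dev20240514"]
cuda_versions = ["11.8.0", "12.2.2"]

# Cartesian product of all axes, exactly as the Actions runner expands it.
jobs = list(product(os_list, python_versions, torch_versions, cuda_versions))
print(len(jobs))  # 1 * 5 * 5 * 2 = 50 jobs
```

Bumping a single entry in `torch-version`, as this commit does, therefore changes the target of ten of those fifty jobs (5 Python versions × 2 CUDA versions).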

flash_attn/__init__.py (1 addition, 1 deletion)

@@ -1,4 +1,4 @@
-__version__ = "2.6.0"
+__version__ = "2.6.0.post1"

 from flash_attn.flash_attn_interface import (
     flash_attn_func,
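The `.post1` suffix marks a PEP 440 post-release: it orders strictly after `2.6.0` but before `2.6.1`, so pip's resolver picks it up as the newest build of the same release. A minimal sketch of that ordering (assumption: a simplified version key, not the full PEP 440 grammar handled by `packaging.version`):

```python
# Simplified PEP 440-style sort key for versions of the form
# "X.Y.Z" or "X.Y.Z.postN": the release tuple sorts first, and a
# post-release number breaks ties after it.
def sort_key(version: str) -> tuple:
    parts = version.split(".")
    post = 0
    if parts[-1].startswith("post"):
        post = int(parts[-1][len("post"):])
        parts = parts[:-1]
    release = tuple(int(p) for p in parts)
    return (release, post)

# "2.6.0.post1" sorts after "2.6.0" but before "2.6.1".
assert sort_key("2.6.0") < sort_key("2.6.0.post1") < sort_key("2.6.1")
```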

training/Dockerfile (2 additions, 2 deletions)

@@ -85,7 +85,7 @@ RUN pip install transformers==4.25.1 datasets==2.8.0 pytorch-lightning==1.8.6 tr
 RUN pip install git+https://github.com/mlcommons/[email protected]

 # Install FlashAttention
-RUN pip install flash-attn==2.6.0
+RUN pip install flash-attn==2.6.0.post1

 # Install CUDA extensions for fused dense
-RUN pip install git+https://github.com/HazyResearch/flash-attention@v2.6.0#subdirectory=csrc/fused_dense_lib
+RUN pip install git+https://github.com/HazyResearch/flash-attention@v2.6.0.post1#subdirectory=csrc/fused_dense_lib
