[BUG] File not found in autotuner cache in multi-node setting on SLURM #5646

jubueche opened this issue Jun 12, 2024

Describe the bug
I am training an LLM using DeepSpeed on 12 nodes with 8 V100s per node. Training generally works well (thanks, DeepSpeed), but when I run multiple training runs in parallel, I run into trouble.
I am getting errors like the following:

Traceback (most recent call last):
  File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 473, in matmul_ext_update_autotune_table
    fp16_matmul._update_autotune_table()
  File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table
    TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel)
  File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table
    cache_manager.put(autotune_table)
  File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put
    os.rename(self.file_path + ".tmp", self.file_path)
FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/.cache/Fp16Matmul_2d_kernel.pickle'

I suspect this is because the cache directories are shared between the parallel runs, which creates a race condition: multiple processes write and then rename the same cache file at once.
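
Judging from the traceback, the writer in matmul_ext.py does a write-then-rename where the temporary file name is the same for every process. A minimal sketch of the failure mode as I understand it (the function and variable names here are mine, not DeepSpeed's; only the os.rename pattern is taken from matmul_ext.py line 98):

import os
import pickle

def put(file_path, autotune_table):
    tmp_path = file_path + ".tmp"       # identical path for every rank
    with open(tmp_path, "wb") as f:
        pickle.dump(autotune_table, f)
    os.rename(tmp_path, file_path)      # rank A renames first, the .tmp
                                        # vanishes, and rank B's rename raises
                                        # FileNotFoundError, as in the log

def put_per_process(file_path, autotune_table):
    # Hypothetical race-free variant: a per-process temporary name, so the
    # final rename (an atomic replace on POSIX) never targets a missing file.
    tmp_path = f"{file_path}.{os.getpid()}.tmp"
    with open(tmp_path, "wb") as f:
        pickle.dump(autotune_table, f)
    os.rename(tmp_path, file_path)
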
My TMPDIR, TRITON_CACHE_DIR, and TORCH_EXTENSIONS_DIR are set as follows:

export TMPDIR=$HOME/scratch/.cache
export TRITON_CACHE_DIR=$HOME/scratch/.cache
export TORCH_EXTENSIONS_DIR=$HOME/scratch/.cache/torch-extensions

To fix this, I tried to allocate one cache folder per run, like so:

export TMPDIR=$HOME/scratch/.cache
export TRITON_CACHE_DIR=$HOME/scratch/$SLURM_JOBID/.cache
export TORCH_EXTENSIONS_DIR=$HOME/scratch/$SLURM_JOBID/.cache/torch-extensions

mkdir -p $TRITON_CACHE_DIR
mkdir -p $TORCH_EXTENSIONS_DIR

but that also didn't work. Now I am getting this error:

Traceback (most recent call last):
  File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 473, in matmul_ext_update_autotune_table
    fp16_matmul._update_autotune_table()
  File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table
    TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel)
  File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table
    cache_manager.put(autotune_table)
  File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put
    os.rename(self.file_path + ".tmp", self.file_path)
FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle'
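
My guess is that the per-job folder doesn't help because all ranks within a single job still share the same $TRITON_CACHE_DIR, so they still race on one .tmp file. A per-rank cache directory might rule that out; an untested sketch (assuming srun sets SLURM_PROCID, and assuming the cache path is resolved when deepspeed is imported, so the variable has to be set first):

import os

# Untested per-rank cache layout: $HOME/scratch/<jobid>/<rank>/.cache
rank = os.environ.get("SLURM_PROCID", "0")
job = os.environ.get("SLURM_JOB_ID", "local")
cache_dir = os.path.join(os.environ["HOME"], "scratch", job, rank, ".cache")
os.makedirs(cache_dir, exist_ok=True)
os.environ["TRITON_CACHE_DIR"] = cache_dir

import deepspeed  # import only after TRITON_CACHE_DIR is set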

ds_report output

[2024-06-12 03:08:15,154] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-06-12 03:08:15,765] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-devel package with yum
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
No ROCm runtime is found, using ROCM_HOME='/opt/rocm-4.3.0'
 [WARNING]  NVIDIA Inference is only supported on Ampere and newer architectures
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
 [WARNING]  using untested triton version (2.3.0), only 1.0.0 is known to be compatible
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-devel package with yum
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
 [WARNING]  NVIDIA Inference is only supported on Ampere and newer architectures
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
 [WARNING]  using untested triton version (2.3.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/gpfs/u/home/ANFM/ANFMbchl/scratch/miniconda3/envs/torch-nightly/lib/python3.10/site-packages/torch']
torch version .................... 2.3.0+cu121
deepspeed install path ........... ['/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed']
deepspeed info ................... 0.14.3+488a823, 488a823, master
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.4, cuda 12.1
shared memory (/dev/shm) size .... 377.69 GB
jubueche (author) commented:

Possibly related: #5205

loadams self-assigned this Jun 12, 2024