Add more devices to `torchbenchmark.util.experiment.instantiator.list_devices()` (#2545)

Summary:
from #2543 (comment)

This change allows all userbenchmarks to run on whichever accelerator is available (cuda, mps, npu, ...), not just CUDA.

## Userbenchmark - test_bench - BERT_pytorch

cuda:

```
$ python run_benchmark.py test_bench --models BERT_pytorch --device cuda
Running TorchBenchModelConfig(name='BERT_pytorch', test='eval', device='cuda', batch_size=None, extra_args=[], extra_env=None, output_dir=None) ... [done]
{
    "name": "test_bench",
    "environ": {
        "pytorch_git_version": "ac47a2d9714278889923ddd40e4210d242d8d4ee",
        "pytorch_version": "2.6.0.dev20241121+cu124",
        "device": "Tesla T4"
    },
    "metrics": {
        "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=[], metric=latencies": 122.69141,
        "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=[], metric=cpu_peak_mem": 0.6962890625,
        "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=[], metric=gpu_peak_mem": 1.573486328125
    }
}
```

mps:

```
$ python run_benchmark.py test_bench --models BERT_pytorch --device mps
Running TorchBenchModelConfig(name='BERT_pytorch', test='eval', device='mps', batch_size=None, extra_args=[], extra_env=None, output_dir=None) ... [done]
{
    "name": "test_bench",
    "environ": {
        "pytorch_git_version": "dd2e6d61409aac22198ec771560a38adb0018ba2",
        "pytorch_version": "2.6.0.dev20241120"
    },
    "metrics": {
        "model=BERT_pytorch, test=eval, device=mps, bs=None, extra_args=[], metric=latencies": 133.299,
        "model=BERT_pytorch, test=eval, device=mps, bs=None, extra_args=[], metric=cpu_peak_mem": 19.832832,
        "model=BERT_pytorch, test=eval, device=mps, bs=None, extra_args=[], metric=gpu_peak_mem": "failed"
    }
}
```

ascend npu:

```
$ python run_benchmark.py test_bench --models BERT_pytorch --device npu
Running TorchBenchModelConfig(name='BERT_pytorch', test='eval', device='npu', batch_size=None, extra_args=[], extra_env=None, output_dir=None) ... [done]
{
    "name": "test_bench",
    "environ": {
        "pytorch_git_version": "64141411e0de61b61857e216ae7a8766f4f5969b",
        "pytorch_version": "2.6.0.dev20240923"
    },
    "metrics": {
        "model=BERT_pytorch, test=eval, device=npu, bs=None, extra_args=[], metric=latencies": 21.688104,
        "model=BERT_pytorch, test=eval, device=npu, bs=None, extra_args=[], metric=cpu_peak_mem": 47.261696,
        "model=BERT_pytorch, test=eval, device=npu, bs=None, extra_args=[], metric=gpu_peak_mem": "failed"
    }
}
```
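The metric keys in the JSON outputs above pack the whole run configuration into a single string. A small helper (illustrative only, not part of TorchBench) can split such a key back into fields:

```python
def parse_metric_key(key: str) -> dict:
    # Split a test_bench metric key such as
    # "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=[], metric=latencies"
    # into a field -> value dict. All values stay as strings.
    return dict(part.split("=", 1) for part in key.split(", "))


key = "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=[], metric=latencies"
fields = parse_metric_key(key)
```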

cc: xuzhao9 jgong5 FFFrog

Pull Request resolved: #2545

Reviewed By: xuzhao9

Differential Revision: D66457386

Pulled By: FindHao

fbshipit-source-id: 0f3a8aba97a2cb2efc3f77f01bcd28cfc7182e0b
shink authored and facebook-github-bot committed Nov 25, 2024
1 parent 0fd86c0 commit 820f213
Showing 2 changed files with 4 additions and 3 deletions.
5 changes: 3 additions & 2 deletions torchbenchmark/util/experiment/instantiator.py
```diff
@@ -122,8 +122,9 @@ def list_devices() -> List[str]:
     devices = ["cpu"]
     import torch

-    if torch.cuda.is_available():
-        devices.append("cuda")
+    device_type = torch._C._get_accelerator().type
+    if device_type != "cpu":
+        devices.append(device_type)
     return devices
```
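The new logic delegates device discovery to PyTorch's accelerator abstraction instead of checking CUDA specifically. A standalone sketch of the patched behavior, with the accelerator type passed in as a hypothetical parameter so it runs without a GPU (the real function reads it from `torch._C._get_accelerator().type`):

```python
from typing import List


def list_devices(accelerator_type: str) -> List[str]:
    # Mirrors the patched list_devices(): "cpu" is always available, and any
    # non-CPU accelerator PyTorch reports (cuda, mps, npu, ...) is appended.
    # accelerator_type is a stand-in for torch._C._get_accelerator().type.
    devices = ["cpu"]
    if accelerator_type != "cpu":
        devices.append(accelerator_type)
    return devices
```

One conditional now covers every backend, so new accelerator types need no further changes here.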
2 changes: 1 addition & 1 deletion torchbenchmark/util/experiment/metrics.py
```diff
@@ -82,7 +82,7 @@ def get_peak_memory(
         raise ValueError(
             f"Expected metrics_needed to be non-empty, get: {metrics_needed}"
         )
-    if metrics_gpu_backend in ["dcgm", "nvml"]:
+    if device == "cuda" and metrics_gpu_backend in ["dcgm", "nvml"]:
         from torchbenchmark._components.model_analyzer.TorchBenchAnalyzer import (
             ModelAnalyzer,
         )
```
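The added `device == "cuda"` guard matters because dcgm and nvml are NVIDIA-specific monitoring backends; on mps or npu they cannot initialize, which is why `gpu_peak_mem` reports "failed" on those devices above. A minimal sketch of the selection rule (hypothetical helper and return values, not the real module's API):

```python
def pick_memory_backend(device: str, metrics_gpu_backend: str) -> str:
    # NVIDIA-only backends (dcgm, nvml) are honored only on CUDA devices;
    # every other device falls back to a generic backend. Illustrative only.
    if device == "cuda" and metrics_gpu_backend in ["dcgm", "nvml"]:
        return metrics_gpu_backend
    return "default"
```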
