Add llama benchmark run to GHA (#1398)
* Add llama benchmark run to GitHub Actions

Summary:
Added:
1. Output of JSON results following https://github.com/pytorch/pytorch/wiki/How-to-integrate-with-PyTorch-OSS-benchmark-database (see the record sketch after this list)
2. A benchmark run for the llama model
3. Upload of the results to S3
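For reference, a single record in the emitted JSON roughly follows the v3 schema described in the wiki page linked above. The sketch below is illustrative only: the metric name, extra_info fields, and numeric values are assumptions, not values produced by this commit.

# Sketch of one benchmark record in roughly the OSS benchmark database v3 shape.
# All field values below are illustrative assumptions, not real measurements.
import json

record = {
    "benchmark": {
        "name": "TorchAO benchmark",
        "mode": "inference",
        "extra_info": {"device": "cuda", "arch": "A100"},  # hypothetical fields
    },
    "model": {
        "name": "meta-llama/Llama-2-7b-chat-hf",
        "type": "model",
    },
    "metric": {
        "name": "tokens_per_second",   # hypothetical metric name
        "benchmark_values": [105.7],   # example number only
    },
}

# The benchmark script is expected to write a list of such records
# to the path given by --output_json_path.
with open("benchmark-results.json", "w") as f:
    json.dump([record], f, indent=2)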

Test Plan:
CI/dashboard

Reviewers:

Subscribers:

Tasks:

Tags:

* Update dashboard_perf_test.yml to run benchmark

* Fix script path

* Run ruff and ufmt

* Fix model path

* mkdir

* Ready to land

---------

Co-authored-by: Jerry Zhang <[email protected]>
huydhn and jerryzh168 authored Dec 10, 2024
1 parent a6f8676 commit f258d82
Showing 2 changed files with 574 additions and 183 deletions.
46 changes: 46 additions & 0 deletions .github/workflows/dashboard_perf_test.yml
@@ -0,0 +1,46 @@
name: A100-perf-nightly

on:
  workflow_dispatch:
  schedule:
    - cron: 0 7 * * 0-6

jobs:
  benchmark:
    runs-on: linux.aws.a100
    strategy:
      matrix:
        torch-spec:
          - '--pre torch --index-url https://download.pytorch.org/whl/nightly/cu124'
    steps:
      - uses: actions/checkout@v3

      - name: Setup miniconda
        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
        with:
          python-version: "3.9"

      - name: Run benchmark
        shell: bash
        run: |
          set -eux
          ${CONDA_RUN} python -m pip install --upgrade pip
          ${CONDA_RUN} pip install ${{ matrix.torch-spec }}
          ${CONDA_RUN} pip install -r dev-requirements.txt
          ${CONDA_RUN} pip install .
          export CHECKPOINT_PATH=checkpoints
          export MODEL_REPO=meta-llama/Llama-2-7b-chat-hf
          ${CONDA_RUN} python scripts/download.py --repo_id meta-llama/Llama-2-7b-chat-hf --hf_token ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
          ${CONDA_RUN} python scripts/convert_hf_checkpoint.py --checkpoint_dir "${CHECKPOINT_PATH}/${MODEL_REPO}"
          mkdir -p ${{ runner.temp }}/benchmark-results
          ${CONDA_RUN} python torchao/_models/llama/generate.py --checkpoint_path "${CHECKPOINT_PATH}/${MODEL_REPO}/model.pth" --compile --output_json_path ${{ runner.temp }}/benchmark-results/benchmark-results.json
      - name: Upload the benchmark results to OSS benchmark database for the dashboard
        uses: pytorch/test-infra/.github/actions/upload-benchmark-results@main
        with:
          benchmark-results-dir: ${{ runner.temp }}/benchmark-results
          dry-run: false
          schema-version: v3
          github-token: ${{ secrets.GITHUB_TOKEN }}
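The final step points the upload-benchmark-results action at the results directory. Before relying on the dashboard, a quick local sanity check of the emitted file can help; here is a minimal sketch, assuming the file contains a list of records shaped like the example in the Summary:

# Quick sanity check of the benchmark output before upload (sketch; the path and
# record shape are assumptions based on the workflow above).
import json

with open("benchmark-results.json") as f:
    records = json.load(f)

for rec in records:
    metric = rec.get("metric", {})
    print(rec.get("model", {}).get("name"), metric.get("name"), metric.get("benchmark_values"))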