
Incomplete instructions regarding 'Add New Bug Benchmark' #156

Open
jose opened this issue Dec 4, 2023 · 3 comments

jose commented Dec 4, 2023

Hi @nus-apr,

Although this page/document provides some initial instructions on how to add a new bug benchmark, it does not elaborate on the second and third points listed:

  • Benchmark Image: a Dockerfile describing how to construct the benchmark container
  • Benchmark metadata file: A JSON file containing an array of objects with the following features

(The last sentence ends abruptly.) It would be great if you could provide more info; for context, I sketch below what I was expecting.
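Roughly, I would have expected the metadata file to contain entries like the following (the field names are only my guess, since the document does not list them):

    [
      {
        "id": 1,
        "subject": "example-program",
        "bug_id": "bug-1",
        "language": "c"
      }
    ]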

Marti2203 (Collaborator) commented

Hi @jose,
We appreciate your interest in Cerberus! I have pushed commit a111e52 to the dev branch with extra information in the aforementioned document; your feedback is highly appreciated.

jose commented Dec 7, 2023

Thanks @Marti2203 for your prompt reply.

I believe there is a typo in that commit. Where it's written

Create a new file in app/core/drivers/benchmarks/ with the Benchmark name (i.e. NewBenchmark.py) that contains the following code:

should be

Create a new file in app/drivers/benchmarks/ with the Benchmark name (i.e. NewBenchmark.py) that contains the following code:

right?
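
For reference, the driver skeleton I ended up with looks roughly like the following (simplified; the base class and the naming convention come from my reading of the existing drivers in app/drivers/benchmarks/ and may not match the document exactly):

    import os

    from app.drivers.benchmarks.AbstractBenchmark import AbstractBenchmark


    class NewBenchmark(AbstractBenchmark):
        def __init__(self):
            # Derive the benchmark name from the file name, as the other
            # drivers appear to do ([:-3] strips the ".py" suffix).
            self.name = os.path.basename(__file__)[:-3].lower()
            super().__init__()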

Another question that's not yet clear to me: how does one create the meta-data.json file? Manually?

Marti2203 (Collaborator) commented

Hi,
Thank you for catching that incorrect path; I will fix it in a commit now. To create the meta-data.json file, you can construct it manually or write a script that generates it. For benchmarks that are more uniform in structure and metadata, I have used a script (the ITSP and Refactory benchmarks are examples).
