
[Question] Weights for Instances? #1109

Open
jsfreischuetz opened this issue May 23, 2024 · 2 comments
@jsfreischuetz

I am exploring some ideas for evaluating a configuration space across different instances and was wondering whether there is a native way to support weighting the instances. The reason is that each instance type has a known, and different, cost of evaluation.
I am looking to bias towards evaluating on cheap instances and then "verify" on more expensive ones. This is a similar idea to the budgets in Successive Halving or Hyperband; however, I would like to associate arbitrary budgets with instances.

@jsfreischuetz jsfreischuetz changed the title Weights for Instances? [Question] Weights for Instances? May 23, 2024
@lhennig0103
Collaborator

lhennig0103 commented Jul 2, 2024

After assigning a known cost to the instances in the scenario, you can sort them so that a configuration is evaluated on increasingly expensive instances.
tf_instances contains the instances given in the scenario. In abstract_facade.py, the tf_instances list is passed to get_instance_seed_keys_of_interest(), which determines the instances assigned to a configuration.
Alternatively, derive a distribution from the known instance costs and replace the instances in the tf_instances list accordingly.
Note that max_config_calls caps how often a configuration is evaluated, so it needs to be adapted to your evaluation needs.
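
For reference, a minimal sketch of this workaround, assuming SMAC3 2.x's `Scenario`/facade API. The instance names, costs, and toy target function below are hypothetical; a real setup would plug in the actual instances and objective.

```python
from ConfigSpace import ConfigurationSpace, Float
from smac import HyperparameterOptimizationFacade, Scenario

# Hypothetical per-instance evaluation costs.
instance_costs = {"small-vm": 1.0, "medium-vm": 4.0, "large-vm": 16.0}

# Sort instances from cheapest to most expensive before handing them to the scenario.
sorted_instances = sorted(instance_costs, key=instance_costs.get)

cs = ConfigurationSpace()
cs.add_hyperparameter(Float("x", (0.0, 1.0)))

scenario = Scenario(
    cs,
    instances=sorted_instances,
    # Optionally expose the known cost as an instance feature.
    instance_features={name: [cost] for name, cost in instance_costs.items()},
    n_trials=200,
)

def target(config, instance: str, seed: int = 0) -> float:
    # Placeholder objective; a real target function would run `config`
    # on the given instance and return its cost/loss.
    return float(config["x"]) ** 2

# max_config_calls caps how many (instance, seed) pairs a single
# configuration is evaluated on; here it is set to cover all instances.
intensifier = HyperparameterOptimizationFacade.get_intensifier(
    scenario, max_config_calls=len(sorted_instances)
)

smac = HyperparameterOptimizationFacade(scenario, target, intensifier=intensifier)
incumbent = smac.optimize()
```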

@jsfreischuetz
Author

jsfreischuetz commented Jul 16, 2024

@lhennig0103 @benjamc

Is there any way to make this behave like a multi-task optimization problem (https://proceedings.neurips.cc/paper_files/paper/2013/file/f33ba15effa5c10e873bf3842afb46a6-Paper.pdf), where the optimization results in one configuration that performs well across all of the instances/tasks/objectives? This would require the acquisition function to be aware of the cost of evaluating a sample on an instance and to prioritize instances with higher information gain per cost, potentially evaluating each configuration on only a single instance.

I can trivially construct a multi-objective optimization, but this requires evaluating every task/objective for every configuration. I was hoping for an alternative where instances, or another feature, could be used to minimize the number of samples and hopefully transfer some information between the tasks/instances.
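
For context, a minimal sketch of that multi-objective formulation, assuming SMAC3 2.x's `objectives` API. The task names and toy target function are hypothetical; the point is that every task must be evaluated for every configuration, which is the overhead described above.

```python
from ConfigSpace import ConfigurationSpace, Float
from smac import HyperparameterOptimizationFacade, Scenario

tasks = ["task_a", "task_b", "task_c"]  # hypothetical tasks/instances

cs = ConfigurationSpace()
cs.add_hyperparameter(Float("x", (0.0, 1.0)))

# Each task becomes its own objective.
scenario = Scenario(cs, objectives=tasks, n_trials=100)

def target(config, seed: int = 0) -> dict[str, float]:
    # Every task is evaluated for every configuration.
    x = float(config["x"])
    return {task: (x - 0.1 * i) ** 2 for i, task in enumerate(tasks)}

smac = HyperparameterOptimizationFacade(
    scenario,
    target,
    # Aggregates the per-task costs into a single value for the model.
    multi_objective_algorithm=HyperparameterOptimizationFacade.get_multi_objective_algorithm(scenario),
)
# With multiple objectives, optimize() returns a list of incumbent configurations.
incumbents = smac.optimize()
```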
