I am exploring some ideas for evaluating a configuration space across different instances and was wondering if there is a native way to support weighting the instances. The reason is that each instance type has a known, but different, cost of evaluation.

I am looking to bias towards evaluating on cheap instances and then "verify" on more expensive instances. This would be similar to the budgets in Successive Halving or Hyperband; however, I would like to associate arbitrary budgets with instances.
jsfreischuetz changed the title from "Weights for Instances?" to "[Question] Weights for Instances?" on May 23, 2024.
After assigning a known cost to each instance in the scenario, you can sort the instances to make sure that a configuration is evaluated on increasingly expensive instances.

tf_instances contains the instances given in the scenario. In abstract_facade.py, the tf_instances list is passed to get_instance_seed_keys_of_interest(), which determines the instances assigned to a configuration.
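For illustration, here is a minimal sketch of that setup, assuming the SMAC3 2.x API; the instance names, costs, configuration space, and target function are all hypothetical placeholders:

```python
from ConfigSpace import ConfigurationSpace
from smac import AlgorithmConfigurationFacade, Scenario

# Hypothetical, known evaluation cost per instance.
instance_costs = {"cheap": 1.0, "medium": 5.0, "expensive": 20.0}

# Sort from cheapest to most expensive so a configuration is
# evaluated on increasingly expensive instances.
instances = sorted(instance_costs, key=instance_costs.get)

scenario = Scenario(
    ConfigurationSpace({"x": (0.0, 10.0)}),  # placeholder search space
    instances=instances,
    # One feature vector per instance; the cost itself is a natural choice.
    instance_features={name: [cost] for name, cost in instance_costs.items()},
    n_trials=100,
)

def target_function(config, instance, seed) -> float:
    # Placeholder objective; replace with the real per-instance evaluation.
    return config["x"] ** 2

smac = AlgorithmConfigurationFacade(
    scenario,
    target_function,
    intensifier=AlgorithmConfigurationFacade.get_intensifier(
        scenario,
        # Upper bound on how often one configuration is evaluated; set it
        # to at least the number of instances if every configuration
        # should eventually be run on all of them.
        max_config_calls=len(instances),
    ),
)
incumbent = smac.optimize()
```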
Otherwise, derive a distribution from the known instance costs and replace the instances in the tf_instances list accordingly.
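As a sketch of what such a distribution could look like, you could sample the instance list with probabilities inversely proportional to cost, so cheap instances appear more often; the costs here are hypothetical, and how SMAC treats duplicate entries in the instance list is something to verify:

```python
import numpy as np

instance_costs = {"cheap": 1.0, "medium": 5.0, "expensive": 20.0}
names = list(instance_costs)

# Sampling probabilities inversely proportional to the known cost.
weights = 1.0 / np.array([instance_costs[n] for n in names])
probabilities = weights / weights.sum()

# Draw a cost-biased instance list to replace the original one;
# cheap instances appear with higher frequency.
rng = np.random.default_rng(seed=42)
biased_instances = list(rng.choice(names, size=20, p=probabilities))
```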
Note that max_config_calls caps how often a single configuration is evaluated (the intensifier argument in the sketch above), so it needs to be adapted to the evaluation needs.
Is there any way to make this act like a multi-task optimization problem (https://proceedings.neurips.cc/paper_files/paper/2013/file/f33ba15effa5c10e873bf3842afb46a6-Paper.pdf), where the optimization would result in one configuration that performs well across all of the instances/tasks/objectives? This would require the acquisition function to be aware of the cost of evaluating a sample on an instance and to prioritize instances with higher information gain per cost, potentially evaluating each configuration on only a single instance.

I can trivially construct a multi-objective optimization, but that requires evaluating every task/objective for every configuration. I was hoping for an alternative where instances or another feature could be used to minimize the number of samples and, hopefully, transfer some information between the tasks/instances.
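To sketch the kind of prioritization I mean (independent of SMAC's internals, in the spirit of expected improvement per second from Snoek et al., 2012), the acquisition value of each (configuration, instance) pair could simply be divided by the known instance cost; all values below are hypothetical:

```python
import numpy as np

# Hypothetical acquisition values: rows are candidate configurations,
# columns are instances.
acquisition = np.array([
    [0.8, 0.6],
    [0.5, 0.9],
])
# Known per-instance evaluation costs.
instance_cost = np.array([1.0, 20.0])

# Cost-aware score: acquisition value per unit of evaluation cost, so
# cheap instances are preferred unless an expensive one promises
# proportionally more improvement.
score = acquisition / instance_cost
config_idx, instance_idx = np.unravel_index(np.argmax(score), score.shape)
print(f"Evaluate configuration {config_idx} on instance {instance_idx}")
```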