Changes from all commits
Commits (89)
f6ee35b  Add MO facade with todos (Jan 9, 2023)
2b97fca  Add NoAggregatuonStrategy (Jan 9, 2023)
556ad37  Update aggregation strategy (Jan 9, 2023)
672389f  Limit value to bounds region (Jan 9, 2023)
09160b7  Factor out creating a unique list (Jan 9, 2023)
1359f19  More debug logging (Jan 9, 2023)
733f94d  Factor out sorting of costs (Jan 9, 2023)
3e015c0  Better docstring (Jan 9, 2023)
171958b  Add MO acq maximizer (Jan 9, 2023)
0fe8e7d  Update acq optimizer (Jan 9, 2023)
0059155  Stop local search after max steps is reached (Jan 9, 2023)
5b0a1bf  Abstract away population trimming and pareto front calculation (Jan 9, 2023)
a0bed50  Add MO intensifier draft (Jan 9, 2023)
325cb5c  Add comment (Jan 10, 2023)
227ceb7  Add todos (Jan 10, 2023)
c320f04  Pass rh's incumbents to acquisition function (Jan 10, 2023)
67eefec  Add incumbents data structure in runhistory (Jan 10, 2023)
b297a98  Add property for incumbents (Jan 10, 2023)
6042bed  Add EHVI acq fun (Jan 10, 2023)
a96172d  Update PHVI (Jan 10, 2023)
75a2077  Add ACLib runner draft (Jan 10, 2023)
4b2d101  Merge branch 'development' into mosmac (jeroenrook, Feb 27, 2023)
a5902d5  Native objective support (jeroenrook, Mar 1, 2023)
5e7d880  Fix typo (jeroenrook, Mar 1, 2023)
3cdf96a  Initial modifications for mo facade (jeroenrook, Mar 1, 2023)
087d7c8  Make the HV based acquisition functions work (jeroenrook, Mar 1, 2023)
1b20106  Logic fix (jeroenrook, Mar 1, 2023)
a057733  AClib runner (jeroenrook, Mar 3, 2023)
6c0bcd1  AClib runner fixes (jeroenrook, Mar 3, 2023)
71409ce  MO utils initial expansion (jeroenrook, Mar 3, 2023)
0587938  MO intensifier (jeroenrook, Mar 3, 2023)
d05fc42  Merge branch 'development' into mosmac (jeroenrook, Mar 3, 2023)
bd31d32  Expanded debugging message (jeroenrook, Mar 20, 2023)
4322cfb  Allow saving the intensifier when no incumbent is chosen yet. (jeroenrook, Mar 20, 2023)
6113c18  Bugfix for passing checks when MO model with features (jeroenrook, Mar 20, 2023)
8cd499f  Added support to retrain the surrogate model and acquisition loop in … (jeroenrook, Mar 22, 2023)
a26b7c9  Added a minimal number of configuration that need to be yielded befor… (jeroenrook, Mar 28, 2023)
37ae763  Remove sleep call used for testing (jeroenrook, Mar 28, 2023)
9b85222  Only compute Pareto fronts on the same subset of isb_keys. (jeroenrook, Mar 28, 2023)
8c114c0  Compute actual isb differences (jeroenrook, Apr 3, 2023)
2bc7383  Aclib runner (jeroenrook, Apr 3, 2023)
6ddc94c  Reset counter when retrain is triggered (jeroenrook, Apr 3, 2023)
24a749f  Comparison on one config from the incumbent (jeroenrook, Apr 12, 2023)
944425b  Make dask runner work (jeroenrook, Apr 13, 2023)
8496461  Added different intermediate update methods that can be mixed with th… (jeroenrook, Apr 20, 2023)
da0bb6b  Make normalization of costs in the mo setting a choice (jeroenrook, Apr 26, 2023)
2ca601c  In the native MO setting the EPM are trained by using the costs retri… (jeroenrook, Apr 26, 2023)
603182a  Generic HVI class (jeroenrook, Apr 27, 2023)
a109f48  Decomposed the intensifier decision logic and created mixins to easil… (jeroenrook, May 2, 2023)
17ce0a3  Changed the intensifier (jeroenrook, May 3, 2023)
fd317b0  Commit everythin (jeroenrook, May 3, 2023)
b50db2b  csvs (jeroenrook, May 4, 2023)
38b22d4  Merge remote-tracking branch 'origin/main' into mosmac (jeroenrook, May 22, 2023)
69d466b  README change (jeroenrook, Nov 15, 2023)
fdd33f6  README change (jeroenrook, Nov 15, 2023)
bf2a2f0  Even bigger push (jeroenrook, Mar 3, 2025)
1d71cf4  Merge remote-tracking branch 'origin/development' into mosmac-merge (jeroenrook, Mar 27, 2025)
7d7290d  Remove EHVI acquisition function (jeroenrook, Mar 27, 2025)
aec7609  README (jeroenrook, Mar 27, 2025)
cb9eab6  Fix failing tests. Disentangle normalisation and aggregation (jeroenrook, Mar 27, 2025)
373dc08  Fix failing pytests (jeroenrook, Mar 27, 2025)
85f822a  Merge remote-tracking branch 'automl/development' into mosmac-merge (Oct 6, 2025)
cc2762d  resolving tests (Oct 7, 2025)
c6c4b8b  intensifier fix for MF. Passes tests (Oct 7, 2025)
f390582  fix merging retrain. test passes (Oct 7, 2025)
04643e5  format: ruff (benjamc, Oct 7, 2025)
dd2ff58  build(setup.py): add dependency pygmo (benjamc, Oct 7, 2025)
5b0c318  style: pydocstyle, flake (benjamc, Oct 7, 2025)
2fab658  readd paretofront (benjamc, Oct 7, 2025)
31eda8c  refactor(pareto_front.py): delete illegal functions (benjamc, Oct 7, 2025)
cf824ae  fix some mypy (benjamc, Oct 7, 2025)
0b7e947  style: mypy (benjamc, Oct 8, 2025)
a159eb4  refactor(expected_hypervolume): rm duplicate function (benjamc, Oct 8, 2025)
51418df  refactor(expected_hypervolume): delete proxy method which was a comme… (benjamc, Oct 8, 2025)
e0e59a9  refactor(expected_hypervolume): delete ehvi method which was a commen… (benjamc, Oct 8, 2025)
538f4df  rename hypervolume.py (benjamc, Oct 8, 2025)
9f84c6d  style(hypervolume.py): fix mypy (benjamc, Oct 8, 2025)
3f06d61  style: pre-commit fix (benjamc, Oct 8, 2025)
41742c9  refactor crowding distance: optional normalization (Oct 8, 2025)
3168346  Remove develop comparisons (Oct 17, 2025)
b7b635f  Add PHVI to init (Oct 17, 2025)
2b0b4a2  PHVI test (Oct 17, 2025)
19883a5  Fix: MOLocalSearch (Oct 17, 2025)
2dee6e6  Test: MOLocalSearch (Oct 17, 2025)
b0b222d  Test: MOFacade + fixes to make facade work for MO (Nov 5, 2025)
1ab8fc2  Test: Intensifier Mixins (Nov 5, 2025)
ed5cc03  Test: Multi-objective (Nov 5, 2025)
aa43f04  Remove TODOs (Nov 5, 2025)
47e0024  Precommit (Nov 5, 2025)
12 changes: 7 additions & 5 deletions examples/2_multi_fidelity/1_mlp_epochs.py
@@ -79,7 +79,7 @@ def configspace(self) -> ConfigurationSpace:

return cs

- def train(self, config: Configuration, seed: int = 0, budget: int = 25) -> float:
+ def train(self, config: Configuration, seed: int = 0, instance: str = "0", budget: int = 25) -> dict[str, float]:
# For deactivated parameters (by virtue of the conditions),
# the configuration stores None-values.
# This is not accepted by the MLP, so we replace them with placeholder values.
@@ -105,7 +105,7 @@ def train(self, config: Configuration, seed: int = 0, budget: int = 25) -> float
cv = StratifiedKFold(n_splits=5, random_state=seed, shuffle=True) # to make CV splits consistent
score = cross_val_score(classifier, dataset.data, dataset.target, cv=cv, error_score="raise")

- return 1 - np.mean(score)
+ return {"accuracy": 1 - np.mean(score)}


def plot_trajectory(facades: list[AbstractFacade]) -> None:
@@ -146,9 +146,11 @@ def plot_trajectory(facades: list[AbstractFacade]) -> None:
mlp.configspace,
walltime_limit=60, # After 60 seconds, we stop the hyperparameter optimization
n_trials=500, # Evaluate max 500 different trials
- min_budget=1,  # Train the MLP using a hyperparameter configuration for at least 5 epochs
- max_budget=25,  # Train the MLP using a hyperparameter configuration for at most 25 epochs
- n_workers=8,
+ instances=[str(i) for i in range(10)],
+ objectives="accuracy",
+ # min_budget=1,  # Train the MLP using a hyperparameter configuration for at least 5 epochs
+ # max_budget=25,  # Train the MLP using a hyperparameter configuration for at most 25 epochs
+ n_workers=4,
[Inline review comment — Collaborator] test example
[Reply — Contributor, author] Works
)

# We want to run five random configurations before starting the optimization.
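The change above switches the example's target function to the multi-objective/multi-instance contract: train() now receives the sampled instance and returns a dict keyed by objective name, matching objectives="accuracy" and instances=[...] in the Scenario. A minimal sketch of that contract (the training logic here is a placeholder, not the example's real cross-validation):

from ConfigSpace import Configuration


def train(config: Configuration, seed: int = 0, instance: str = "0", budget: int = 25) -> dict[str, float]:
    # SMAC passes the sampled instance, seed, and budget for each trial.
    error = 0.1  # placeholder for the real validation error on this instance
    return {"accuracy": error}  # key must match Scenario(objectives="accuracy")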
172 changes: 172 additions & 0 deletions examples/3_multi_objective/3_phvi.py
@@ -0,0 +1,172 @@
"""ParEGO
# Flags: doc-Runnable

An example of how to use multi-objective optimization with ParEGO. Both accuracy and run-time are going to be
optimized on the digits dataset using an MLP, and the configurations are shown in a plot, highlighting the best ones in
a Pareto front. The red cross indicates the best configuration selected by SMAC.

In the optimization, SMAC evaluates the configurations on two different seeds. Therefore, the plot shows the
mean accuracy and run-time of each configuration.
"""
from __future__ import annotations

import time
import warnings

import matplotlib.pyplot as plt
import numpy as np
from ConfigSpace import (
Categorical,
Configuration,
ConfigurationSpace,
EqualsCondition,
Float,
InCondition,
Integer,
)
from sklearn.datasets import load_digits
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

from smac import HyperparameterOptimizationFacade as HPOFacade
from smac import Scenario
from smac.facade.abstract_facade import AbstractFacade  # used in plot_pareto's annotation below
from smac.facade.multi_objective_facade import MultiObjectiveFacade as MOfacade

__copyright__ = "Copyright 2025, Leibniz University Hanover, Institute of AI"
__license__ = "3-clause BSD"


digits = load_digits()


class MLP:
@property
def configspace(self) -> ConfigurationSpace:
cs = ConfigurationSpace()

n_layer = Integer("n_layer", (1, 5), default=1)
n_neurons = Integer("n_neurons", (8, 256), log=True, default=10)
activation = Categorical("activation", ["logistic", "tanh", "relu"], default="tanh")
solver = Categorical("solver", ["lbfgs", "sgd", "adam"], default="adam")
batch_size = Integer("batch_size", (30, 300), default=200)
learning_rate = Categorical("learning_rate", ["constant", "invscaling", "adaptive"], default="constant")
learning_rate_init = Float("learning_rate_init", (0.0001, 1.0), default=0.001, log=True)

cs.add([n_layer, n_neurons, activation, solver, batch_size, learning_rate, learning_rate_init])

use_lr = EqualsCondition(child=learning_rate, parent=solver, value="sgd")
use_lr_init = InCondition(child=learning_rate_init, parent=solver, values=["sgd", "adam"])
use_batch_size = InCondition(child=batch_size, parent=solver, values=["sgd", "adam"])

# We can also add multiple conditions on hyperparameters at once:
cs.add([use_lr, use_batch_size, use_lr_init])

return cs

def train(self, config: Configuration, seed: int = 0, budget: int = 10) -> dict[str, float]:
lr = config.get("learning_rate", "constant")
lr_init = config.get("learning_rate_init", 0.001)
batch_size = config.get("batch_size", 200)

start_time = time.time()

with warnings.catch_warnings():
warnings.filterwarnings("ignore")

classifier = MLPClassifier(
hidden_layer_sizes=[config["n_neurons"]] * config["n_layer"],
solver=config["solver"],
batch_size=batch_size,
activation=config["activation"],
learning_rate=lr,
learning_rate_init=lr_init,
max_iter=int(np.ceil(budget)),
random_state=seed,
)

# Returns the 5-fold cross validation accuracy
cv = StratifiedKFold(n_splits=5, random_state=seed, shuffle=True) # to make CV splits consistent
score = cross_val_score(classifier, digits.data, digits.target, cv=cv, error_score="raise")

return {
"1 - accuracy": 1 - np.mean(score),
"time": time.time() - start_time,
}


def plot_pareto(smac: AbstractFacade, incumbents: list[Configuration]) -> None:
"""Plots configurations from SMAC and highlights the best configurations in a Pareto front."""
average_costs = []
average_pareto_costs = []
for config in smac.runhistory.get_configs():
# Since we use multiple seeds, we have to average them to get only one cost value pair for each configuration
average_cost = smac.runhistory.average_cost(config)

if config in incumbents:
average_pareto_costs += [average_cost]
else:
average_costs += [average_cost]

# Let's work with a numpy array
costs = np.vstack(average_costs)
pareto_costs = np.vstack(average_pareto_costs)
pareto_costs = pareto_costs[pareto_costs[:, 0].argsort()] # Sort them

costs_x, costs_y = costs[:, 0], costs[:, 1]
pareto_costs_x, pareto_costs_y = pareto_costs[:, 0], pareto_costs[:, 1]

plt.scatter(costs_x, costs_y, marker="x", label="Configuration")
plt.scatter(pareto_costs_x, pareto_costs_y, marker="x", c="r", label="Incumbent")
plt.step(
[pareto_costs_x[0]] + pareto_costs_x.tolist() + [np.max(costs_x)], # We add bounds
[np.max(costs_y)] + pareto_costs_y.tolist() + [np.min(pareto_costs_y)], # We add bounds
where="post",
linestyle=":",
)

plt.title("Pareto-Front")
plt.xlabel(smac.scenario.objectives[0])
plt.ylabel(smac.scenario.objectives[1])
plt.legend()
plt.show()


if __name__ == "__main__":
mlp = MLP()
objectives = ["1 - accuracy", "time"]

# Define our environment variables
scenario = Scenario(
mlp.configspace,
objectives=objectives,
walltime_limit=30, # After 30 seconds, we stop the hyperparameter optimization
n_trials=200, # Evaluate max 200 different trials
n_workers=1,
)

# Optionally, single components (e.g., an initial design with five random
# configurations, or a different intensifier) could be customized and passed to the facade:
# initial_design = MOfacade.get_initial_design(scenario, n_configs=5)
# multi_objective_algorithm = ParEGO(scenario)
# intensifier = HPOFacade.get_intensifier(scenario, max_config_calls=2)

# Create our SMAC object and pass the scenario and the train method
smac = MOfacade(
scenario,
mlp.train,
overwrite=True,
)

# Let's optimize
incumbents = smac.optimize()

# Get cost of default configuration
default_cost = smac.validate(mlp.configspace.get_default_configuration())
print(f"Validated costs from default config: \n--- {default_cost}\n")

print("Validated costs from the Pareto front (incumbents):")
for incumbent in incumbents:
cost = smac.validate(incumbent)
print("---", cost)

# Let's plot the Pareto front
plot_pareto(smac, incumbents)
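The incumbents returned by smac.optimize() above form a Pareto front over the averaged costs, which plot_pareto highlights in red. For reference, a minimal non-dominated filter over a cost matrix — a sketch for illustration, not SMAC's internal implementation, assuming all objectives are minimized:

import numpy as np


def pareto_mask(costs: np.ndarray) -> np.ndarray:
    # costs: shape (n_configs, n_objectives), all objectives minimized.
    mask = np.ones(len(costs), dtype=bool)
    for i in range(len(costs)):
        # Row i is dominated if some other row is no worse in every objective
        # and strictly better in at least one.
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask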
3 changes: 3 additions & 0 deletions setup.py
@@ -24,6 +24,9 @@ def read_file(filepath: str) -> str:
"pyrfr": [
"pyrfr>=0.9.0",
],
"mosmac": [
"pygmo"
],
"dev": [
"setuptools",
"types-setuptools",
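With this extra, the multi-objective dependencies can presumably be installed via pip install "smac[mosmac]". The added dependency, pygmo, ships a hypervolume utility, which is presumably what the hypervolume-based acquisition machinery relies on. A small sketch using pygmo's documented hypervolume API (the points and reference point are illustrative):

import pygmo as pg

# Three mutually non-dominated points (minimization) and a reference point
# that is worse than every point in every objective.
front = [[0.2, 0.8], [0.5, 0.5], [0.8, 0.3]]
reference_point = [1.0, 1.0]

hv = pg.hypervolume(front)
print(hv.compute(reference_point))  # volume dominated by the front up to the reference point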
2 changes: 2 additions & 0 deletions smac/__init__.py
@@ -31,6 +31,7 @@
HyperbandFacade,
HyperparameterOptimizationFacade,
MultiFidelityFacade,
+ MultiObjectiveFacade,
RandomFacade,
)
from smac.runhistory.runhistory import RunHistory
@@ -45,6 +46,7 @@
"AlgorithmConfigurationFacade",
"RandomFacade",
"HyperbandFacade",
"MultiObjectiveFacade",
"Callback",
]
except ModuleNotFoundError as e:
2 changes: 2 additions & 0 deletions smac/acquisition/function/__init__.py
@@ -3,6 +3,7 @@
)
from smac.acquisition.function.confidence_bound import LCB
from smac.acquisition.function.expected_improvement import EI, EIPS
+ from smac.acquisition.function.hypervolume import PHVI
from smac.acquisition.function.integrated_acquisition_function import (
IntegratedAcquisitionFunction,
)
@@ -21,4 +22,5 @@
"TS",
"PriorAcquisitionFunction",
"IntegratedAcquisitionFunction",
"PHVI",
]
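After the two __init__ changes above, both new public names are importable. A quick smoke test, exercising only names this diff actually exports (PHVI's constructor arguments are not shown in the diff, so it is imported but not instantiated):

from smac import MultiObjectiveFacade
from smac.acquisition.function import PHVI

print(MultiObjectiveFacade.__name__, PHVI.__name__)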
2 changes: 1 addition & 1 deletion smac/acquisition/function/abstract_acquisition_function.py
@@ -65,7 +65,7 @@ def update(self, model: AbstractModel, **kwargs: Any) -> None:
self._update(**kwargs)

def _update(self, **kwargs: Any) -> None:
"""Update acsquisition function attributes
"""Update acquisition function attributes

Might be different for each child class.
"""
2 changes: 1 addition & 1 deletion smac/acquisition/function/expected_improvement.py
@@ -83,7 +83,7 @@ def meta(self) -> dict[str, Any]:  # noqa: D102
return meta

def _update(self, **kwargs: Any) -> None:
"""Update acsquisition function attributes
"""Update acquisition function attributes

Parameters
----------