Conversation

@nastaran78 (Collaborator) commented Oct 13, 2025

Summary

This pull request introduces a new public API, neps.import_trials, allowing users to import pre-evaluated configurations into an optimization run. This enables "warm-starting" an optimization using results from external sources, such as a previous random search or another HPO algorithm.

Motivation

Currently, neps primarily supports continuing an optimization from trials it generated itself. However, a common use case is to leverage existing data from prior experiments to guide a new, more advanced optimization process. For example, a user might want to use a set of randomly evaluated configurations as the initial design for a Bayesian Optimization run.

This feature addresses that gap by providing a formal, validated way to inject external data into the pipeline.

Implementation Details

New Public API: A new function, neps.import_trials, has been added to neps/api.py. This function serves as the high-level entry point for users.

Optimizer-Specific Logic: The top-level API function delegates the core import logic to the specific optimizer instance being used. This ensures that each optimizer can handle validation and state integration according to its unique requirements (e.g., handling fidelities for PriMO, creating rung IDs for bracket-based optimizers, etc.).
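As a rough sketch of that delegation (the function body below is illustrative, not the actual code in neps/api.py; the real function presumably also manages state under root_directory, which is omitted here):

```python
# Illustrative sketch only: the top-level API validates inputs minimally and
# hands the core import work to the optimizer instance.
def import_trials(pipeline_space, evaluated_trials, root_directory, optimizer):
    """High-level entry point: basic checks, then delegate to the optimizer."""
    if not evaluated_trials:
        raise ValueError("evaluated_trials must contain at least one (config, result) pair")
    # Each optimizer implements its own import logic (validation, trial IDs, state).
    return optimizer.import_trials(evaluated_trials)
```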

For Optimizer Developers

To support this feature, all optimizer classes must now implement an import_trials method. This method is responsible for:

Validating that the provided configurations and results are compatible with the optimizer's search space and requirements.

Correctly formatting the data and integrating it into the optimizer's internal state.
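To make these two responsibilities concrete, here is a minimal sketch of what an import_trials implementation might look like for a simple optimizer. The class, its attributes, and the return value are assumptions for illustration only, not neps internals:

```python
# Illustrative optimizer-side implementation; real optimizers differ according
# to their own validation constraints and state-management needs.
class SimpleOptimizer:
    def __init__(self, parameter_names):
        self.parameter_names = set(parameter_names)
        self.trials = []  # internal state: list of (trial_id, config, objective)

    def import_trials(self, evaluated_trials):
        for config, result in evaluated_trials:
            # 1. Validate compatibility with the search space and requirements.
            if set(config) != self.parameter_names:
                raise ValueError(f"config keys {set(config)} do not match the search space")
            if "objective_to_minimize" not in result:
                raise ValueError("result must contain 'objective_to_minimize'")
            # 2. Format the data and integrate it into internal state
            #    with a freshly generated trial ID.
            trial_id = len(self.trials)
            self.trials.append((trial_id, dict(config), result["objective_to_minimize"]))
        return len(self.trials)
```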

Usage Example

```python
import neps
from neps.state.pipeline_eval import UserResultDict

# Assume 'pipeline_space' and 'optimizer' are already defined
evaluated_trials = [
    (
        {  # Configuration dictionary
            "float1": 0.5417,
            "float2": 3.3333,
            "categorical": 1,
            "integer1": 0,
            "integer2": 1000,
        },
        # Result dictionary
        UserResultDict(objective_to_minimize=-1011.5417)
    ),
    # ... more pre-evaluated trials can be added here
]

neps.import_trials(
    pipeline_space=pipeline_space,
    evaluated_trials=evaluated_trials,
    root_directory=f"results_{optimizer}",
    optimizer=optimizer
)
```

@Meganton (Collaborator)

Does the user have to use a UserResultDict? As this is not a requirement for the evaluate_pipeline function, it shouldn't be one here, imo

@Meganton (Collaborator)

Also I wonder whether it is the best solution to have to implement this method for every single optimizer... This seems like a lot of overhead, especially considering the incoming changes with the new NePS-spaces. Could this not be one central method, in runtime.py e.g.?

@nastaran78 (Collaborator, Author) commented Oct 13, 2025

> Does the user have to use a UserResultDict? As this is not a requirement for the evaluate_pipeline function, it shouldn't be one here, imo

@Meganton Here we exclude raw types like float to enforce a clear data schema via the objective_to_minimize key, and we exclude Exception because this API is only for successfully evaluated trials.
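For illustration, the schema idea can be sketched with a TypedDict. This is only an approximation of the concept, not the actual UserResultDict definition in neps; the optional cost field is an assumption:

```python
from typing import TypedDict

# Rough sketch of the schema idea: results are keyed dictionaries rather than
# bare floats, so the objective is always named explicitly via
# objective_to_minimize. (Any extra fields here are assumptions.)
class UserResultSketch(TypedDict, total=False):
    objective_to_minimize: float
    cost: float  # hypothetical optional field, not confirmed by the source

result = UserResultSketch(objective_to_minimize=-1011.5417)
```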

> Also I wonder whether it is the best solution to have to implement this method for every single optimizer... This seems like a lot of overhead, especially considering the incoming changes with the new NePS-spaces. Could this not be one central method, in runtime.py e.g.?

A single central method is not feasible. Requiring each optimizer to implement import_trials is a polymorphic design (the Strategy Pattern): different optimizers have unique validation constraints (e.g., ifBO's [0, 1] objective range) and state-management needs, particularly in how trial IDs are generated. This approach preserves encapsulation and avoids one complex central function.
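The Strategy Pattern argument can be illustrated with two optimizers that share an interface but diverge in validation. The class names and the simplified [0, 1] check below are illustrative, not neps code:

```python
# Sketch of polymorphic import_trials: each optimizer validates per its own rules.
class BaseOptimizer:
    def import_trials(self, evaluated_trials):
        raise NotImplementedError

class RandomSearchLike(BaseOptimizer):
    def import_trials(self, evaluated_trials):
        # No extra constraints: accept any successfully evaluated trial.
        return [(i, config, result) for i, (config, result) in enumerate(evaluated_trials)]

class IfBOLike(BaseOptimizer):
    def import_trials(self, evaluated_trials):
        # Hypothetical simplification of an ifBO-style constraint:
        # objectives must lie in [0, 1].
        for _, result in evaluated_trials:
            obj = result["objective_to_minimize"]
            if not 0.0 <= obj <= 1.0:
                raise ValueError(f"objective {obj} outside [0, 1]")
        return [(i, config, result) for i, (config, result) in enumerate(evaluated_trials)]
```

A central function in runtime.py would need branching on optimizer type to express these differences, which the per-optimizer method avoids.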
