Excellent BO algorithm by the way, but I'm a bit confused about how the GP is made local via the trust region in Turbo-1.
From reading your paper, I got the impression that the GP is trained only on local X, Y data (i.e., the data that falls within the trust region). However, looking at the code, I can see that on each iteration of the optimize loop a new GP is trained on the full X and Y datasets (since `self._X`, which is global, is used).
Have I misunderstood how this works?
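For reference, here is a minimal sketch (illustrative only, not taken from the repo) of the two variants being compared: restricting the GP's training set to the points that fall inside the current trust region, versus refitting on everything in `self._X`. The helper names (`train_gp`, `x_center`, `length`) are assumptions for the sketch.

```python
import numpy as np

def fit_local_gp(X, Y, x_center, length, train_gp):
    """Fit a GP only on observations inside the current trust region.

    X, Y     : all observations so far (X assumed scaled to the unit cube)
    x_center : trust-region center (typically the current best point)
    length   : trust-region base side length L
    train_gp : any GP-fitting routine (e.g. a thin GPyTorch wrapper)
    """
    # Axis-aligned trust-region bounds around the center, clipped to [0, 1]^d
    lb = np.clip(x_center - length / 2.0, 0.0, 1.0)
    ub = np.clip(x_center + length / 2.0, 0.0, 1.0)

    # Keep only the points that fall inside the box
    inside = np.all((X >= lb) & (X <= ub), axis=1)
    return train_gp(X[inside], Y[inside])

# By contrast, the behaviour described above amounts to something like
#     gp = train_gp(self._X, ...)   # all data, not just the trust region
# i.e. the GP is refit on the full (global) history every iteration.
```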
Looking at the code, I have doubts too. I have been timing the GP fits per trust region for Turbo-M, and the fitting time only keeps increasing until the TR converges, which suggests that all points are used to fit the GP within a TR regardless of the adapted boundaries.
@dme65 Is this not implemented as intended?
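To make that check concrete, here is a small, hypothetical timing wrapper (not from the repo): if the reported fit time keeps growing with the total number of evaluations even while the trust region shrinks, the GP is presumably being trained on all points rather than only the local ones.

```python
import time

def timed_fit(train_gp, X, Y):
    """Fit a GP and report the wall-clock time and number of training points."""
    t0 = time.perf_counter()
    gp = train_gp(X, Y)
    print(f"fitted GP on {len(X)} points in {time.perf_counter() - t0:.3f}s")
    return gp
```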