Hello,

How can I check the convergence of uBoost when using uniforming_rate (alpha) != 0? When I plot the log-loss metric against the number of boosting iterations, it increases at a rate proportional to the alpha value used; you can see this trend in the attached plot. On the other hand, I can make the log-loss converge with a different hyper-parameter configuration (for the same alpha), but then I don't get a uniform selection. How can I deal with this? Does it mean that log-loss is not a good metric for checking convergence in this case?

uboost_vs_adaboost.pdf
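For reference, a minimal sketch of the check being described (the toy data, feature names, and uniforming_rate value are placeholders; staged_predict_proba is assumed to be available on uBoostBDT, as in hep_ml):

```python
# Minimal sketch of the check described above; the toy data and the
# uniforming_rate value are placeholders, not the actual setup.
import numpy
import pandas
from hep_ml.uboost import uBoostBDT
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = numpy.random.RandomState(42)
n = 4000
X = pandas.DataFrame({'mass': rng.uniform(0, 1, n),
                      'feature': rng.normal(0, 1, n)})
y = (X['feature'] + 0.5 * X['mass'] + rng.normal(0, 1, n) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# alpha != 0: boosting trades classification loss for uniformity in 'mass'
clf = uBoostBDT(uniform_features=['mass'], uniform_label=1,
                uniforming_rate=0.5, n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# log-loss after each boosting iteration -- this is the curve that
# grows roughly in proportion to uniforming_rate
losses = [log_loss(y_test, proba[:, 1])
          for proba in clf.staged_predict_proba(X_test)]
```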
Thanks very much,
Gino
For uBoost, convergence is poorly defined.

First, uBoost has no optimization target (unlike, say, AdaBoost, GBDT, or GB+FL).

Second, the way it operates quite often drives the weights to become biased in one direction, so the predicted probabilities can become very biased and the loss can diverge.

Among the options, I recommend monitoring ROC AUC on a validation set, or some similar discriminative measure.
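Continuing the sketch from the question above, monitoring a discriminative measure per stage might look like this (staged_predict_proba is again an assumption; check your hep_ml version):

```python
# Continuing the sketch above: track ROC AUC on the held-out set
# after each boosting stage, instead of log-loss.
from sklearn.metrics import roc_auc_score

aucs = [roc_auc_score(y_test, proba[:, 1])
        for proba in clf.staged_predict_proba(X_test)]

# a plateau in validation AUC is a more meaningful stopping signal
# here than the (possibly diverging) log-loss
best_stage = int(numpy.argmax(aucs)) + 1
print('best number of estimators by validation AUC:', best_stage)
```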