I read the paper (thanks), but I am still puzzled: I don't see any ground-breaking improvements in precision or performance over RF or GB. What is the big benefit?
Thanks
Hi Hristo,
indeed, there is no ground-breaking improvement. However, in both cases the algorithm was compared on problems where the competitors are very strong, using tree hyper-parameters known to be appropriate for those competitors (RF and GB are normally used with very different kinds of trees).
Since this is a single algorithm, I think that is a good result.
While I'm far from insisting that anyone should use this approach, the following points look very interesting to me:
- In this modification, boosting becomes a converging algorithm. From a theoretical point of view, that is a nice property.
- We wanted to reproduce the behavior of random forest, where one can simply leave ensemble training running for hours without worrying that "there are too many trees"; convergence makes this possible.
- We replaced GB's shrinkage parameter with a capacity parameter, which characterizes the state the algorithm converges to (and thus can be changed at any moment). Tuning the learning rate (plus the number of trees) is a common procedure for GB, which we tried to avoid: in InfiniteBoost we introduced an automated search for capacity. This works in some cases (a rough sketch of the capacity idea is given after this list).
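To make the last two points concrete, here is a minimal sketch in plain numpy/scikit-learn of how a capacity parameter can take the place of GB's shrinkage: the ensemble is kept as `capacity * (average of tree predictions)`, so it stays bounded and approaches a limit as trees are added, and `capacity` is just a multiplicative factor that can be changed at any time. This is an illustrative simplification, not the project's own code; the function names (`fit_capacity_boosting`, `predict`), the uniform averaging weights, and the default values of `capacity`, `n_trees`, and `max_depth` are assumptions, and the real InfiniteBoost update rule, loss handling, and automated capacity search are described in the paper and repository.

```python
# A rough sketch, NOT the InfiniteBoost implementation: the weights, the handling
# of other losses, and the automated capacity search in the real algorithm differ.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def fit_capacity_boosting(X, y, capacity=10.0, n_trees=500, max_depth=4, seed=0):
    """Squared-loss regression; each tree is fit to the current residuals.

    The ensemble is kept as `capacity * (average of tree predictions)`,
    so its scale is fixed by `capacity` and does not grow with n_trees.
    """
    rng = np.random.RandomState(seed)
    trees = []
    avg_tree_pred = np.zeros(len(y))  # running average of the trees' predictions
    for k in range(1, n_trees + 1):
        ensemble_pred = capacity * avg_tree_pred
        residuals = y - ensemble_pred  # negative gradient of the squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth,
                                     random_state=rng.randint(10 ** 6))
        tree.fit(X, residuals)
        # update the running (uniform) average: avg_k = mean(pred_1, ..., pred_k);
        # new trees refine this average instead of enlarging the ensemble's scale,
        # which is what lets training keep running without harm
        avg_tree_pred += (tree.predict(X) - avg_tree_pred) / k
        trees.append(tree)
    return trees


def predict(trees, capacity, X):
    # capacity is a plain multiplicative factor, so it can be adjusted after training
    return capacity * np.mean([tree.predict(X) for tree in trees], axis=0)
```

In this formulation the role that `learning_rate` times the number of trees plays in ordinary GB collapses into the single `capacity` value, which is the quantity the automated search mentioned above would tune.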