SGLB: Stochastic Gradient Langevin Boosting
In this paper, the authors introduce Stochastic Gradient Langevin Boosting (SGLB) – a powerful and efficient ML framework that can handle a wide range of loss functions and comes with provable generalization guarantees. The method is based on a special form of the Langevin diffusion equation designed specifically for gradient boosting. This makes it possible to guarantee global convergence, whereas standard gradient boosting algorithms can only guarantee convergence to a local optimum, which is a problem for multimodal loss functions. To illustrate the advantages of SGLB, the authors apply it to a classification task with the 0-1 loss, which is known to be multimodal, and to a standard logistic regression task, which is convex.
The algorithm is implemented as part of the CatBoost gradient boosting library and outperforms classic gradient boosting methods.
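To make the idea concrete, here is a toy sketch of a Langevin-style boosting loop for squared loss. It is not the paper's exact algorithm: the weak learner (a depth-3 sklearn tree), the noise scale sqrt(2/(beta*lr)), and the shrinkage factor (1 - gamma*lr) are illustrative choices that follow the spirit of the discretized Langevin update, where Gaussian noise injected into the gradients lets the ensemble escape local optima.
```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sglb_fit(X, y, n_iters=200, lr=0.1, beta=1e4, gamma=1e-3, seed=0):
    """Toy Langevin-style boosting for squared loss (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    pred = np.zeros(len(y))               # current ensemble prediction F_t on the train set
    trees, weights = [], []
    for _ in range(n_iters):
        grad = pred - y                   # dL/dF for L = 0.5 * (F - y)^2
        # Langevin term: Gaussian noise with std sqrt(2/(beta*lr)),
        # so the effective noise in the update has variance 2*lr/beta
        noise = rng.normal(0.0, np.sqrt(2.0 / (beta * lr)), size=len(y))
        tree = DecisionTreeRegressor(max_depth=3, random_state=0)
        tree.fit(X, -(grad + noise))      # weak learner fits the noisy anti-gradient
        shrink = 1.0 - gamma * lr         # multiplicative shrinkage of the whole ensemble
        pred = shrink * pred + lr * tree.predict(X)
        weights = [w * shrink for w in weights] + [lr]
        trees.append(tree)

    def predict(X_new):
        out = np.zeros(len(X_new))
        for tree, w in zip(trees, weights):
            out += w * tree.predict(X_new)
        return out

    return predict
```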
paper: https://arxiv.org/abs/2001.07248
release: https://github.com/catboost/catboost/releases/tag/v0.21
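Since SGLB ships with CatBoost (see the release above), here is a minimal usage sketch. The langevin and diffusion_temperature parameter names are my assumption based on this release; check the CatBoost docs for the exact spelling in your version.
```python
from catboost import CatBoostClassifier

# Assumed parameters: `langevin` switches from plain stochastic gradient
# boosting to SGLB, and `diffusion_temperature` controls the scale of
# the injected Gaussian noise.
model = CatBoostClassifier(
    iterations=1000,
    learning_rate=0.1,
    langevin=True,
    diffusion_temperature=10000,
    verbose=100,
)
# model.fit(X_train, y_train)  # train as with any CatBoost model
```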
#langevin #boosting #catboost