Gradient Boost (GBM) is a stage-wise additive ensemble that uses a Gradient Descent boosting scheme to train boosters (Decision Trees) that correct the error residuals of the ensemble at each stage, beginning with a weak base learner. Stochastic gradient boosting is achieved by training each booster on a subsample drawn uniformly at random from the training set. GBM also utilizes progress monitoring via an internal validation set for snapshotting and early stopping.
Note: If there are not enough training samples to build an internal validation set with the user-specified holdout ratio, then progress monitoring will be disabled.
Data Type Compatibility: Depends on the base learner
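To make the stage-wise additive scheme concrete, below is a simplified, self-contained plain-PHP sketch of stochastic gradient boosting under squared loss that uses 1-dimensional decision stumps as boosters. It illustrates the idea only and is not GradientBoost's actual implementation; the trainStump() helper and all data in it are made up for the example.

```php
<?php

/**
 * Train a depth-1 regression stump on the residuals and return the split
 * threshold and the mean residual on either side. (Hypothetical helper.)
 */
function trainStump(array $samples, array $residuals) : array
{
    $best = [INF, 0.0, 0.0, INF];

    foreach ($samples as $candidate) {
        $left = $right = [];

        foreach ($samples as $i => $x) {
            if ($x <= $candidate) {
                $left[] = $residuals[$i];
            } else {
                $right[] = $residuals[$i];
            }
        }

        $leftMean = $left ? array_sum($left) / count($left) : 0.0;
        $rightMean = $right ? array_sum($right) / count($right) : 0.0;

        $sse = 0.0;

        foreach ($samples as $i => $x) {
            $pred = $x <= $candidate ? $leftMean : $rightMean;

            $sse += ($residuals[$i] - $pred) ** 2;
        }

        if ($sse < $best[3]) {
            $best = [$candidate, $leftMean, $rightMean, $sse];
        }
    }

    return $best;
}

$samples = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
$targets = [1.2, 1.9, 3.1, 4.2, 4.8, 6.1];

$rate = 0.1;        // learning rate (shrinkage)
$ratio = 0.5;       // subsample ratio for stochastic boosting
$estimators = 200;  // maximum number of boosters

// Start from a weak base learner - here the global mean, like a DummyRegressor.
$f = array_fill(0, count($targets), array_sum($targets) / count($targets));

for ($epoch = 0; $epoch < $estimators; ++$epoch) {
    // Error residuals of the current ensemble (the negative gradient of squared loss).
    $residuals = array_map(fn ($y, $pred) => $y - $pred, $targets, $f);

    // Subsample uniformly at random - this is what makes the boosting "stochastic."
    $indices = (array) array_rand($samples, max(2, (int) round($ratio * count($samples))));

    $subSamples = array_map(fn ($i) => $samples[$i], $indices);
    $subResiduals = array_map(fn ($i) => $residuals[$i], $indices);

    // Train a booster on the residuals of the subsample.
    [$threshold, $leftValue, $rightValue] = trainStump($subSamples, $subResiduals);

    // Take a shrunken step in the direction of the booster's predictions.
    foreach ($samples as $i => $x) {
        $f[$i] += $rate * ($x <= $threshold ? $leftValue : $rightValue);
    }
}

// $f now holds the ensemble's predictions for the training samples.
```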
| # | Param | Default | Type | Description |
|---|-------|---------|------|-------------|
| 1 | booster | RegressionTree | Learner | The regressor that will fix up the error residuals of the weak base learner. |
| 2 | rate | 0.1 | float | The learning rate of the ensemble i.e. the shrinkage applied to each step. |
| 3 | ratio | 0.5 | float | The ratio of samples to subsample from the training set to train each booster. |
| 4 | estimators | 1000 | int | The maximum number of boosters to train in the ensemble. |
| 5 | min change | 1e-4 | float | The minimum change in the training loss necessary to continue training. |
| 6 | window | 10 | int | The number of epochs without improvement in the validation score to wait before considering an early stop. |
| 7 | hold out | 0.1 | float | The proportion of training samples to use for progress monitoring. |
| 8 | metric | RMSE | Metric | The metric used to score the generalization performance of the model during training. |
| 9 | base | DummyRegressor | Learner | The weak base learner to be boosted. |
```php
use Rubix\ML\Regressors\GradientBoost;
use Rubix\ML\Regressors\RegressionTree;
use Rubix\ML\CrossValidation\Metrics\SMAPE;
use Rubix\ML\Regressors\DummyRegressor;
use Rubix\ML\Other\Strategies\Constant;

$estimator = new GradientBoost(new RegressionTree(3), 0.1, 0.8, 1000, 1e-4, 10, 0.1, new SMAPE(), new DummyRegressor(new Constant(0.0)));
```
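The constructed estimator can then be trained and used for inference like any other Rubix ML learner. A usage sketch follows, where $samples and $labels are assumed placeholders for your own data:

```php
use Rubix\ML\Datasets\Labeled;

$dataset = new Labeled($samples, $labels);   // assumed placeholder data

$estimator->train($dataset);

$predictions = $estimator->predict($dataset);
```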
Return the validation scores at each epoch from the last training session:
```php
public scores() : float[]|null
```
Return the training loss at each epoch from the last training session:
```php
public steps() : float[]|null
```
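For instance, assuming the $estimator from the example above has been trained with progress monitoring enabled, the per-epoch progress can be inspected like so:

```php
$scores = $estimator->scores();   // validation scores, one per epoch

$losses = $estimator->steps();    // training losses, one per epoch
```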
- J. H. Friedman. (2001). Greedy Function Approximation: A Gradient Boosting Machine.
- J. H. Friedman. (1999). Stochastic Gradient Boosting.
- Y. Wei et al. (2017). Early stopping for kernel boosting algorithms: A general analysis with localized complexities.