# MLP Regressor
A multilayer feedforward neural network with a continuous output layer suitable for regression problems. Like the Multilayer Perceptron classifier, the MLP Regressor can handle complex non-linear regression problems by forming higher-order representations of the input features using intermediate hidden layers.
**Interfaces:** Estimator, Learner, Online, Verbose, Persistable

**Data Type Compatibility:** Continuous
## Parameters
# | Param | Default | Type | Description |
---|---|---|---|---|
1 | hidden | [] | array | An array composing the user-specified hidden layers of the network in order. |
2 | batch size | 100 | int | The number of training samples to process at a time. |
3 | optimizer | Adam | object | The gradient descent optimizer used to update the network parameters. |
4 | alpha | 1e-4 | float | The amount of L2 regularization to apply to the parameters of the network. |
5 | epochs | 1000 | int | The maximum number of training epochs, i.e. the number of times to iterate over the entire training set before terminating. |
6 | min change | 1e-4 | float | The minimum change in the training loss necessary to continue training. |
7 | window | 3 | int | The number of epochs without improvement in the validation score to wait before considering an early stop. |
8 | holdout | 0.1 | float | The proportion of training samples to use for validation and progress monitoring. |
9 | cost fn | LeastSquares | object | The function that computes the loss associated with an erroneous activation during training. |
10 | metric | RSquared | object | The metric used to score the generalization performance of the model during training. |
## Additional Methods
Return the training loss at each epoch:

```php
public steps() : array
```

Return the validation scores at each epoch:

```php
public scores() : array
```

Return the underlying neural network instance or null if untrained:

```php
public network() : Network|null
```
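For instance, a minimal sketch of inspecting training progress after calling `train()` (the report loop below is illustrative, not part of the library API):

```php
// assumes $estimator is a trained MLPRegressor (see the example below)
$losses = $estimator->steps();   // training loss recorded at each epoch
$scores = $estimator->scores();  // validation score recorded at each epoch

foreach ($losses as $epoch => $loss) {
    echo "Epoch $epoch: loss $loss" . PHP_EOL;
}
```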
## Example
```php
use Rubix\ML\Regressors\MLPRegressor;
use Rubix\ML\NeuralNet\Layers\Dense;
use Rubix\ML\NeuralNet\Layers\Activation;
use Rubix\ML\NeuralNet\ActivationFunctions\ReLU;
use Rubix\ML\NeuralNet\Optimizers\RMSProp;
use Rubix\ML\NeuralNet\CostFunctions\LeastSquares;
use Rubix\ML\CrossValidation\Metrics\RSquared;

$estimator = new MLPRegressor([
    new Dense(100),
    new Activation(new ReLU()),
    new Dense(100),
    new Activation(new ReLU()),
    new Dense(50),
    new Activation(new ReLU()),
    new Dense(50),
    new Activation(new ReLU()),
], 256, new RMSProp(0.001), 1e-3, 100, 1e-5, 3, 0.1, new LeastSquares(), new RSquared());
```
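Once instantiated, the estimator can be trained and used for inference like any other learner. A minimal sketch, using a hypothetical labeled dataset of continuous features and labels:

```php
use Rubix\ML\Datasets\Labeled;

// hypothetical training data
$samples = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]];
$labels = [1.5, 2.5, 3.5];

$dataset = new Labeled($samples, $labels);

$estimator->train($dataset);

$predictions = $estimator->predict($dataset);
```

Since the estimator implements the Online interface, it can also be trained incrementally on additional data with `partial()`.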
## References
- G. E. Hinton. (1989). Connectionist learning procedures.
- L. Prechelt. (1997). Early Stopping - but when?