Scaled Exponential Linear Unit (SELU) is a self-normalizing activation function based on the ELU activation function. The neuronal activations of SELU networks automatically converge toward zero mean and unit variance, without the explicit normalization performed by techniques such as Batch Norm.
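For reference, the function from the Klambauer et al. paper scales ELU by a fixed constant:

```
SELU(x) = λ x              if x > 0
SELU(x) = λ α (exp(x) − 1)  if x ≤ 0
```

where λ ≈ 1.0507 and α ≈ 1.6733 are the constants derived in the paper to give the self-normalizing fixed point at zero mean and unit variance.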
This activation function does not have any parameters.
```php
use Rubix\ML\NeuralNet\ActivationFunctions\SELU;

$activationFunction = new SELU();
```
- G. Klambauer et al. (2017). Self-Normalizing Neural Networks.