SELU#

Scaled Exponential Linear Unit (SELU) is a self-normalizing activation function based on the ELU activation function. Neuronal activations of SELU networks automatically converge toward zero mean and unit variance, without the explicit normalization of layers such as Batch Norm.
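
As a reference sketch based on Klambauer et al. (2017), the function scales an ELU by a fixed constant:

$$ \operatorname{SELU}(x) = \lambda \begin{cases} x & \text{if } x > 0 \\ \alpha \, (e^{x} - 1) & \text{if } x \leq 0 \end{cases} $$

where the constants λ ≈ 1.0507 and α ≈ 1.6733 were derived by the authors to give the self-normalizing property.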

Parameters#

This activation function does not have any parameters.

Example#

use Rubix\ML\NeuralNet\ActivationFunctions\SELU;

$activationFunction = new SELU();
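
As a usage sketch, the instantiated function can be passed to a hidden Activation layer of a neural network-based estimator. The MultilayerPerceptron estimator, Dense and Activation layers, Adam optimizer, and hyperparameter values below are assumptions drawn from the wider Rubix ML API rather than part of this page.

use Rubix\ML\Classifiers\MultilayerPerceptron;
use Rubix\ML\NeuralNet\Layers\Dense;
use Rubix\ML\NeuralNet\Layers\Activation;
use Rubix\ML\NeuralNet\ActivationFunctions\SELU;
use Rubix\ML\NeuralNet\Optimizers\Adam;

// Two hidden Dense layers, each followed by a SELU activation layer,
// trained with mini-batches of 128 samples using the Adam optimizer.
$estimator = new MultilayerPerceptron([
    new Dense(100),
    new Activation(new SELU()),
    new Dense(100),
    new Activation(new SELU()),
], 128, new Adam(0.001));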

References#

  • G. Klambauer et al. (2017). Self-Normalizing Neural Networks.