public class ActivationPReLU extends BaseActivationFunction
Parametrized Rectified Linear Unit (PReLU): f(x) = x for x >= 0, and f(x) = alpha * x for x < 0, where alpha is a learnable parameter (optionally shared across axes).
| Constructor and Description |
|---|
| `ActivationPReLU(INDArray alpha)` |
| `ActivationPReLU(INDArray alpha, long[] sharedAxes)` |
| Modifier and Type | Method and Description |
|---|---|
| `org.nd4j.linalg.primitives.Pair<INDArray,INDArray>` | `backprop(INDArray in, INDArray epsilon)` Backpropagate the errors through the activation function, given input z and epsilon dL/da. Returns 2 INDArrays: (a) the gradient dL/dz, calculated from dL/da, and (b) the parameter gradient dL/dW, where W is the weights in the activation function. |
| `INDArray` | `getActivation(INDArray in, boolean training)` Carry out the activation function on the input array (usually known as 'preOut' or 'z'). Implementations must overwrite `in`, transform it in place, and return `in`. Can support separate behaviour during testing vs. training. |
| `String` | `toString()` |
Methods inherited from class BaseActivationFunction: assertShape, getGradientViewArray, getParametersViewArray, numParams, setGradientViewArray, setParametersViewArray

public ActivationPReLU(INDArray alpha)
public ActivationPReLU(INDArray alpha, long[] sharedAxes)
public INDArray getActivation(INDArray in, boolean training)
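The forward rule applied by `getActivation` is elementwise: f(x) = x when x > 0, and f(x) = alpha * x otherwise. A minimal plain-Java sketch of that rule (using `double[]` instead of `INDArray` so it stands alone; it assumes one alpha per element and ignores the in-place contract and `sharedAxes` of the real class):

```java
// Sketch of the PReLU forward rule on plain arrays.
// Assumes one alpha value per element; the real ActivationPReLU stores
// alpha as an INDArray and may share it across axes via sharedAxes.
class PReLUForwardSketch {
    static double[] activate(double[] in, double[] alpha) {
        double[] out = new double[in.length];
        for (int i = 0; i < in.length; i++) {
            // f(x) = x if x > 0, else alpha * x
            out[i] = in[i] > 0 ? in[i] : alpha[i] * in[i];
        }
        return out;
    }
}
```

Note that the real method mutates and returns `in` rather than allocating a fresh output array.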
Specified by: getActivation in interface IActivation

public org.nd4j.linalg.primitives.Pair<INDArray,INDArray> backprop(INDArray in, INDArray epsilon)
Specified by: backprop in interface IActivation

Parameters:
in - Input, before applying the activation function (z, or 'preOut')
epsilon - Gradient to be backpropagated: dL/da, where L is the loss function
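For intuition, backprop through PReLU splits epsilon into the two returned gradients: dL/dz = epsilon where z > 0 and epsilon * alpha otherwise, and the parameter gradient dL/dalpha = epsilon * z where z <= 0 and 0 otherwise. A hedged plain-Java sketch of this rule (`double[]` instead of `INDArray`; the class and method names here are illustrative, not the library's API):

```java
// Sketch of PReLU backprop for f(z) = z > 0 ? z : alpha * z.
// Returns {dLdz, dLdAlpha}, mirroring the Pair<INDArray,INDArray>
// returned by the real backprop method.
class PReLUBackpropSketch {
    static double[][] backprop(double[] z, double[] alpha, double[] epsilon) {
        double[] dLdz = new double[z.length];
        double[] dLdAlpha = new double[z.length];
        for (int i = 0; i < z.length; i++) {
            if (z[i] > 0) {
                dLdz[i] = epsilon[i];            // df/dz = 1
                dLdAlpha[i] = 0.0;               // output does not depend on alpha here
            } else {
                dLdz[i] = epsilon[i] * alpha[i]; // df/dz = alpha
                dLdAlpha[i] = epsilon[i] * z[i]; // df/dalpha = z
            }
        }
        return new double[][]{dLdz, dLdAlpha};
    }
}
```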