Uses of Class
org.nd4j.autodiff.loss.LossReduce
Uses of LossReduce in org.nd4j.autodiff.loss
Methods in org.nd4j.autodiff.loss that return LossReduce:

- static LossReduce LossReduce.valueOf(String name)
  Returns the enum constant of this type with the specified name.
- static LossReduce[] LossReduce.values()
  Returns an array containing the constants of this enum type, in the order they are declared.
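The valueOf/values pair above follows the standard java.lang.Enum contract, which can be demonstrated standalone. The constant names in this sketch are illustrative stand-ins for typical reduction modes; check the LossReduce source for the authoritative list:

```java
// Standalone demo of the valueOf/values contract described above.
// The constants mirror common reduction modes and are illustrative only.
public class LossReduceDemo {
    enum LossReduce { NONE, SUM, MEAN_BY_WEIGHT, MEAN_BY_NONZERO_WEIGHT_COUNT }

    public static void main(String[] args) {
        // values() returns the constants in declaration order
        LossReduce[] all = LossReduce.values();
        System.out.println(all.length); // 4
        System.out.println(all[0]);     // NONE

        // valueOf(String) maps a name back to its constant
        System.out.println(LossReduce.valueOf("SUM") == LossReduce.SUM); // true

        // An unknown name throws IllegalArgumentException
        try {
            LossReduce.valueOf("AVERAGE");
        } catch (IllegalArgumentException e) {
            System.out.println("unknown name rejected");
        }
    }
}
```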
Uses of LossReduce in org.nd4j.autodiff.samediff.ops
Methods in org.nd4j.autodiff.samediff.ops with parameters of type LossReduce:

- SDVariable SDLoss.absoluteDifference(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
- SDVariable SDLoss.absoluteDifference(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
  Absolute difference loss: sum_i abs(label[i] - predictions[i])

- SDVariable SDLoss.cosineDistance(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension)
- SDVariable SDLoss.cosineDistance(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension)
  Cosine distance loss: 1 - cosineSimilarity(x,y), or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized.
  Note: this loss function assumes that both the predictions and labels are normalized to have unit L2 norm. If this is not the case, normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true).

- SDVariable SDLoss.hingeLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
- SDVariable SDLoss.hingeLoss(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
  Hinge loss: a loss function used for training classifiers. Implements L = max(0, 1 - t * predictions), where t is the label value after internal conversion from the user-specified {0,1} to {-1,1}.

- SDVariable SDLoss.huberLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta)
- SDVariable SDLoss.huberLoss(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta)
  Huber loss function, used for robust regression.

- SDVariable SDLoss.logLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon)
- SDVariable SDLoss.logLoss(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon)
  Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.

- SDVariable SDLoss.logPoisson(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, boolean full)
- SDVariable SDLoss.logPoisson(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, boolean full)
  Log Poisson loss: a loss function used for training classifiers. Implements L = exp(c) - z * c, where c is log(predictions) and z is labels.

- SDVariable SDLoss.meanPairwiseSquaredError(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
- SDVariable SDLoss.meanPairwiseSquaredError(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
  Mean pairwise squared error. MPWSE loss calculates the difference between pairs of elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels = [l0, l1, l2], then MPWSE is:
  [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3

- SDVariable SDLoss.meanSquaredError(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
- SDVariable SDLoss.meanSquaredError(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
  Mean squared error loss function.

- SDVariable SDLoss.sigmoidCrossEntropy(String name, SDVariable label, SDVariable predictionLogits, SDVariable weights, LossReduce lossReduce, double labelSmoothing)
- SDVariable SDLoss.sigmoidCrossEntropy(SDVariable label, SDVariable predictionLogits, SDVariable weights, LossReduce lossReduce, double labelSmoothing)
  Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions") and implements the binary cross entropy loss function.

- SDVariable SDLoss.softmaxCrossEntropy(String name, SDVariable oneHotLabels, SDVariable logitPredictions, SDVariable weights, LossReduce lossReduce, double labelSmoothing)
- SDVariable SDLoss.softmaxCrossEntropy(SDVariable oneHotLabels, SDVariable logitPredictions, SDVariable weights, LossReduce lossReduce, double labelSmoothing)
  Applies the softmax activation function to the input, then implements multi-class cross entropy: -sum_c label[c] * log(p[c]), where p = softmax(logits). If LossReduce#NONE is used, the returned shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar.
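The MPWSE formula quoted above can be verified with a small standalone computation (plain Java, no ND4J dependency). The generalization to all n*(n-1)/2 pairs is the common definition of mean pairwise error and matches the 3-element example; ND4J's exact handling of weights and reduction modes is not shown here:

```java
// Standalone check of the mean pairwise squared error (MPWSE) formula:
// the average of ((p_i - p_j) - (l_i - l_j))^2 over all pairs i < j.
public class MpwseCheck {
    static double mpwse(double[] p, double[] l) {
        double sum = 0;
        int pairs = 0;
        for (int i = 0; i < p.length; i++) {
            for (int j = i + 1; j < p.length; j++) {
                double d = (p[i] - p[j]) - (l[i] - l[j]);
                sum += d * d;
                pairs++;
            }
        }
        return sum / pairs;
    }

    public static void main(String[] args) {
        double[] predictions = {1.0, 2.0, 4.0};
        double[] labels = {1.0, 3.0, 3.0};
        // Expanded 3-element form from the docs:
        // [((p0-p1)-(l0-l1))^2 + ((p0-p2)-(l0-l2))^2 + ((p1-p2)-(l1-l2))^2] / 3
        double expected = (Math.pow((1 - 2) - (1 - 3), 2)
                         + Math.pow((1 - 4) - (1 - 3), 2)
                         + Math.pow((2 - 4) - (3 - 3), 2)) / 3.0;
        System.out.println(mpwse(predictions, labels) == expected); // true
    }
}
```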
Uses of LossReduce in org.nd4j.autodiff.samediff.serde
Methods in org.nd4j.autodiff.samediff.serde that return LossReduce:

- static LossReduce FlatBuffersMapper.getLossReduceFromByte(byte input)
  Converts the input byte to the equivalent LossReduce; throws an IllegalArgumentException if the value is not found.

Methods in org.nd4j.autodiff.samediff.serde with parameters of type LossReduce:

- static byte FlatBuffersMapper.getLossFunctionAsByte(@NonNull LossReduce lossReduce)
  Converts the LossReduce enum to its FlatBuffers byte equivalent.
Uses of LossReduce in org.nd4j.linalg.api.ops.impl.loss
Fields in org.nd4j.linalg.api.ops.impl.loss declared as LossReduce:

- protected LossReduce BaseLoss.lossReduce

Constructors in org.nd4j.linalg.api.ops.impl.loss with parameters of type LossReduce:

- AbsoluteDifferenceLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- AbsoluteDifferenceLoss(SameDiff sameDiff, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
- AbsoluteDifferenceLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce)
- BaseLoss(@NonNull LossReduce lossReduce, @NonNull INDArray predictions, INDArray weights, @NonNull INDArray labels)
- BaseLoss(@NonNull SameDiff sameDiff, @NonNull LossReduce lossReduce, @NonNull SDVariable predictions, SDVariable weights, @NonNull SDVariable labels)
- CosineDistanceLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, int dimension)
- CosineDistanceLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension)
- CosineDistanceLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, int dimension)
- HingeLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- HingeLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
- HingeLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce)
- HuberLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, double delta)
- HuberLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta)
- HuberLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, double delta)
- LogLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, double epsilon)
- LogLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon)
- LogLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, double epsilon)
- LogPoissonLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- LogPoissonLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, boolean full)
- LogPoissonLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce, boolean full)
- LogPoissonLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, boolean full)
- MeanPairwiseSquaredErrorLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- MeanPairwiseSquaredErrorLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
- MeanPairwiseSquaredErrorLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce)
- MeanSquaredErrorLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- MeanSquaredErrorLoss(SameDiff sameDiff, SDVariable labels, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
- MeanSquaredErrorLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce)
- SigmoidCrossEntropyLoss(SameDiff sameDiff, LossReduce reductionMode, SDVariable logits, SDVariable weights, SDVariable labels)
- SigmoidCrossEntropyLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels, double labelSmoothing)
- SigmoidCrossEntropyLoss(SameDiff sameDiff, SDVariable labels, SDVariable logits, SDVariable weights, LossReduce lossReduce, double labelSmoothing)
- SigmoidCrossEntropyLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, double labelSmoothing)
- SoftmaxCrossEntropyLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels)
- SoftmaxCrossEntropyLoss(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels, double labelSmoothing)
- SoftmaxCrossEntropyLoss(SameDiff sameDiff, SDVariable labels, SDVariable logits, SDVariable weights, LossReduce lossReduce, double labelSmoothing)
- SoftmaxCrossEntropyLoss(INDArray labels, INDArray predictions, INDArray weights, LossReduce lossReduce, double labelSmoothing)
- WeightedCrossEntropyLoss(@NonNull LossReduce lossReduce, @NonNull INDArray predictions, INDArray weights, @NonNull INDArray labels)
- WeightedCrossEntropyLoss(@NonNull SameDiff sameDiff, @NonNull LossReduce lossReduce, @NonNull SDVariable predictions, SDVariable weights, @NonNull SDVariable labels)
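The log-Poisson expression used by the LogPoissonLoss ops above, L = exp(c) - z * c with c = log(predictions), can be sanity-checked standalone. Weights, reductions, and the extra normalization term enabled by the full flag are omitted; this sketch covers only the core per-element formula:

```java
// Core log-Poisson loss term from the docs: L = exp(c) - z * c,
// where c = log(predictions) and z = labels. Since exp(log(p)) = p,
// the per-element loss simplifies to p - z * log(p).
public class LogPoissonCheck {
    static double logPoisson(double prediction, double label) {
        double c = Math.log(prediction);
        return Math.exp(c) - label * c;
    }

    public static void main(String[] args) {
        double p = Math.E; // log(p) = 1, so the loss reduces to e - z
        double z = 2.0;
        System.out.println(Math.abs(logPoisson(p, z) - (Math.E - 2.0)) < 1e-12); // true
    }
}
```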
Uses of LossReduce in org.nd4j.linalg.api.ops.impl.loss.bp
Fields in org.nd4j.linalg.api.ops.impl.loss.bp declared as LossReduce:

- protected LossReduce BaseLossBp.lossReduce

Constructors in org.nd4j.linalg.api.ops.impl.loss.bp with parameters of type LossReduce:

- AbsoluteDifferenceLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- BaseLossBp(@NonNull SameDiff sameDiff, @NonNull LossReduce lossReduce, @NonNull SDVariable predictions, @NonNull SDVariable weights, @NonNull SDVariable labels)
- CosineDistanceLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, int dimension)
- HingeLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- HuberLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, double delta)
- LogLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, double epsilon)
- LogPoissonLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- LogPoissonLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels, boolean full)
- MeanPairwiseSquaredErrorLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- MeanSquaredErrorLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable predictions, SDVariable weights, SDVariable labels)
- SigmoidCrossEntropyLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels, double labelSmoothing)
- SoftmaxCrossEntropyLossBp(SameDiff sameDiff, LossReduce lossReduce, SDVariable logits, SDVariable weights, SDVariable labels, double labelSmoothing)
Uses of LossReduce in org.nd4j.linalg.factory.ops
Methods in org.nd4j.linalg.factory.ops with parameters of type LossReduce:

- INDArray NDLoss.absoluteDifference(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce)
  Absolute difference loss: sum_i abs(label[i] - predictions[i])

- INDArray NDLoss.cosineDistance(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce, int dimension)
  Cosine distance loss: 1 - cosineSimilarity(x,y), or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized.
  Note: this loss function assumes that both the predictions and labels are normalized to have unit L2 norm. If this is not the case, normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true).

- INDArray NDLoss.hingeLoss(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce)
  Hinge loss: a loss function used for training classifiers. Implements L = max(0, 1 - t * predictions), where t is the label value after internal conversion from the user-specified {0,1} to {-1,1}.

- INDArray NDLoss.huberLoss(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce, double delta)
  Huber loss function, used for robust regression.

- INDArray NDLoss.logLoss(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce, double epsilon)
  Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.

- INDArray NDLoss.logPoisson(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce, boolean full)
  Log Poisson loss: a loss function used for training classifiers. Implements L = exp(c) - z * c, where c is log(predictions) and z is labels.

- INDArray NDLoss.meanPairwiseSquaredError(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce)
  Mean pairwise squared error. MPWSE loss calculates the difference between pairs of elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels = [l0, l1, l2], then MPWSE is:
  [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3

- INDArray NDLoss.meanSquaredError(INDArray label, INDArray predictions, INDArray weights, LossReduce lossReduce)
  Mean squared error loss function.

- INDArray NDLoss.sigmoidCrossEntropy(INDArray label, INDArray predictionLogits, INDArray weights, LossReduce lossReduce, double labelSmoothing)
  Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions") and implements the binary cross entropy loss function.

- INDArray NDLoss.softmaxCrossEntropy(INDArray oneHotLabels, INDArray logitPredictions, INDArray weights, LossReduce lossReduce, double labelSmoothing)
  Applies the softmax activation function to the input, then implements multi-class cross entropy: -sum_c label[c] * log(p[c]), where p = softmax(logits). If LossReduce#NONE is used, the returned shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar.
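The hinge-loss description above (labels in {0,1} internally mapped to {-1,1}, then L = max(0, 1 - t * prediction)) can be illustrated standalone. The SUM reduction at the end only sketches what a LossReduce mode does to the per-element losses; ND4J's exact weighted reduction variants are not reproduced here:

```java
// Standalone sketch of hinge loss following the formula in the docs:
// t = 2*label - 1 maps {0,1} -> {-1,1}, then L_i = max(0, 1 - t_i * prediction_i).
public class HingeLossSketch {
    static double[] hinge(double[] labels01, double[] predictions) {
        double[] out = new double[labels01.length];
        for (int i = 0; i < out.length; i++) {
            double t = 2.0 * labels01[i] - 1.0; // {0,1} -> {-1,1}
            out[i] = Math.max(0.0, 1.0 - t * predictions[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] labels = {1.0, 0.0, 1.0};
        double[] preds  = {2.0, -3.0, 0.5};
        // NONE-style result: one loss value per element
        // element 0: t =  1, max(0, 1 - 2)   = 0.0 (confident, correct)
        // element 1: t = -1, max(0, 1 - 3)   = 0.0 (confident, correct)
        // element 2: t =  1, max(0, 1 - 0.5) = 0.5 (correct but inside the margin)
        double[] perElement = hinge(labels, preds);
        double sum = 0;
        for (double v : perElement) sum += v; // SUM-style reduction
        System.out.println(perElement[2]); // 0.5
        System.out.println(sum);           // 0.5
    }
}
```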