public class LayerNorm extends DynamicCustomOp
Nested classes inherited from class DynamicCustomOp: DynamicCustomOp.DynamicCustomOpsBuilder

Fields inherited from class DynamicCustomOp: axis, bArguments, dArguments, iArguments, inplaceCall, inputArguments, outputArguments, outputVariables, tArguments

Fields inherited from class DifferentialFunction: dimensions, extraArgs, inPlace, ownName, ownNameSetWithDefault, sameDiff, scalarValue

| Constructor and Description |
|---|
| LayerNorm(@NonNull INDArray input, @NonNull INDArray gain, boolean channelsFirst, int... dimensions) |
| LayerNorm(INDArray input, INDArray gain, INDArray result, boolean channelsFirst, int... dimensions) |
| LayerNorm(INDArray input, INDArray gain, INDArray bias, INDArray result, boolean channelsFirst, int... dimensions) |
| LayerNorm(SameDiff sameDiff, SDVariable input, SDVariable gain, boolean channelsFirst, int... dimensions) |
| LayerNorm(@NonNull SameDiff sameDiff, @NonNull SDVariable input, @NonNull SDVariable gain, SDVariable bias, boolean channelsFirst, int... dimensions) |
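The constructors above take the input, a per-element gain, an optional bias, and the dimensions to normalize over. As a rough illustration of the computation the op performs, here is a plain-Java sketch for a single normalized vector: standardize to zero mean and unit variance, then scale by gain and shift by bias. The class name, method name, and epsilon value are assumptions for illustration only, not part of the library's API.

```java
// Hypothetical sketch of layer normalization over one vector.
// out = gain * (x - mean) / sqrt(var + eps) + bias
class LayerNormSketch {

    static double[] layerNorm(double[] x, double[] gain, double[] bias, double eps) {
        int n = x.length;
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= n;

        double var = 0.0;
        for (double v : x) var += (v - mean) * (v - mean);
        var /= n;

        double std = Math.sqrt(var + eps); // eps value is an assumption
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            // bias is optional, mirroring the constructors without a bias argument
            out[i] = gain[i] * (x[i] - mean) / std + (bias == null ? 0.0 : bias[i]);
        }
        return out;
    }
}
```

With unit gain and no bias, the output is simply the standardized input, so it sums to zero over the normalized dimension.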
| Modifier and Type | Method and Description |
|---|---|
| List<DataType> | calculateOutputDataTypes(List<DataType> dataTypes) Calculate the data types for the output arrays. |
| List<SDVariable> | doDiff(List<SDVariable> gradient) The actual implementation for automatic differentiation. |
| int | numOutputArguments() |
| String | onnxName() The opName of this function in ONNX. |
| String | opName() Returns the op name as a string. |
| void | setDimensions(int[] dimensions) |
| String | tensorflowName() The opName of this function in TensorFlow. |
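doDiff is where the backward pass for this op is defined. To make concrete what such a backward pass must produce, the following plain-Java sketch computes the analytic gradient of layer normalization with respect to its input, for a single normalized vector, and can be checked against finite differences. The class, method names, and epsilon are hypothetical illustrations, not the library's implementation; writing xhat = (x - mean)/std and a = dy * gain, the input gradient is dx = (a - mean(a) - xhat * mean(a * xhat)) / std.

```java
// Hypothetical sketch of the layer-norm input gradient for one vector.
// forward:  y = gain * (x - mean) / sqrt(var + eps)   (bias omitted; its
//           gradient passes through unchanged)
// backward: dx = (a - mean(a) - xhat * mean(a * xhat)) / std,
//           where a = dy * gain and xhat = (x - mean) / std
class LayerNormGradSketch {

    static double[] forward(double[] x, double[] gain, double eps) {
        int n = x.length;
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= n;
        double var = 0.0;
        for (double v : x) var += (v - mean) * (v - mean);
        var /= n;
        double std = Math.sqrt(var + eps);
        double[] y = new double[n];
        for (int i = 0; i < n; i++) y[i] = gain[i] * (x[i] - mean) / std;
        return y;
    }

    static double[] inputGrad(double[] x, double[] gain, double[] dy, double eps) {
        int n = x.length;
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= n;
        double var = 0.0;
        for (double v : x) var += (v - mean) * (v - mean);
        var /= n;
        double std = Math.sqrt(var + eps);

        double[] xhat = new double[n];
        double[] a = new double[n];
        double meanA = 0.0, meanAX = 0.0;
        for (int i = 0; i < n; i++) {
            xhat[i] = (x[i] - mean) / std;
            a[i] = dy[i] * gain[i];   // upstream gradient scaled by gain
            meanA += a[i];
            meanAX += a[i] * xhat[i];
        }
        meanA /= n;
        meanAX /= n;

        double[] dx = new double[n];
        for (int j = 0; j < n; j++) {
            dx[j] = (a[j] - meanA - xhat[j] * meanAX) / std;
        }
        return dx;
    }
}
```

The two correction terms subtract the gradient components that flow through the mean and the variance; without them the gradient would be just a / std.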
Methods inherited from class DynamicCustomOp: addBArgument, addDArgument, addIArgument, addIArgument, addInputArgument, addOutputArgument, addTArgument, assertValidForExecution, bArgs, builder, calculateOutputShape, calculateOutputShape, clearArrays, dArgs, getBArgument, getDescriptor, getIArgument, getInputArgument, getOutputArgument, getTArgument, iArgs, initFromOnnx, initFromTensorFlow, inputArguments, numBArguments, numDArguments, numIArguments, numInputArguments, numTArguments, opHash, opNum, opType, outputArguments, outputVariables, outputVariables, removeIArgument, removeInputArgument, removeOutputArgument, removeTArgument, setInputArgument, setInputArguments, setOutputArgument, tArgs, toString, wrapFilterNull, wrapOrNull, wrapOrNull

Methods inherited from class DifferentialFunction: arg, arg, argNames, args, attributeAdaptersForFunction, configFieldName, diff, dup, equals, getNumOutputs, getValue, hashCode, isConfigProperties, larg, mappingsForFunction, onnxNames, outputs, outputVariable, outputVariablesNames, propertiesForFunction, rarg, replaceArg, setInstanceId, setPropertiesForFunction, setValueFor, tensorflowNames

Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait

Methods inherited from interface CustomOp: isInplaceCall

public LayerNorm(@NonNull SameDiff sameDiff,
                 @NonNull SDVariable input,
                 @NonNull SDVariable gain,
                 SDVariable bias,
                 boolean channelsFirst,
                 int... dimensions)
public LayerNorm(SameDiff sameDiff, SDVariable input, SDVariable gain, boolean channelsFirst, int... dimensions)
public LayerNorm(INDArray input, INDArray gain, INDArray bias, INDArray result, boolean channelsFirst, int... dimensions)
public LayerNorm(@NonNull INDArray input,
                 @NonNull INDArray gain,
                 boolean channelsFirst,
                 int... dimensions)
public void setDimensions(int[] dimensions)
public String opName()
Description copied from class: DynamicCustomOp
This method returns the op name as a string.
Specified by: opName in interface CustomOp
Overrides: opName in class DynamicCustomOp

public String tensorflowName()
Description copied from class: DifferentialFunction
The opName of this function in TensorFlow.
Overrides: tensorflowName in class DynamicCustomOp

public String onnxName()
Description copied from class: DifferentialFunction
The opName of this function in ONNX.
Overrides: onnxName in class DynamicCustomOp

public List<SDVariable> doDiff(List<SDVariable> gradient)
Description copied from class: DifferentialFunction
The actual implementation for automatic differentiation.
Overrides: doDiff in class DynamicCustomOp

public List<DataType> calculateOutputDataTypes(List<DataType> dataTypes)
Description copied from class: DifferentialFunction
Calculate the data types for the output arrays. Compared to DifferentialFunction.calculateOutputShape(), this method differs in that it does not require the input arrays to be populated. This is important as it allows us to do greedy datatype inference for the entire net, even if arrays are not available.
Overrides: calculateOutputDataTypes in class DifferentialFunction
Parameters: dataTypes - The data types of the inputs

public int numOutputArguments()
Specified by: numOutputArguments in interface CustomOp
Overrides: numOutputArguments in class DynamicCustomOp

Copyright © 2021. All rights reserved.