public class BiasAddGrad extends DynamicCustomOp
Nested classes inherited from class `DynamicCustomOp`: `DynamicCustomOp.DynamicCustomOpsBuilder`

| Modifier and Type | Field and Description |
|---|---|
| `protected boolean` | `nchw` |

Fields inherited from class `DynamicCustomOp`: `axis`, `bArguments`, `dArguments`, `iArguments`, `inplaceCall`, `inputArguments`, `outputArguments`, `outputVariables`, `tArguments`

Fields inherited from class `DifferentialFunction`: `dimensions`, `extraArgs`, `inPlace`, `ownName`, `ownNameSetWithDefault`, `sameDiff`, `scalarValue`

| Constructor and Description |
|---|
| `BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient)` |
| `BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient, boolean nchw)` |
| `BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient, INDArray output)` |
| `BiasAddGrad(SameDiff sameDiff, SDVariable input, SDVariable bias, SDVariable gradient, boolean nchw)` |
| Modifier and Type | Method and Description |
|---|---|
| `List<DataType>` | `calculateOutputDataTypes(List<DataType> inputDataTypes)` Calculate the data types for the output arrays. |
| `List<SDVariable>` | `doDiff(List<SDVariable> f1)` The actual implementation for automatic differentiation. |
| `String` | `onnxName()` The opName of this function in onnx. |
| `String` | `opName()` Returns the op name as a string. |
| `int` | `opNum()` The number of the op (mainly for old legacy XYZ ops like `Op`). |
Methods inherited from class `DynamicCustomOp`: `addBArgument`, `addDArgument`, `addIArgument`, `addInputArgument`, `addOutputArgument`, `addTArgument`, `assertValidForExecution`, `bArgs`, `builder`, `calculateOutputShape`, `clearArrays`, `dArgs`, `getBArgument`, `getDescriptor`, `getIArgument`, `getInputArgument`, `getOutputArgument`, `getTArgument`, `iArgs`, `initFromOnnx`, `initFromTensorFlow`, `inputArguments`, `numBArguments`, `numDArguments`, `numIArguments`, `numInputArguments`, `numOutputArguments`, `numTArguments`, `opHash`, `opType`, `outputArguments`, `outputVariables`, `removeIArgument`, `removeInputArgument`, `removeOutputArgument`, `removeTArgument`, `setInputArgument`, `setInputArguments`, `setOutputArgument`, `tArgs`, `tensorflowName`, `toString`, `wrapFilterNull`, `wrapOrNull`

Methods inherited from class `DifferentialFunction`: `arg`, `argNames`, `args`, `attributeAdaptersForFunction`, `configFieldName`, `diff`, `dup`, `equals`, `getNumOutputs`, `getValue`, `hashCode`, `isConfigProperties`, `larg`, `mappingsForFunction`, `onnxNames`, `outputs`, `outputVariable`, `outputVariablesNames`, `propertiesForFunction`, `rarg`, `replaceArg`, `setInstanceId`, `setPropertiesForFunction`, `setValueFor`, `tensorflowNames`

Methods inherited from class `Object`: `clone`, `finalize`, `getClass`, `notify`, `notifyAll`, `wait`

Methods inherited from interface `CustomOp`: `isInplaceCall`

public BiasAddGrad(SameDiff sameDiff, SDVariable input, SDVariable bias, SDVariable gradient, boolean nchw)
public BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient, INDArray output)

public BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient, boolean nchw)
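As a rough intuition for what this op computes (a sketch of the underlying math, not the ND4J implementation): the gradient of a bias-add with respect to the input is the upstream gradient passed through unchanged, while the gradient with respect to the bias is the upstream gradient summed over every dimension except the channel dimension (dimension 1 when `nchw` is true). The class and method names below (`BiasAddGradSketch`, `biasGradientNchw`) are illustrative, not part of the API:

```java
// Plain-Java sketch (no ND4J dependency) of the bias-gradient reduction
// that BiasAddGrad performs for a 4D activation tensor in NCHW layout.
public class BiasAddGradSketch {

    // upstream gradient shape: [n][c][h][w]; returns one value per channel
    static double[] biasGradientNchw(double[][][][] gradient) {
        int channels = gradient[0].length;
        double[] biasGrad = new double[channels];
        for (double[][][] sample : gradient)          // sum over N
            for (int c = 0; c < channels; c++)
                for (double[] row : sample[c])        // sum over H
                    for (double v : row)              // sum over W
                        biasGrad[c] += v;
        return biasGrad;
    }

    public static void main(String[] args) {
        // 1 sample, 2 channels, 2x2 spatial:
        // channel 0 is all ones, channel 1 is all twos
        double[][][][] grad = {{
            {{1, 1}, {1, 1}},
            {{2, 2}, {2, 2}},
        }};
        // each channel's bias gradient is the sum of its 4 spatial values
        System.out.println(java.util.Arrays.toString(biasGradientNchw(grad)));
        // prints [4.0, 8.0]
    }
}
```

The gradient with respect to the input needs no computation at all, which is why the identity pass-through is not shown.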
public int opNum()
The number of the op (mainly for old legacy XYZ ops like `Op`).
Overrides: `opNum` in class `DynamicCustomOp`

public String opName()
Returns the op name as a string.
Specified by: `opName` in interface `CustomOp`
Overrides: `opName` in class `DynamicCustomOp`

public List<SDVariable> doDiff(List<SDVariable> f1)
The actual implementation for automatic differentiation.
Overrides: `doDiff` in class `DynamicCustomOp`

public String onnxName()
The opName of this function in onnx.
Overrides: `onnxName` in class `DynamicCustomOp`

public List<DataType> calculateOutputDataTypes(List<DataType> inputDataTypes)
Calculate the data types for the output arrays. Unlike `DifferentialFunction.calculateOutputShape()`, this method does not require the input arrays to be populated. This is important, as it allows greedy datatype inference for the entire net even when arrays are not available.
Overrides: `calculateOutputDataTypes` in class `DifferentialFunction`
Parameters: `inputDataTypes` - the data types of the inputs

Copyright © 2021. All rights reserved.