public static final class OnnxMl.TrainingInfoProto.Builder extends com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder> implements OnnxMl.TrainingInfoProtoOrBuilder
Training information. TrainingInfoProto stores information for training a model. In particular, this defines two functionalities: an initialization step and a training-algorithm step. Initialization resets the model back to its original state, as if no training had been performed. The training algorithm improves the model based on input data. The semantics of the initialization step are that the initializers in ModelProto.graph and in TrainingInfoProto.algorithm are first initialized as specified by the initializers in the graph, and then updated by the "initialization_binding" in every instance in ModelProto.training_info. The field "algorithm" defines a computation graph that represents one step of a training algorithm. After the execution of a TrainingInfoProto.algorithm, the initializers specified by "update_binding" may be immediately updated. If the targeted training algorithm contains consecutive update steps (such as block coordinate descent methods), the user needs to create a TrainingInfoProto for each step.

Protobuf type onnx.TrainingInfoProto

| Modifier and Type | Method and Description |
|---|---|
| OnnxMl.TrainingInfoProto.Builder | addAllInitializationBinding(java.lang.Iterable<? extends OnnxMl.StringStringEntryProto> values) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.TrainingInfoProto.Builder | addAllUpdateBinding(java.lang.Iterable<? extends OnnxMl.StringStringEntryProto> values) Gradient-based training is usually an iterative procedure. |
| OnnxMl.TrainingInfoProto.Builder | addInitializationBinding(int index, OnnxMl.StringStringEntryProto.Builder builderForValue) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.TrainingInfoProto.Builder | addInitializationBinding(int index, OnnxMl.StringStringEntryProto value) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.TrainingInfoProto.Builder | addInitializationBinding(OnnxMl.StringStringEntryProto.Builder builderForValue) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.TrainingInfoProto.Builder | addInitializationBinding(OnnxMl.StringStringEntryProto value) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.StringStringEntryProto.Builder | addInitializationBindingBuilder() This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.StringStringEntryProto.Builder | addInitializationBindingBuilder(int index) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.TrainingInfoProto.Builder | addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, java.lang.Object value) |
| OnnxMl.TrainingInfoProto.Builder | addUpdateBinding(int index, OnnxMl.StringStringEntryProto.Builder builderForValue) Gradient-based training is usually an iterative procedure. |
| OnnxMl.TrainingInfoProto.Builder | addUpdateBinding(int index, OnnxMl.StringStringEntryProto value) Gradient-based training is usually an iterative procedure. |
| OnnxMl.TrainingInfoProto.Builder | addUpdateBinding(OnnxMl.StringStringEntryProto.Builder builderForValue) Gradient-based training is usually an iterative procedure. |
| OnnxMl.TrainingInfoProto.Builder | addUpdateBinding(OnnxMl.StringStringEntryProto value) Gradient-based training is usually an iterative procedure. |
| OnnxMl.StringStringEntryProto.Builder | addUpdateBindingBuilder() Gradient-based training is usually an iterative procedure. |
| OnnxMl.StringStringEntryProto.Builder | addUpdateBindingBuilder(int index) Gradient-based training is usually an iterative procedure. |
| OnnxMl.TrainingInfoProto | build() |
| OnnxMl.TrainingInfoProto | buildPartial() |
| OnnxMl.TrainingInfoProto.Builder | clear() |
| OnnxMl.TrainingInfoProto.Builder | clearAlgorithm() This field represents a training algorithm step. |
| OnnxMl.TrainingInfoProto.Builder | clearField(com.google.protobuf.Descriptors.FieldDescriptor field) |
| OnnxMl.TrainingInfoProto.Builder | clearInitialization() This field describes a graph to compute the initial tensors upon starting the training process. |
| OnnxMl.TrainingInfoProto.Builder | clearInitializationBinding() This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.TrainingInfoProto.Builder | clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof) |
| OnnxMl.TrainingInfoProto.Builder | clearUpdateBinding() Gradient-based training is usually an iterative procedure. |
| OnnxMl.TrainingInfoProto.Builder | clone() |
| OnnxMl.GraphProto | getAlgorithm() This field represents a training algorithm step. |
| OnnxMl.GraphProto.Builder | getAlgorithmBuilder() This field represents a training algorithm step. |
| OnnxMl.GraphProtoOrBuilder | getAlgorithmOrBuilder() This field represents a training algorithm step. |
| OnnxMl.TrainingInfoProto | getDefaultInstanceForType() |
| static com.google.protobuf.Descriptors.Descriptor | getDescriptor() |
| com.google.protobuf.Descriptors.Descriptor | getDescriptorForType() |
| OnnxMl.GraphProto | getInitialization() This field describes a graph to compute the initial tensors upon starting the training process. |
| OnnxMl.StringStringEntryProto | getInitializationBinding(int index) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.StringStringEntryProto.Builder | getInitializationBindingBuilder(int index) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| java.util.List<OnnxMl.StringStringEntryProto.Builder> | getInitializationBindingBuilderList() This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| int | getInitializationBindingCount() This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| java.util.List<OnnxMl.StringStringEntryProto> | getInitializationBindingList() This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.StringStringEntryProtoOrBuilder | getInitializationBindingOrBuilder(int index) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| java.util.List<? extends OnnxMl.StringStringEntryProtoOrBuilder> | getInitializationBindingOrBuilderList() This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.GraphProto.Builder | getInitializationBuilder() This field describes a graph to compute the initial tensors upon starting the training process. |
| OnnxMl.GraphProtoOrBuilder | getInitializationOrBuilder() This field describes a graph to compute the initial tensors upon starting the training process. |
| OnnxMl.StringStringEntryProto | getUpdateBinding(int index) Gradient-based training is usually an iterative procedure. |
| OnnxMl.StringStringEntryProto.Builder | getUpdateBindingBuilder(int index) Gradient-based training is usually an iterative procedure. |
| java.util.List<OnnxMl.StringStringEntryProto.Builder> | getUpdateBindingBuilderList() Gradient-based training is usually an iterative procedure. |
| int | getUpdateBindingCount() Gradient-based training is usually an iterative procedure. |
| java.util.List<OnnxMl.StringStringEntryProto> | getUpdateBindingList() Gradient-based training is usually an iterative procedure. |
| OnnxMl.StringStringEntryProtoOrBuilder | getUpdateBindingOrBuilder(int index) Gradient-based training is usually an iterative procedure. |
| java.util.List<? extends OnnxMl.StringStringEntryProtoOrBuilder> | getUpdateBindingOrBuilderList() Gradient-based training is usually an iterative procedure. |
| boolean | hasAlgorithm() This field represents a training algorithm step. |
| boolean | hasInitialization() This field describes a graph to compute the initial tensors upon starting the training process. |
| protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable | internalGetFieldAccessorTable() |
| boolean | isInitialized() |
| OnnxMl.TrainingInfoProto.Builder | mergeAlgorithm(OnnxMl.GraphProto value) This field represents a training algorithm step. |
| OnnxMl.TrainingInfoProto.Builder | mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
| OnnxMl.TrainingInfoProto.Builder | mergeFrom(com.google.protobuf.Message other) |
| OnnxMl.TrainingInfoProto.Builder | mergeFrom(OnnxMl.TrainingInfoProto other) |
| OnnxMl.TrainingInfoProto.Builder | mergeInitialization(OnnxMl.GraphProto value) This field describes a graph to compute the initial tensors upon starting the training process. |
| OnnxMl.TrainingInfoProto.Builder | mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields) |
| OnnxMl.TrainingInfoProto.Builder | removeInitializationBinding(int index) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.TrainingInfoProto.Builder | removeUpdateBinding(int index) Gradient-based training is usually an iterative procedure. |
| OnnxMl.TrainingInfoProto.Builder | setAlgorithm(OnnxMl.GraphProto.Builder builderForValue) This field represents a training algorithm step. |
| OnnxMl.TrainingInfoProto.Builder | setAlgorithm(OnnxMl.GraphProto value) This field represents a training algorithm step. |
| OnnxMl.TrainingInfoProto.Builder | setField(com.google.protobuf.Descriptors.FieldDescriptor field, java.lang.Object value) |
| OnnxMl.TrainingInfoProto.Builder | setInitialization(OnnxMl.GraphProto.Builder builderForValue) This field describes a graph to compute the initial tensors upon starting the training process. |
| OnnxMl.TrainingInfoProto.Builder | setInitialization(OnnxMl.GraphProto value) This field describes a graph to compute the initial tensors upon starting the training process. |
| OnnxMl.TrainingInfoProto.Builder | setInitializationBinding(int index, OnnxMl.StringStringEntryProto.Builder builderForValue) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.TrainingInfoProto.Builder | setInitializationBinding(int index, OnnxMl.StringStringEntryProto value) This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. |
| OnnxMl.TrainingInfoProto.Builder | setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, java.lang.Object value) |
| OnnxMl.TrainingInfoProto.Builder | setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields) |
| OnnxMl.TrainingInfoProto.Builder | setUpdateBinding(int index, OnnxMl.StringStringEntryProto.Builder builderForValue) Gradient-based training is usually an iterative procedure. |
| OnnxMl.TrainingInfoProto.Builder | setUpdateBinding(int index, OnnxMl.StringStringEntryProto value) Gradient-based training is usually an iterative procedure. |
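As a rough illustration of how this builder is typically used, the sketch below assembles a TrainingInfoProto whose initialization graph re-randomizes a weight tensor and binds it back to a model initializer. It uses only methods listed above, but the graph and tensor names ("init_graph", "algorithm_graph", "W_new", "W_step", "W") are hypothetical, and it assumes the generated OnnxMl classes and the protobuf runtime are on the classpath.

```java
// Hypothetical graphs: "init_graph" is assumed to expose an output "W_new"
// (e.g. produced by a RandomNormal node) that should overwrite the
// initializer "W" in ModelProto.graph.initializer, and "algorithm_graph"
// is assumed to expose an updated weight "W_step".
OnnxMl.GraphProto initGraph = OnnxMl.GraphProto.newBuilder()
    .setName("init_graph")
    .build();
OnnxMl.GraphProto algorithmGraph = OnnxMl.GraphProto.newBuilder()
    .setName("algorithm_graph")
    .build();

OnnxMl.TrainingInfoProto trainingInfo = OnnxMl.TrainingInfoProto.newBuilder()
    .setInitialization(initGraph)
    .setAlgorithm(algorithmGraph)
    // Bind the initialization output "W_new" to the initializer "W",
    // so running "initialization" resets "W".
    .addInitializationBinding(OnnxMl.StringStringEntryProto.newBuilder()
        .setKey("W_new")
        .setValue("W"))
    // Bind the algorithm output "W_step" back to "W", so each execution
    // of the training step overwrites the initializer.
    .addUpdateBinding(OnnxMl.StringStringEntryProto.newBuilder()
        .setKey("W_step")
        .setValue("W"))
    .build();
```

Each StringStringEntryProto maps a graph output name (key) to an initializer name (value), matching the "initialization_binding" and "update_binding" semantics described in this class's documentation.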
Methods inherited from class com.google.protobuf.GeneratedMessageV3.Builder: getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof, internalGetMapField, internalGetMutableMapField, isClean, markClean, newBuilderForField, onBuilt, onChanged, setUnknownFieldsProto3

Methods inherited from class com.google.protobuf.AbstractMessage.Builder: findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeDelimitedFrom, mergeDelimitedFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, mergeFrom, newUninitializedMessageException, toString

Methods inherited from class com.google.protobuf.AbstractMessageLite.Builder: addAll, addAll, mergeFrom, newUninitializedMessageException

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Overrides: internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto.Builder clear()
Specified by: clear in interface com.google.protobuf.Message.Builder
Specified by: clear in interface com.google.protobuf.MessageLite.Builder
Overrides: clear in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
Specified by: getDescriptorForType in interface com.google.protobuf.Message.Builder
Specified by: getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
Overrides: getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto getDefaultInstanceForType()
Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder

public OnnxMl.TrainingInfoProto build()
Specified by: build in interface com.google.protobuf.Message.Builder
Specified by: build in interface com.google.protobuf.MessageLite.Builder

public OnnxMl.TrainingInfoProto buildPartial()
Specified by: buildPartial in interface com.google.protobuf.Message.Builder
Specified by: buildPartial in interface com.google.protobuf.MessageLite.Builder

public OnnxMl.TrainingInfoProto.Builder clone()
Specified by: clone in interface com.google.protobuf.Message.Builder
Specified by: clone in interface com.google.protobuf.MessageLite.Builder
Overrides: clone in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, java.lang.Object value)
Specified by: setField in interface com.google.protobuf.Message.Builder
Overrides: setField in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
Specified by: clearField in interface com.google.protobuf.Message.Builder
Overrides: clearField in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
Specified by: clearOneof in interface com.google.protobuf.Message.Builder
Overrides: clearOneof in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, java.lang.Object value)
Specified by: setRepeatedField in interface com.google.protobuf.Message.Builder
Overrides: setRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, java.lang.Object value)
Specified by: addRepeatedField in interface com.google.protobuf.Message.Builder
Overrides: addRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto.Builder mergeFrom(com.google.protobuf.Message other)
Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto.Builder mergeFrom(OnnxMl.TrainingInfoProto other)

public final boolean isInitialized()
Specified by: isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
Overrides: isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>

public OnnxMl.TrainingInfoProto.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException
Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
Specified by: mergeFrom in interface com.google.protobuf.MessageLite.Builder
Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<OnnxMl.TrainingInfoProto.Builder>
Throws: java.io.IOException

public boolean hasInitialization()
This field describes a graph to compute the initial tensors upon starting the training process. The initialization graph has no inputs and can have multiple outputs. Usually, trainable tensors in neural networks are randomly initialized. To achieve that, for each tensor the user can put a random-number operator such as RandomNormal or RandomUniform in TrainingInfoProto.initialization.node and assign its random output to the specific tensor using "initialization_binding". This graph can also set the initializers in "algorithm" in the same TrainingInfoProto; one use case is resetting the training iteration count to zero. By default, this field is an empty graph and its evaluation does not produce any output, so no initializer is changed by default.
optional .onnx.GraphProto initialization = 1;
Specified by: hasInitialization in interface OnnxMl.TrainingInfoProtoOrBuilder

public OnnxMl.GraphProto getInitialization()
This field describes a graph to compute the initial tensors upon starting the training process. The initialization graph has no inputs and can have multiple outputs. Usually, trainable tensors in neural networks are randomly initialized. To achieve that, for each tensor the user can put a random-number operator such as RandomNormal or RandomUniform in TrainingInfoProto.initialization.node and assign its random output to the specific tensor using "initialization_binding". This graph can also set the initializers in "algorithm" in the same TrainingInfoProto; one use case is resetting the training iteration count to zero. By default, this field is an empty graph and its evaluation does not produce any output, so no initializer is changed by default.
optional .onnx.GraphProto initialization = 1;
Specified by: getInitialization in interface OnnxMl.TrainingInfoProtoOrBuilder

public OnnxMl.TrainingInfoProto.Builder setInitialization(OnnxMl.GraphProto value)
This field describes a graph to compute the initial tensors upon starting the training process. The initialization graph has no inputs and can have multiple outputs. Usually, trainable tensors in neural networks are randomly initialized. To achieve that, for each tensor the user can put a random-number operator such as RandomNormal or RandomUniform in TrainingInfoProto.initialization.node and assign its random output to the specific tensor using "initialization_binding". This graph can also set the initializers in "algorithm" in the same TrainingInfoProto; one use case is resetting the training iteration count to zero. By default, this field is an empty graph and its evaluation does not produce any output, so no initializer is changed by default.
optional .onnx.GraphProto initialization = 1;

public OnnxMl.TrainingInfoProto.Builder setInitialization(OnnxMl.GraphProto.Builder builderForValue)
This field describes a graph to compute the initial tensors upon starting the training process. The initialization graph has no inputs and can have multiple outputs. Usually, trainable tensors in neural networks are randomly initialized. To achieve that, for each tensor the user can put a random-number operator such as RandomNormal or RandomUniform in TrainingInfoProto.initialization.node and assign its random output to the specific tensor using "initialization_binding". This graph can also set the initializers in "algorithm" in the same TrainingInfoProto; one use case is resetting the training iteration count to zero. By default, this field is an empty graph and its evaluation does not produce any output, so no initializer is changed by default.
optional .onnx.GraphProto initialization = 1;

public OnnxMl.TrainingInfoProto.Builder mergeInitialization(OnnxMl.GraphProto value)
This field describes a graph to compute the initial tensors upon starting the training process. The initialization graph has no inputs and can have multiple outputs. Usually, trainable tensors in neural networks are randomly initialized. To achieve that, for each tensor the user can put a random-number operator such as RandomNormal or RandomUniform in TrainingInfoProto.initialization.node and assign its random output to the specific tensor using "initialization_binding". This graph can also set the initializers in "algorithm" in the same TrainingInfoProto; one use case is resetting the training iteration count to zero. By default, this field is an empty graph and its evaluation does not produce any output, so no initializer is changed by default.
optional .onnx.GraphProto initialization = 1;

public OnnxMl.TrainingInfoProto.Builder clearInitialization()
This field describes a graph to compute the initial tensors upon starting the training process. The initialization graph has no inputs and can have multiple outputs. Usually, trainable tensors in neural networks are randomly initialized. To achieve that, for each tensor the user can put a random-number operator such as RandomNormal or RandomUniform in TrainingInfoProto.initialization.node and assign its random output to the specific tensor using "initialization_binding". This graph can also set the initializers in "algorithm" in the same TrainingInfoProto; one use case is resetting the training iteration count to zero. By default, this field is an empty graph and its evaluation does not produce any output, so no initializer is changed by default.
optional .onnx.GraphProto initialization = 1;

public OnnxMl.GraphProto.Builder getInitializationBuilder()
This field describes a graph to compute the initial tensors upon starting the training process. The initialization graph has no inputs and can have multiple outputs. Usually, trainable tensors in neural networks are randomly initialized. To achieve that, for each tensor the user can put a random-number operator such as RandomNormal or RandomUniform in TrainingInfoProto.initialization.node and assign its random output to the specific tensor using "initialization_binding". This graph can also set the initializers in "algorithm" in the same TrainingInfoProto; one use case is resetting the training iteration count to zero. By default, this field is an empty graph and its evaluation does not produce any output, so no initializer is changed by default.
optional .onnx.GraphProto initialization = 1;

public OnnxMl.GraphProtoOrBuilder getInitializationOrBuilder()
This field describes a graph to compute the initial tensors upon starting the training process. The initialization graph has no inputs and can have multiple outputs. Usually, trainable tensors in neural networks are randomly initialized. To achieve that, for each tensor the user can put a random-number operator such as RandomNormal or RandomUniform in TrainingInfoProto.initialization.node and assign its random output to the specific tensor using "initialization_binding". This graph can also set the initializers in "algorithm" in the same TrainingInfoProto; one use case is resetting the training iteration count to zero. By default, this field is an empty graph and its evaluation does not produce any output, so no initializer is changed by default.
optional .onnx.GraphProto initialization = 1;
Specified by: getInitializationOrBuilder in interface OnnxMl.TrainingInfoProtoOrBuilder

public boolean hasAlgorithm()
This field represents a training algorithm step. Given the required inputs, it computes outputs that update the initializers in its own or the inference graph's initializer list. In general, this field contains a loss node, a gradient node, an optimizer node, and an increment of the iteration count.
An execution of the training algorithm step is performed by executing the graph obtained by combining the inference graph (namely "ModelProto.graph") and the "algorithm" graph. That is, the actual input/initializer/output/node/value_info/sparse_initializer list of the training graph is the concatenation of "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer" and "algorithm.input/initializer/output/node/value_info/sparse_initializer" in that order. This combined graph must satisfy the normal ONNX conditions.
To visualize the combination, let the inference graph (i.e., "ModelProto.graph") be
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
and the "algorithm" graph be
tensor_d -> Add -> tensor_e
The combination then results in
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
Note that an input of a node in the "algorithm" graph may reference the output of a node in the inference graph (but not the other way round), and an inference node cannot reference inputs of "algorithm". With these restrictions, the inference graph can always be run independently of the training information.
By default, this field is an empty graph and its evaluation does not produce any output; evaluating the default training step never updates any initializers.
optional .onnx.GraphProto algorithm = 2;
Specified by: hasAlgorithm in interface OnnxMl.TrainingInfoProtoOrBuilder

public OnnxMl.GraphProto getAlgorithm()
This field represents a training algorithm step. Given the required inputs, it computes outputs that update the initializers in its own or the inference graph's initializer list. In general, this field contains a loss node, a gradient node, an optimizer node, and an increment of the iteration count.
An execution of the training algorithm step is performed by executing the graph obtained by combining the inference graph (namely "ModelProto.graph") and the "algorithm" graph. That is, the actual input/initializer/output/node/value_info/sparse_initializer list of the training graph is the concatenation of "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer" and "algorithm.input/initializer/output/node/value_info/sparse_initializer" in that order. This combined graph must satisfy the normal ONNX conditions.
To visualize the combination, let the inference graph (i.e., "ModelProto.graph") be
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
and the "algorithm" graph be
tensor_d -> Add -> tensor_e
The combination then results in
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
Note that an input of a node in the "algorithm" graph may reference the output of a node in the inference graph (but not the other way round), and an inference node cannot reference inputs of "algorithm". With these restrictions, the inference graph can always be run independently of the training information.
By default, this field is an empty graph and its evaluation does not produce any output; evaluating the default training step never updates any initializers.
optional .onnx.GraphProto algorithm = 2;
Specified by: getAlgorithm in interface OnnxMl.TrainingInfoProtoOrBuilder

public OnnxMl.TrainingInfoProto.Builder setAlgorithm(OnnxMl.GraphProto value)
This field represents a training algorithm step. Given the required inputs, it computes outputs that update the initializers in its own or the inference graph's initializer list. In general, this field contains a loss node, a gradient node, an optimizer node, and an increment of the iteration count.
An execution of the training algorithm step is performed by executing the graph obtained by combining the inference graph (namely "ModelProto.graph") and the "algorithm" graph. That is, the actual input/initializer/output/node/value_info/sparse_initializer list of the training graph is the concatenation of "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer" and "algorithm.input/initializer/output/node/value_info/sparse_initializer" in that order. This combined graph must satisfy the normal ONNX conditions.
To visualize the combination, let the inference graph (i.e., "ModelProto.graph") be
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
and the "algorithm" graph be
tensor_d -> Add -> tensor_e
The combination then results in
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
Note that an input of a node in the "algorithm" graph may reference the output of a node in the inference graph (but not the other way round), and an inference node cannot reference inputs of "algorithm". With these restrictions, the inference graph can always be run independently of the training information.
By default, this field is an empty graph and its evaluation does not produce any output; evaluating the default training step never updates any initializers.
optional .onnx.GraphProto algorithm = 2;

public OnnxMl.TrainingInfoProto.Builder setAlgorithm(OnnxMl.GraphProto.Builder builderForValue)
This field represents a training algorithm step. Given the required inputs, it computes outputs that update the initializers in its own or the inference graph's initializer list. In general, this field contains a loss node, a gradient node, an optimizer node, and an increment of the iteration count.
An execution of the training algorithm step is performed by executing the graph obtained by combining the inference graph (namely "ModelProto.graph") and the "algorithm" graph. That is, the actual input/initializer/output/node/value_info/sparse_initializer list of the training graph is the concatenation of "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer" and "algorithm.input/initializer/output/node/value_info/sparse_initializer" in that order. This combined graph must satisfy the normal ONNX conditions.
To visualize the combination, let the inference graph (i.e., "ModelProto.graph") be
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
and the "algorithm" graph be
tensor_d -> Add -> tensor_e
The combination then results in
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
Note that an input of a node in the "algorithm" graph may reference the output of a node in the inference graph (but not the other way round), and an inference node cannot reference inputs of "algorithm". With these restrictions, the inference graph can always be run independently of the training information.
By default, this field is an empty graph and its evaluation does not produce any output; evaluating the default training step never updates any initializers.
optional .onnx.GraphProto algorithm = 2;

public OnnxMl.TrainingInfoProto.Builder mergeAlgorithm(OnnxMl.GraphProto value)
This field represents a training algorithm step. Given the required inputs, it computes outputs that update the initializers in its own or the inference graph's initializer list. In general, this field contains a loss node, a gradient node, an optimizer node, and an increment of the iteration count.
An execution of the training algorithm step is performed by executing the graph obtained by combining the inference graph (namely "ModelProto.graph") and the "algorithm" graph. That is, the actual input/initializer/output/node/value_info/sparse_initializer list of the training graph is the concatenation of "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer" and "algorithm.input/initializer/output/node/value_info/sparse_initializer" in that order. This combined graph must satisfy the normal ONNX conditions.
To visualize the combination, let the inference graph (i.e., "ModelProto.graph") be
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
and the "algorithm" graph be
tensor_d -> Add -> tensor_e
The combination then results in
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
Note that an input of a node in the "algorithm" graph may reference the output of a node in the inference graph (but not the other way round), and an inference node cannot reference inputs of "algorithm". With these restrictions, the inference graph can always be run independently of the training information.
By default, this field is an empty graph and its evaluation does not produce any output; evaluating the default training step never updates any initializers.
optional .onnx.GraphProto algorithm = 2;

public OnnxMl.TrainingInfoProto.Builder clearAlgorithm()
This field represents a training algorithm step. Given required inputs,
it computes outputs to update initializers in its own or the inference graph's
initializer lists. In general, this field contains the loss node, gradient node,
optimizer node, and an increment of the iteration count.
An execution of the training algorithm step is performed by executing the
graph obtained by combining the inference graph (namely "ModelProto.graph")
and the "algorithm" graph. That is, the actual
input/initializer/output/node/value_info/sparse_initializer list of
the training graph is the concatenation of
"ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
in that order. This combined graph must satisfy the normal ONNX conditions.
Now, let's provide a visualization of graph combination for clarity.
Let the inference graph (i.e., "ModelProto.graph") be
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
and the "algorithm" graph be
tensor_d -> Add -> tensor_e
The combination process results in
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
Notice that an input of a node in the "algorithm" graph may reference the
output of a node in the inference graph (but not the other way round). Also, inference
nodes cannot reference inputs of "algorithm". With these restrictions, the inference
graph can always be run independently, without training information.
By default, this field is an empty graph and its evaluation does not
produce any output. Evaluating the default training step never
updates any initializers.
optional .onnx.GraphProto algorithm = 2;public OnnxMl.GraphProto.Builder getAlgorithmBuilder()
This field represents a training algorithm step. Given required inputs,
it computes outputs to update initializers in its own or the inference graph's
initializer lists. In general, this field contains the loss node, gradient node,
optimizer node, and an increment of the iteration count.
An execution of the training algorithm step is performed by executing the
graph obtained by combining the inference graph (namely "ModelProto.graph")
and the "algorithm" graph. That is, the actual
input/initializer/output/node/value_info/sparse_initializer list of
the training graph is the concatenation of
"ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
in that order. This combined graph must satisfy the normal ONNX conditions.
Now, let's provide a visualization of graph combination for clarity.
Let the inference graph (i.e., "ModelProto.graph") be
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
and the "algorithm" graph be
tensor_d -> Add -> tensor_e
The combination process results in
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
Notice that an input of a node in the "algorithm" graph may reference the
output of a node in the inference graph (but not the other way round). Also, inference
nodes cannot reference inputs of "algorithm". With these restrictions, the inference
graph can always be run independently, without training information.
By default, this field is an empty graph and its evaluation does not
produce any output. Evaluating the default training step never
updates any initializers.
optional .onnx.GraphProto algorithm = 2;public OnnxMl.GraphProtoOrBuilder getAlgorithmOrBuilder()
This field represents a training algorithm step. Given required inputs,
it computes outputs to update initializers in its own or the inference graph's
initializer lists. In general, this field contains the loss node, gradient node,
optimizer node, and an increment of the iteration count.
An execution of the training algorithm step is performed by executing the
graph obtained by combining the inference graph (namely "ModelProto.graph")
and the "algorithm" graph. That is, the actual
input/initializer/output/node/value_info/sparse_initializer list of
the training graph is the concatenation of
"ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
in that order. This combined graph must satisfy the normal ONNX conditions.
Now, let's provide a visualization of graph combination for clarity.
Let the inference graph (i.e., "ModelProto.graph") be
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
and the "algorithm" graph be
tensor_d -> Add -> tensor_e
The combination process results in
tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
Notice that an input of a node in the "algorithm" graph may reference the
output of a node in the inference graph (but not the other way round). Also, inference
nodes cannot reference inputs of "algorithm". With these restrictions, the inference
graph can always be run independently, without training information.
By default, this field is an empty graph and its evaluation does not
produce any output. Evaluating the default training step never
updates any initializers.
optional .onnx.GraphProto algorithm = 2;getAlgorithmOrBuilder in interface OnnxMl.TrainingInfoProtoOrBuilderpublic java.util.List<OnnxMl.StringStringEntryProto> getInitializationBindingList()
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
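As a sketch (assuming the generated OnnxMl classes from this package are on the classpath; the names "W" and "new_W" are invented for illustration), an initialization binding that overwrites an initializer with an output of the "initialization" graph could be added like this:

```java
// Hypothetical sketch: bind the output "new_W" of the "initialization"
// graph to the initializer named "W" in ModelProto.graph.initializer.
OnnxMl.StringStringEntryProto binding = OnnxMl.StringStringEntryProto.newBuilder()
        .setKey("W")        // name of the initializer to overwrite
        .setValue("new_W")  // output of "initialization" providing the new value
        .build();

OnnxMl.TrainingInfoProto.Builder builder = OnnxMl.TrainingInfoProto.newBuilder()
        .addInitializationBinding(binding);
```

The key/value convention mirrors "update_binding": the key names the target initializer, the value names the graph output assigned to it.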
repeated .onnx.StringStringEntryProto initialization_binding = 3;getInitializationBindingList in interface OnnxMl.TrainingInfoProtoOrBuilderpublic int getInitializationBindingCount()
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;getInitializationBindingCount in interface OnnxMl.TrainingInfoProtoOrBuilderpublic OnnxMl.StringStringEntryProto getInitializationBinding(int index)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;getInitializationBinding in interface OnnxMl.TrainingInfoProtoOrBuilderpublic OnnxMl.TrainingInfoProto.Builder setInitializationBinding(int index, OnnxMl.StringStringEntryProto value)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.TrainingInfoProto.Builder setInitializationBinding(int index, OnnxMl.StringStringEntryProto.Builder builderForValue)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.TrainingInfoProto.Builder addInitializationBinding(OnnxMl.StringStringEntryProto value)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.TrainingInfoProto.Builder addInitializationBinding(int index, OnnxMl.StringStringEntryProto value)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.TrainingInfoProto.Builder addInitializationBinding(OnnxMl.StringStringEntryProto.Builder builderForValue)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.TrainingInfoProto.Builder addInitializationBinding(int index, OnnxMl.StringStringEntryProto.Builder builderForValue)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.TrainingInfoProto.Builder addAllInitializationBinding(java.lang.Iterable<? extends OnnxMl.StringStringEntryProto> values)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.TrainingInfoProto.Builder clearInitializationBinding()
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.TrainingInfoProto.Builder removeInitializationBinding(int index)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.StringStringEntryProto.Builder getInitializationBindingBuilder(int index)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.StringStringEntryProtoOrBuilder getInitializationBindingOrBuilder(int index)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;getInitializationBindingOrBuilder in interface OnnxMl.TrainingInfoProtoOrBuilderpublic java.util.List<? extends OnnxMl.StringStringEntryProtoOrBuilder> getInitializationBindingOrBuilderList()
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;getInitializationBindingOrBuilderList in interface OnnxMl.TrainingInfoProtoOrBuilderpublic OnnxMl.StringStringEntryProto.Builder addInitializationBindingBuilder()
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public OnnxMl.StringStringEntryProto.Builder addInitializationBindingBuilder(int index)
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public java.util.List<OnnxMl.StringStringEntryProto.Builder> getInitializationBindingBuilderList()
This field specifies the bindings from the outputs of "initialization" to some initializers in "ModelProto.graph.initializer" and the "algorithm.initializer" in the same TrainingInfoProto. See "update_binding" below for details. By default, this field is empty and no initializer would be changed by the execution of "initialization".
repeated .onnx.StringStringEntryProto initialization_binding = 3;public java.util.List<OnnxMl.StringStringEntryProto> getUpdateBindingList()
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and "g" is
the gradient of "x" with respect to a chosen loss. To avoid adding assignments
into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings, "x" (key of StringStringEntryProto)
and "y" (value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two
variables may have the same name. This ensures that each
variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer would be changed
by the execution of "algorithm".
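The "y = x - r * g; x = y" pattern above can be sketched with this builder (assuming the generated OnnxMl classes from this package are on the classpath; "W" and "W_new" are invented names standing in for "x" and "y"):

```java
// Hypothetical sketch: the algorithm graph is assumed to compute an
// output "W_new" (the "y"); this binding tells the runtime to assign
// it back to the mutable initializer "W" (the "x") after each step.
OnnxMl.TrainingInfoProto.Builder builder = OnnxMl.TrainingInfoProto.newBuilder()
        .addUpdateBinding(OnnxMl.StringStringEntryProto.newBuilder()
                .setKey("W")          // mutable initializer to update
                .setValue("W_new"));  // output of "algorithm" holding the new value
```

A model with several trainable tensors would add one such entry per tensor, keeping the keys unique across all "update_binding"s.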
repeated .onnx.StringStringEntryProto update_binding = 4;getUpdateBindingList in interface OnnxMl.TrainingInfoProtoOrBuilderpublic int getUpdateBindingCount()
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and "g" is
the gradient of "x" with respect to a chosen loss. To avoid adding assignments
into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings, "x" (key of StringStringEntryProto)
and "y" (value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two
variables may have the same name. This ensures that each
variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer would be changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;getUpdateBindingCount in interface OnnxMl.TrainingInfoProtoOrBuilderpublic OnnxMl.StringStringEntryProto getUpdateBinding(int index)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and "g" is
the gradient of "x" with respect to a chosen loss. To avoid adding assignments
into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings, "x" (key of StringStringEntryProto)
and "y" (value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two
variables may have the same name. This ensures that each
variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer would be changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;getUpdateBinding in interface OnnxMl.TrainingInfoProtoOrBuilderpublic OnnxMl.TrainingInfoProto.Builder setUpdateBinding(int index, OnnxMl.StringStringEntryProto value)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and "g" is
the gradient of "x" with respect to a chosen loss. To avoid adding assignments
into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings, "x" (key of StringStringEntryProto)
and "y" (value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two
variables may have the same name. This ensures that each
variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer would be changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;public OnnxMl.TrainingInfoProto.Builder setUpdateBinding(int index, OnnxMl.StringStringEntryProto.Builder builderForValue)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and "g" is
the gradient of "x" with respect to a chosen loss. To avoid adding assignments
into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings, "x" (key of StringStringEntryProto)
and "y" (value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two
variables may have the same name. This ensures that each
variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer would be changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;public OnnxMl.TrainingInfoProto.Builder addUpdateBinding(OnnxMl.StringStringEntryProto value)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and "g" is
the gradient of "x" with respect to a chosen loss. To avoid adding assignments
into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings, "x" (key of StringStringEntryProto)
and "y" (value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two
variables may have the same name. This ensures that each
variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer would be changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;public OnnxMl.TrainingInfoProto.Builder addUpdateBinding(int index, OnnxMl.StringStringEntryProto value)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and "g" is
the gradient of "x" with respect to a chosen loss. To avoid adding assignments
into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings, "x" (key of StringStringEntryProto)
and "y" (value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two
variables may have the same name. This ensures that each
variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer would be changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;public OnnxMl.TrainingInfoProto.Builder addUpdateBinding(OnnxMl.StringStringEntryProto.Builder builderForValue)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and "g" is
the gradient of "x" with respect to a chosen loss. To avoid adding assignments
into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings, "x" (key of StringStringEntryProto)
and "y" (value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two
variables may have the same name. This ensures that each
variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer would be changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;public OnnxMl.TrainingInfoProto.Builder addUpdateBinding(int index, OnnxMl.StringStringEntryProto.Builder builderForValue)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and "g" is
the gradient of "x" with respect to a chosen loss. To avoid adding assignments
into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings, "x" (key of StringStringEntryProto)
and "y" (value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two
variables may have the same name. This ensures that each
variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer would be changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;public OnnxMl.TrainingInfoProto.Builder addAllUpdateBinding(java.lang.Iterable<? extends OnnxMl.StringStringEntryProto> values)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and
"g" is the gradient of "x" with respect to a chosen loss. To avoid adding
assignments into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings: "x" (the key of StringStringEntryProto)
and "y" (the value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two variables may
have the same name. This ensures that each variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by the
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer is changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;
public OnnxMl.TrainingInfoProto.Builder clearUpdateBinding()
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and
"g" is the gradient of "x" with respect to a chosen loss. To avoid adding
assignments into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings: "x" (the key of StringStringEntryProto)
and "y" (the value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two variables may
have the same name. This ensures that each variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by the
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer is changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;
public OnnxMl.TrainingInfoProto.Builder removeUpdateBinding(int index)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and
"g" is the gradient of "x" with respect to a chosen loss. To avoid adding
assignments into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings: "x" (the key of StringStringEntryProto)
and "y" (the value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two variables may
have the same name. This ensures that each variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by the
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer is changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;
public OnnxMl.StringStringEntryProto.Builder getUpdateBindingBuilder(int index)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and
"g" is the gradient of "x" with respect to a chosen loss. To avoid adding
assignments into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings: "x" (the key of StringStringEntryProto)
and "y" (the value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two variables may
have the same name. This ensures that each variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by the
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer is changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;
public OnnxMl.StringStringEntryProtoOrBuilder getUpdateBindingOrBuilder(int index)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and
"g" is the gradient of "x" with respect to a chosen loss. To avoid adding
assignments into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings: "x" (the key of StringStringEntryProto)
and "y" (the value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two variables may
have the same name. This ensures that each variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by the
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer is changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;
getUpdateBindingOrBuilder in interface OnnxMl.TrainingInfoProtoOrBuilder
public java.util.List<? extends OnnxMl.StringStringEntryProtoOrBuilder> getUpdateBindingOrBuilderList()
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and
"g" is the gradient of "x" with respect to a chosen loss. To avoid adding
assignments into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings: "x" (the key of StringStringEntryProto)
and "y" (the value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two variables may
have the same name. This ensures that each variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by the
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer is changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;
getUpdateBindingOrBuilderList in interface OnnxMl.TrainingInfoProtoOrBuilder
public OnnxMl.StringStringEntryProto.Builder addUpdateBindingBuilder()
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and
"g" is the gradient of "x" with respect to a chosen loss. To avoid adding
assignments into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings: "x" (the key of StringStringEntryProto)
and "y" (the value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two variables may
have the same name. This ensures that each variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by the
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer is changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;
public OnnxMl.StringStringEntryProto.Builder addUpdateBindingBuilder(int index)
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and
"g" is the gradient of "x" with respect to a chosen loss. To avoid adding
assignments into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings: "x" (the key of StringStringEntryProto)
and "y" (the value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two variables may
have the same name. This ensures that each variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by the
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer is changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;
public java.util.List<OnnxMl.StringStringEntryProto.Builder> getUpdateBindingBuilderList()
Gradient-based training is usually an iterative procedure. In one gradient
descent iteration, we apply
x = x - r * g
where "x" is the optimized tensor, "r" stands for the learning rate, and
"g" is the gradient of "x" with respect to a chosen loss. To avoid adding
assignments into the training graph, we split the update equation into
y = x - r * g
x = y
The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
indicate that "y" should be assigned to "x", the field "update_binding" may
contain a key-value pair of strings: "x" (the key of StringStringEntryProto)
and "y" (the value of StringStringEntryProto).
For a neural network with multiple trainable (mutable) tensors, there can
be multiple key-value pairs in "update_binding".
The initializers that appear as keys in "update_binding" are considered
mutable variables. This implies the behaviors described below.
1. Keys are unique across all "update_binding"s, so no two variables may
have the same name. This ensures that each variable is assigned at most once.
2. The keys must appear among the names in "ModelProto.graph.initializer" or
"TrainingInfoProto.algorithm.initializer".
3. The values must be output names of "algorithm" or "ModelProto.graph.output".
4. Mutable variables are initialized to the value specified by the
corresponding initializer, and then potentially updated by the
"initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
This field usually contains the names of trainable tensors
(in ModelProto.graph), optimizer states such as momentums in advanced
stochastic gradient methods (in TrainingInfoProto.algorithm),
and the number of training iterations (in TrainingInfoProto.algorithm).
By default, this field is empty and no initializer is changed
by the execution of "algorithm".
repeated .onnx.StringStringEntryProto update_binding = 4;
public final OnnxMl.TrainingInfoProto.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
setUnknownFields in interface com.google.protobuf.Message.Builder
setUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
public final OnnxMl.TrainingInfoProto.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
mergeUnknownFields in interface com.google.protobuf.Message.Builder
mergeUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
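The binding described in the method documentation above is populated through the builder API on this page. A minimal sketch of wiring algorithm output "y" back onto initializer "x", assuming the generated OnnxMl classes are on the classpath and that `algorithmGraph` is a hypothetical, previously built GraphProto computing y = x - r * g:

```java
// Sketch only: one update_binding entry makes a run of "algorithm"
// effectively perform the assignment x = y.
OnnxMl.TrainingInfoProto training =
    OnnxMl.TrainingInfoProto.newBuilder()
        .setAlgorithm(algorithmGraph) // hypothetical GraphProto: y = x - r * g
        .addUpdateBinding(
            OnnxMl.StringStringEntryProto.newBuilder()
                .setKey("x")    // mutable initializer to overwrite
                .setValue("y")  // output of "algorithm" assigned to "x"
                .build())
        .build();
```

A second trainable tensor would simply get a second `addUpdateBinding` entry; rule 1 above only forbids reusing the same key.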
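The runtime semantics of "update_binding" (keys name mutable initializers, values name algorithm outputs, each variable assigned at most once per step) can be illustrated with a self-contained sketch. `UpdateBindingApplier` is a hypothetical helper, not part of the OnnxMl API, and tensors are stood in for by doubles:

```java
import java.util.HashMap;
import java.util.Map;

public final class UpdateBindingApplier {

    // state:   mutable initializers by name (e.g. {"x": current value})
    // binding: update_binding pairs; key = initializer name, value = output name
    // outputs: outputs produced by one execution of "algorithm"
    public static void apply(Map<String, Double> state,
                             Map<String, String> binding,
                             Map<String, Double> outputs) {
        for (Map.Entry<String, String> e : binding.entrySet()) {
            String initializer = e.getKey();
            String outputName = e.getValue();
            // Rule 2: keys must name existing initializers.
            if (!state.containsKey(initializer)) {
                throw new IllegalArgumentException(
                    "key must name an existing initializer: " + initializer);
            }
            // Map keys are unique, so each variable is assigned at most
            // once per step, matching rule 1 above.
            state.put(initializer, outputs.get(outputName));
        }
    }

    public static void main(String[] args) {
        // One SGD step where "algorithm" computed y = x - r * g.
        Map<String, Double> state = new HashMap<>();
        state.put("x", 1.0);
        Map<String, Double> outputs = Map.of("y", 0.9); // 1.0 - 0.1 * 1.0
        apply(state, Map.of("x", "y"), outputs);
        System.out.println(state.get("x")); // prints 0.9
    }
}
```

With an empty binding map the state is left untouched, mirroring the documented default that an empty "update_binding" changes no initializer.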