class Rank[T] extends Operation[Tensor[_], Tensor[Int], T]
Inheritance hierarchy:
- Rank
- Operation
- AbstractModule
- InferShape
- Serializable
- Serializable
- AnyRef
- Any
Instance Constructors
- new Rank()(implicit arg0: ClassTag[T], ev: TensorNumeric[T])
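A minimal usage sketch: `Rank` outputs a `Tensor[Int]` holding the number of dimensions of its input tensor. The imports assume the usual BigDL package layout; the exact output for the sample input is illustrative.

```scala
import com.intel.analytics.bigdl.nn.ops.Rank
import com.intel.analytics.bigdl.tensor.Tensor

// Rank is an Operation: forward-only, no trainable parameters.
val rank  = new Rank[Float]()              // T selects the TensorNumeric, not the output type
val input = Tensor[Float](2, 3, 4).rand()  // a 3-dimensional tensor
val out   = rank.forward(input)            // Tensor[Int] holding the rank, 3 for this input
```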
Value Members
- final def !=(arg0: scala.Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##(): Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: scala.Any): Boolean
- Definition Classes
- AnyRef → Any
- def accGradParameters(input: Tensor[_], gradOutput: Tensor[Int]): Unit
Computes the gradient of the module with respect to its own parameters. Many modules do not perform this step because they have no parameters. The state variable names for the parameters are module dependent. The module is expected to accumulate the gradients with respect to its parameters in some variable.
- Definition Classes
- AbstractModule
- def apply(name: String): Option[AbstractModule[Activity, Activity, T]]
Find a module with the given name. If no module has that name, None is returned. If multiple modules share the name, an exception is thrown.
- Definition Classes
- AbstractModule
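The find-by-name lookup above can be sketched as follows; the layer names `fc1`/`fc2` and the `Sequential`/`Linear` composition are illustrative assumptions:

```scala
import com.intel.analytics.bigdl.nn.{Linear, Sequential}

val model = Sequential[Float]()
  .add(Linear[Float](10, 5).setName("fc1"))
  .add(Linear[Float](5, 2).setName("fc2"))

// apply(name) returns Option; None when the name is absent,
// and throws if several modules share the name.
model("fc1") match {
  case Some(layer) => println(s"found: ${layer.getName()}")
  case None        => println("no such layer")
}
```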
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- final def backward(input: Tensor[_], gradOutput: Tensor[Int]): Tensor[_]
Performs a back-propagation step through the module with respect to the given input. In general this method assumes forward(input) has been called before with the same input, for optimization reasons; if you do not respect this rule, backward() will compute incorrect gradients.
- input
input data
- gradOutput
gradient of next layer
- returns
gradient corresponding to input data
- Definition Classes
- Operation → AbstractModule
- var backwardTime: Long
- Attributes
- protected
- Definition Classes
- AbstractModule
- def clearState(): Rank.this.type
Clear cached activities to save storage space or network bandwidth. Note that Tensor.set is used to preserve information such as tensor sharing.
A subclass that allocates extra resources should override this method and call super.clearState in the override.
- Definition Classes
- AbstractModule
- final def clone(deepCopy: Boolean): AbstractModule[Tensor[_], Tensor[Int], T]
Clone the module, as a deep or shallow copy.
- Definition Classes
- AbstractModule
- def clone(): AnyRef
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
- final def cloneModule(): Rank.this.type
Clone the model.
- Definition Classes
- AbstractModule
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(other: scala.Any): Boolean
- Definition Classes
- AbstractModule → AnyRef → Any
- final def evaluate(dataSet: LocalDataSet[MiniBatch[T]], vMethods: Array[_ <: ValidationMethod[T]]): Array[(ValidationResult, ValidationMethod[T])]
Use ValidationMethod to evaluate the module on the given local dataset.
- Definition Classes
- AbstractModule
- final def evaluate(dataset: RDD[MiniBatch[T]], vMethods: Array[_ <: ValidationMethod[T]]): Array[(ValidationResult, ValidationMethod[T])]
Use ValidationMethod to evaluate the module on the given RDD dataset.
- Definition Classes
- AbstractModule
- final def evaluate(dataset: RDD[Sample[T]], vMethods: Array[_ <: ValidationMethod[T]], batchSize: Option[Int] = None): Array[(ValidationResult, ValidationMethod[T])]
Use ValidationMethod to evaluate the module on the given RDD dataset.
- dataset
dataset for test
- vMethods
validation methods
- batchSize
total batch size across all partitions; optional, defaults to 4 * the number of dataset partitions
- Definition Classes
- AbstractModule
- def evaluate(): Rank.this.type
Set the module to evaluate mode.
- Definition Classes
- AbstractModule
- final def evaluateImage(imageFrame: ImageFrame, vMethods: Array[_ <: ValidationMethod[T]], batchSize: Option[Int] = None): Array[(ValidationResult, ValidationMethod[T])]
Use ValidationMethod to evaluate the module on the given ImageFrame.
- imageFrame
ImageFrame for validation
- vMethods
validation methods
- batchSize
total batch size of all partitions
- Definition Classes
- AbstractModule
- def finalize(): Unit
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
- final def forward(input: Tensor[_]): Tensor[Int]
Takes an input object and computes the corresponding output of the module. After a forward pass, the output state variable should have been updated to the new value.
- input
input data
- returns
output data
- Definition Classes
- AbstractModule
- var forwardTime: Long
- Attributes
- protected
- Definition Classes
- AbstractModule
- def freeze(names: String*): Rank.this.type
Freeze the module, i.e. its parameters (weight/bias, if they exist) are not changed during training. If names is not empty, only the layers matching the given names are frozen.
- names
an array of layer names
- returns
current graph model
- Definition Classes
- AbstractModule
- final def getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def getExtraParameter(): Array[Tensor[T]]
Get the extra parameters of this module. Extra parameters are trainable parameters besides weight and bias, such as runningMean and runningVar in BatchNormalization.
A subclass with parameters besides weight and bias should override this method.
- returns
an array of tensor
- Definition Classes
- AbstractModule
- final def getInputShape(): Shape
Return the inputShape for the current layer; the first dim is batch.
- Definition Classes
- InferShape
- final def getName(): String
Get the module name; the default name is className@namePostfix.
- Definition Classes
- AbstractModule
- final def getNumericType(): TensorDataType
Get the numeric type of the module parameters.
- Definition Classes
- AbstractModule
- final def getOutputShape(): Shape
Return the outputShape for the current layer; the first dim is batch.
- Definition Classes
- InferShape
- def getParametersTable(): Table
This function returns a table containing the module name, the parameter names and the parameter values of this module.
The result is structured as Table(ModuleName -> Table(ParameterName -> ParameterValue)), of type Table[String, Table[String, Tensor[T]]].
For example, to get the weight of a module named conv1: table[Table]("conv1")[Tensor[T]]("weight").
The parameter names follow this convention:
1. If there is one parameter, it is named "weight" and its gradient "gradWeight".
2. If there are two parameters, the first is named "weight" (gradient "gradWeight") and the second "bias" (gradient "gradBias").
3. If there are more parameters, each weight is named "weight" with a sequence number as suffix, and each gradient "gradient" with a sequence number as suffix.
Custom modules should override the default implementation of this function if the convention doesn't meet their requirements.
- returns
Table
- Definition Classes
- AbstractModule
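The lookup convention above can be sketched like this; `Rank` itself has no parameters, so the sketch uses a `Linear` layer named `conv1` (an illustrative name matching the example in the description):

```scala
import com.intel.analytics.bigdl.nn.Linear
import com.intel.analytics.bigdl.tensor.Tensor
import com.intel.analytics.bigdl.utils.Table

val conv1 = Linear[Float](4, 2).setName("conv1")
val params: Table = conv1.getParametersTable()

// One parameter pair, so the convention gives "weight"/"bias"
// plus "gradWeight"/"gradBias" under the module's name.
val weight = params[Table]("conv1")[Tensor[Float]]("weight")
```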
- final def getPrintName(): String
- Attributes
- protected
- Definition Classes
- AbstractModule
- final def getScaleB(): Double
Get the scale of gradientBias.
- Definition Classes
- AbstractModule
- final def getScaleW(): Double
Get the scale of gradientWeight.
- Definition Classes
- AbstractModule
- def getTimes(): Array[(AbstractModule[_ <: Activity, _ <: Activity, T], Long, Long)]
Get the forward/backward cost time for the module or its submodules.
- Definition Classes
- AbstractModule
- final def getTimesGroupByModuleType(): Array[(String, Long, Long)]
Get the forward/backward cost time for the module or its submodules, grouped by module type.
- returns
(module type name, forward time, backward time)
- Definition Classes
- AbstractModule
- final def getWeightsBias(): Array[Tensor[T]]
Get weight and bias for the module.
- var gradInput: Tensor[_]
The cached gradient of activities, so it isn't recomputed when needed.
- Definition Classes
- AbstractModule
- final def hasName: Boolean
Whether the user has set a name for the module.
- Definition Classes
- AbstractModule
- def hashCode(): Int
- Definition Classes
- AbstractModule → AnyRef → Any
- def inputs(first: (ModuleNode[T], Int), nodesWithIndex: (ModuleNode[T], Int)*): ModuleNode[T]
Build graph: some other modules point to the current module.
- first
distinguish from another inputs when input parameter list is empty
- nodesWithIndex
upstream module nodes and the output tensor index. The start index is 1.
- returns
node containing current module
- Definition Classes
- AbstractModule
- def inputs(nodes: Array[ModuleNode[T]]): ModuleNode[T]
Build graph: some other modules point to the current module.
- nodes
upstream module nodes in an array
- returns
node containing current module
- Definition Classes
- AbstractModule
- def inputs(nodes: ModuleNode[T]*): ModuleNode[T]
Build graph: some other modules point to the current module.
- nodes
upstream module nodes
- returns
node containing current module
- Definition Classes
- AbstractModule
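A small sketch of graph building with the inputs overloads above; the `Input`/`Graph`/`Linear`/`ReLU` composition is an illustrative assumption, not specific to Rank:

```scala
import com.intel.analytics.bigdl.nn.{Graph, Input, Linear, ReLU}

// Each call to inputs(...) wires upstream nodes into the current module
// and returns a node wrapping it.
val in    = Input[Float]()
val fc    = Linear[Float](10, 5).inputs(in)
val relu  = ReLU[Float]().inputs(fc)
val model = Graph[Float](in, relu)
```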
- var inputsFormats: Seq[Int]
- Attributes
- protected
- Definition Classes
- AbstractModule
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def isTraining(): Boolean
Check if the model is in training mode.
- Definition Classes
- AbstractModule
- var line: String
- Attributes
- protected
- Definition Classes
- AbstractModule
- final def loadModelWeights(srcModel: Module[Float], matchAll: Boolean = true): Rank.this.type
Copy weights from another model, mapping by layer name.
- srcModel
model to copy from
- matchAll
whether to match all layers' weights and bias
- returns
current module
- Definition Classes
- AbstractModule
- final def loadWeights(weightPath: String, matchAll: Boolean = true): Rank.this.type
Load pretrained weights and bias into the current module.
- weightPath
file to store weights and bias
- matchAll
whether to match all layers' weights and bias, if not, only load existing pretrained weights and bias
- returns
current module
- Definition Classes
- AbstractModule
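Save and restore of weights can be sketched as follows; the file path is a hypothetical example, and mapping happens by layer name, so both modules carry the same name:

```scala
import com.intel.analytics.bigdl.nn.Linear

val trained = Linear[Float](10, 5).setName("fc1")
val fresh   = Linear[Float](10, 5).setName("fc1")

// Hypothetical path; saveWeights writes what loadWeights can restore.
trained.saveWeights("/tmp/fc1.weights", overWrite = true)
fresh.loadWeights("/tmp/fc1.weights", matchAll = true)
```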
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- var output: Tensor[Int]
The cached output, so it isn't recomputed when needed.
- Definition Classes
- AbstractModule
- var outputsFormats: Seq[Int]
- Attributes
- protected
- Definition Classes
- AbstractModule
- def parameters(): (Array[Tensor[T]], Array[Tensor[T]])
This function returns two arrays: one for the weights and the other for the gradients. Custom modules with parameters should override this function.
- returns
(Array of weights, Array of grad)
- Definition Classes
- AbstractModule
- final def predict(dataset: RDD[Sample[T]], batchSize: Int = -1, shareBuffer: Boolean = false): RDD[Activity]
Module predict; returns the probability distribution.
- dataset
dataset for prediction
- batchSize
total batchSize for all partitions; if -1, defaults to 4 * the number of dataset partitions
- shareBuffer
whether to share same memory for each batch predict results
- Definition Classes
- AbstractModule
- final def predictClass(dataset: RDD[Sample[T]], batchSize: Int = -1): RDD[Int]
Module predict; returns the predicted label.
- dataset
dataset for prediction
- batchSize
total batchSize for all partitions; if -1, defaults to 4 * the number of dataset partitions
- Definition Classes
- AbstractModule
- final def predictImage(imageFrame: ImageFrame, outputLayer: String = null, shareBuffer: Boolean = false, batchPerPartition: Int = 4, predictKey: String = ImageFeature.predict, featurePaddingParam: Option[PaddingParam[T]] = None): ImageFrame
Model predicts images and returns an ImageFrame with the predicted tensors. If you want to call predictImage multiple times, it is recommended to use Predictor for a DistributedImageFrame or LocalPredictor for a LocalImageFrame.
- imageFrame
imageFrame that contains images
- outputLayer
if outputLayer is not null, the output of layer that matches outputLayer will be used as predicted output
- shareBuffer
whether to share same memory for each batch predict results
- batchPerPartition
batch size per partition, default is 4
- predictKey
key to store predicted result
- featurePaddingParam
featurePaddingParam if the inputs have variant size
- Definition Classes
- AbstractModule
- def processInputs(first: (ModuleNode[T], Int), nodesWithIndex: (ModuleNode[T], Int)*): ModuleNode[T]
- Attributes
- protected
- Definition Classes
- AbstractModule
- def processInputs(nodes: Seq[ModuleNode[T]]): ModuleNode[T]
- Attributes
- protected
- Definition Classes
- AbstractModule
- final def quantize(): Module[T]
Quantize this module, which reduces the precision of its parameters, gaining speed at a small accuracy cost.
- Definition Classes
- AbstractModule
- def release(): Unit
If the model contains native resources such as aligned memory, they should be released manually, since JVM GC can't release them reliably.
- Definition Classes
- AbstractModule
- def reset(): Unit
Reset module parameters, i.e. re-initialize them with the given initMethod.
- Definition Classes
- AbstractModule
- def resetTimes(): Unit
Reset the forward/backward recorded time for the module or its submodules.
- Definition Classes
- AbstractModule
- final def saveCaffe(prototxtPath: String, modelPath: String, useV2: Boolean = true, overwrite: Boolean = false): Rank.this.type
Save this module to path in Caffe-readable format.
- Definition Classes
- AbstractModule
- final def saveDefinition(path: String, overWrite: Boolean = false): Rank.this.type
Save this module definition to path.
- path
path to save the module; local file system, HDFS and Amazon S3 are supported. An HDFS path should look like "hdfs://[host]:[port]/xxx"; an Amazon S3 path should look like "s3a://bucket/xxx"
- overWrite
if overwrite
- returns
self
- Definition Classes
- AbstractModule
- final def saveModule(path: String, weightPath: String = null, overWrite: Boolean = false): Rank.this.type
Save this module to path in protobuf format.
- path
path to save the module; local file system, HDFS and Amazon S3 are supported. An HDFS path should look like "hdfs://[host]:[port]/xxx"; an Amazon S3 path should look like "s3a://bucket/xxx"
- weightPath
where to store weight
- overWrite
if overwrite
- returns
self
- Definition Classes
- AbstractModule
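A round-trip through the protobuf format can be sketched like this; the path is a hypothetical example, and `Module.loadModule` is the companion loader that pairs with saveModule:

```scala
import com.intel.analytics.bigdl.nn.{Linear, Module}

val model = Linear[Float](10, 5)
model.saveModule("/tmp/linear.model", overWrite = true)  // hypothetical path
val loaded = Module.loadModule[Float]("/tmp/linear.model")
```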
- final def saveTF(inputs: Seq[(String, Seq[Int])], path: String, byteOrder: ByteOrder = ByteOrder.LITTLE_ENDIAN, dataFormat: TensorflowDataFormat = TensorflowDataFormat.NHWC): Rank.this.type
Save this module to path in TensorFlow-readable format.
- Definition Classes
- AbstractModule
- final def saveTorch(path: String, overWrite: Boolean = false): Rank.this.type
Save this module to path in Torch7-readable format.
- Definition Classes
- AbstractModule
- final def saveWeights(path: String, overWrite: Boolean): Unit
Save weights and bias to a file.
- path
file to save
- overWrite
whether to overwrite or not
- Definition Classes
- AbstractModule
- var scaleB: Double
- Attributes
- protected
- Definition Classes
- AbstractModule
- var scaleW: Double
The scale of gradient weight and gradient bias before gradParameters is accumulated.
- Attributes
- protected
- Definition Classes
- AbstractModule
- def setExtraParameter(extraParam: Array[Tensor[T]]): Rank.this.type
Set extra parameters for this module. Extra parameters are trainable parameters besides weight and bias, such as runningMean and runningVar in BatchNormalization.
- returns
this
- Definition Classes
- AbstractModule
- def setInputFormats(formats: Seq[Int]): Rank.this.type
Set input formats for the graph.
- Definition Classes
- AbstractModule
- final def setLine(line: String): Rank.this.type
Set the line separator used when printing the module.
- Definition Classes
- AbstractModule
- final def setName(name: String): Rank.this.type
Set the module name.
- Definition Classes
- AbstractModule
- def setOutputFormats(formats: Seq[Int]): Rank.this.type
Set output formats for the graph.
- Definition Classes
- AbstractModule
- def setScaleB(b: Double): Rank.this.type
Set the scale of gradientBias.
- b
the value of the scale of gradientBias
- returns
this
- Definition Classes
- AbstractModule
- def setScaleW(w: Double): Rank.this.type
Set the scale of gradientWeight.
- w
the value of the scale of gradientWeight
- returns
this
- Definition Classes
- AbstractModule
- final def setWeightsBias(newWeights: Array[Tensor[T]]): Rank.this.type
Set weight and bias for the module.
- newWeights
array of weights and bias
- Definition Classes
- AbstractModule
- final def synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
- def toGraph(startNodes: ModuleNode[T]*): Graph[T]
Generate a graph module with start nodes.
- Definition Classes
- AbstractModule
- def toString(): String
- Definition Classes
- AbstractModule → AnyRef → Any
- var train: Boolean
Module status. It is useful for modules like dropout/batch normalization.
- Attributes
- protected
- Definition Classes
- AbstractModule
- def training(): Rank.this.type
Set the module to training mode.
- Definition Classes
- AbstractModule
- def unFreeze(names: String*): Rank.this.type
"Unfreeze" the module, i.e. make its parameters (weight/bias, if they exist) trainable (updated) during training. If names is not empty, only the layers matching the given names are unfrozen.
- names
array of module names to unFreeze
- Definition Classes
- AbstractModule
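Freezing and unfreezing by name can be sketched as follows; the model structure and the layer names `encoder`/`head` are illustrative assumptions:

```scala
import com.intel.analytics.bigdl.nn.{Linear, Sequential}

val model = Sequential[Float]()
  .add(Linear[Float](10, 5).setName("encoder"))
  .add(Linear[Float](5, 2).setName("head"))

model.freeze("encoder")   // "encoder" parameters stay fixed during training
model.unFreeze("encoder") // make them trainable again
```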
- final def updateGradInput(input: Tensor[_], gradOutput: Tensor[Int]): Tensor[_]
Computes the gradient of the module with respect to its own input. The result is returned in gradInput, and the gradInput state variable is updated accordingly.
- Definition Classes
- Operation → AbstractModule
- def updateOutput(input: Tensor[_]): Tensor[Int]
Computes the output using the current parameter set of the class and the input. The result is stored in, and returned from, the output field.
- Definition Classes
- Rank → AbstractModule
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
- def zeroGradParameters(): Unit
If the module has parameters, zero the accumulated gradients with respect to them; otherwise, do nothing.
- Definition Classes
- AbstractModule
Deprecated Value Members
- def save(path: String, overWrite: Boolean = false): Rank.this.type
Save this module to path.
- path
path to save the module; local file system, HDFS and Amazon S3 are supported. An HDFS path should look like "hdfs://[host]:[port]/xxx"; an Amazon S3 path should look like "s3a://bucket/xxx"
- overWrite
if overwrite
- returns
self
- Definition Classes
- AbstractModule
- Annotations
- @deprecated
- Deprecated
(Since version 0.3.0) please use the recommended saveModule(path, overWrite)