abstract class Optimizer[T, D] extends AnyRef
Optimizer is an abstract class used to automatically train a model with a given optimization algorithm.
- T
numeric type, which can be Float or Double
- D
the type of elements in DataSet, such as MiniBatch
- By Inheritance
- Optimizer
- AnyRef
- Any
Instance Constructors
-
new
Optimizer(model: Module[T], dataset: DataSet[D], criterion: Criterion[T])(implicit arg0: ClassTag[T], ev: TensorNumeric[T])
- model
the model to be trained
- dataset
the data set used to train a model
- criterion
the criterion used to evaluate the loss of the model given an input
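Optimizer itself is abstract; in practice an instance is usually obtained through the Optimizer companion object's factory methods rather than by subclassing. A minimal sketch, assuming a prepared `sampleRDD: RDD[Sample[Float]]` (the exact factory signature may vary between BigDL versions):

```scala
import com.intel.analytics.bigdl.nn.{Linear, MSECriterion, Sequential}
import com.intel.analytics.bigdl.optim.Optimizer

// A tiny regression model and its loss criterion.
val model = Sequential[Float]().add(Linear[Float](10, 1))
val criterion = MSECriterion[Float]()

// batchSize groups the Samples into MiniBatch elements,
// which become the D type parameter of the DataSet.
val optimizer = Optimizer(
  model = model,
  sampleRDD = sampleRDD,
  criterion = criterion,
  batchSize = 32
)
```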
Abstract Value Members
Concrete Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
-
var
checkSingleton: Boolean
- Attributes
- protected
-
var
checkpointPath: Option[String]
- Attributes
- protected
-
var
checkpointTrigger: Option[Trigger]
- Attributes
- protected
-
def
clone(): AnyRef
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
-
var
computeThresholdbatchSize: Int
- Attributes
- protected
-
var
criterion: Criterion[T]
- Attributes
- protected
-
var
dataset: DataSet[D]
- Attributes
- protected
-
def
disableGradientClipping(): Optimizer.this.type
Disable gradient clipping
-
var
dropPercentage: Double
- Attributes
- protected
-
var
endWhen: Trigger
- Attributes
- protected
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
finalize(): Unit
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
def
getCheckpointPath(): Option[String]
Get the directory where the checkpoint is saved
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
var
isOverWrite: Boolean
- Attributes
- protected
-
var
maxDropPercentage: Double
- Attributes
- protected
-
var
model: Module[T]
- Attributes
- protected
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
var
optimMethods: Map[String, OptimMethod[T]]
- Attributes
- protected
-
def
overWriteCheckpoint(): Optimizer.this.type
Enable overwriting when saving a checkpoint
-
var
parameterProcessors: ArrayBuffer[ParameterProcessor]
a list of ParameterProcessor; order matters
- Attributes
- protected
-
def
prepareInput(): Unit
-
def
reserveOptim(reserve: Boolean): Optimizer.this.type
-
def
setCheckpoint(path: String, trigger: Trigger): Optimizer.this.type
Set a checkpoint, saved at path and triggered by trigger
- path
the directory to save
- trigger
how often to save the check point
- returns
the optimizer
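For example, checkpointing at the end of every epoch can be sketched as follows (the directory path is hypothetical, and `optimizer` is assumed to be an existing Optimizer instance):

```scala
import com.intel.analytics.bigdl.optim.Trigger

optimizer
  .setCheckpoint("/tmp/bigdl-checkpoints", Trigger.everyEpoch)
  .overWriteCheckpoint() // let newer snapshots replace older ones
```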
-
def
setConstantGradientClipping(min: Double, max: Double): Optimizer.this.type
Set constant gradient clipping
- min
the minimum value to clip by
- max
the maximum value to clip by
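A sketch of enabling clipping on an existing optimizer instance; the threshold values are illustrative only:

```scala
// Clip each gradient element into [-0.5, 0.5] before the update step.
optimizer.setConstantGradientClipping(-0.5, 0.5)

// Alternatively, rescale the whole gradient by its L2 norm instead:
// optimizer.setGradientClippingByl2Norm(2.0)
```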
-
def
setCriterion(newCriterion: Criterion[T]): Optimizer.this.type
Set a new criterion to the optimizer
- newCriterion
new criterion
-
def
setDropModuleProperty(dropPercentage: Double, maxDropPercentage: Double, batchsize: Int = 100, warmupIteration: Int = 200): Optimizer.this.type
Drop a certain percentage (dropPercentage) of models during distributed training to accelerate it, because some cached models may take too long
- dropPercentage
drop percentage
- maxDropPercentage
max drop percentage
- batchsize
batch size
- warmupIteration
how many iterations to warm up
- returns
this optimizer
-
def
setEndWhen(endWhen: Trigger): Optimizer.this.type
Set when to stop training, passed in as a Trigger
- endWhen
when to end
- returns
the optimizer
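Trigger provides several stopping conditions; a sketch assuming an existing optimizer instance:

```scala
import com.intel.analytics.bigdl.optim.Trigger

// Stop after 10 epochs...
optimizer.setEndWhen(Trigger.maxEpoch(10))

// ...or after a fixed number of iterations:
// optimizer.setEndWhen(Trigger.maxIteration(10000))
```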
-
def
setGradientClippingByl2Norm(l2NormThreshold: Double): Optimizer.this.type
Clip gradient to a maximum L2-norm
- l2NormThreshold
gradient L2-Norm threshold
-
def
setModel(newModel: Module[T]): Optimizer.this.type
Set a model to the optimizer. Note: if the current optimMethod in this optimizer is not a global optimMethod, setModel will throw an exception; use setModelAndOptimMethods instead.
- newModel
new model
-
def
setModelAndOptimMethods(newModel: Module[T], newOptimMethods: Map[String, OptimMethod[T]]): Optimizer.this.type
Set new model and new optimMethods to the optimizer.
- newModel
new model
- newOptimMethods
new optimMethods
-
def
setOptimMethod(method: OptimMethod[T]): Optimizer.this.type
Set an optimization method
- method
optimization method
-
def
setOptimMethods(method: Map[String, OptimMethod[T]]): Optimizer.this.type
Set optimization methods for each submodule.
- method
A mapping of submodule -> OptimMethod
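A sketch of assigning different optimization methods to named submodules; the submodule names "features" and "classifier" are hypothetical and must match names in your own model:

```scala
import com.intel.analytics.bigdl.optim.{Adam, SGD}

optimizer.setOptimMethods(Map(
  "features"   -> new SGD[Float](learningRate = 0.01),
  "classifier" -> new Adam[Float](learningRate = 0.001)
))
```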
-
def
setState(state: Table): Optimizer.this.type
Set a state (learning rate, epochs, ...) to the optimizer
- state
the state to be saved
-
def
setTrainData(sampleRDD: RDD[Sample[T]], batchSize: Int, featurePaddingParam: PaddingParam[T] = null, labelPaddingParam: PaddingParam[T] = null): Optimizer.this.type
Set a new training dataset.
- sampleRDD
training Samples
- batchSize
mini batch size
- featurePaddingParam
feature padding strategy, see com.intel.analytics.bigdl.dataset.PaddingParam for details.
- labelPaddingParam
label padding strategy, see com.intel.analytics.bigdl.dataset.PaddingParam for details.
- returns
the optimizer
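A sketch of swapping in a new training RDD with feature padding; `newSampleRDD` is an assumed handle to the replacement data, and the default PaddingParam constructor is used for illustration:

```scala
import com.intel.analytics.bigdl.dataset.PaddingParam

// Pad variable-length feature tensors within each mini-batch.
optimizer.setTrainData(
  newSampleRDD,
  batchSize = 32,
  featurePaddingParam = PaddingParam[Float]()
)
```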
-
def
setTrainData(sampleRDD: RDD[Sample[T]], batchSize: Int, miniBatchImpl: MiniBatch[T]): Optimizer.this.type
Set a new training dataset. Users can supply a customized implementation of the MiniBatch trait to define how data is organized and retrieved in a mini-batch.
- sampleRDD
training Samples
- batchSize
mini batch size
- miniBatchImpl
A user-defined MiniBatch implementation.
- returns
the Optimizer
-
def
setTrainSummary(trainSummary: TrainSummary): Optimizer.this.type
Enable train summary.
-
def
setValidation(trigger: Trigger, sampleRDD: RDD[Sample[T]], vMethods: Array[ValidationMethod[T]], batchSize: Int, miniBatch: MiniBatch[T]): Optimizer.this.type
Set validation evaluation
- trigger
how often to evaluate the validation set
- sampleRDD
validation dataset, as an RDD of Sample
- vMethods
a set of validation methods (ValidationMethod)
- batchSize
batch size
- miniBatch
construct a MiniBatch with the specified miniBatch type
-
def
setValidation(trigger: Trigger, sampleRDD: RDD[Sample[T]], vMethods: Array[ValidationMethod[T]], batchSize: Int): Optimizer.this.type
Set a validation evaluation
- trigger
how often to evaluate the validation set
- sampleRDD
validation dataset, as an RDD of Sample
- vMethods
a set of validation methods (ValidationMethod)
- batchSize
batch size
- returns
this optimizer
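A sketch of wiring up validation at the end of every epoch; `valRDD` is an assumed RDD of validation Samples:

```scala
import com.intel.analytics.bigdl.optim.{Top1Accuracy, Trigger}

optimizer.setValidation(
  trigger = Trigger.everyEpoch,
  sampleRDD = valRDD,
  vMethods = Array(new Top1Accuracy[Float]()),
  batchSize = 32
)
```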
-
def
setValidation(trigger: Trigger, sampleRDD: RDD[Sample[T]], vMethods: Array[ValidationMethod[T]], batchSize: Int, featurePaddingParam: PaddingParam[T], labelPaddingParam: PaddingParam[T]): Optimizer.this.type
Set a validation evaluation
- trigger
how often to evaluate the validation set
- sampleRDD
validation dataset, as an RDD of Sample
- vMethods
a set of validation methods (ValidationMethod)
- batchSize
batch size
- featurePaddingParam
feature padding strategy, see com.intel.analytics.bigdl.dataset.PaddingParam for details.
- labelPaddingParam
label padding strategy, see com.intel.analytics.bigdl.dataset.PaddingParam for details.
- returns
this optimizer
-
def
setValidation(trigger: Trigger, dataset: DataSet[MiniBatch[T]], vMethods: Array[ValidationMethod[T]]): Optimizer.this.type
Set a validation evaluation
- trigger
how often to evaluate the validation set
- dataset
validation dataset, as a DataSet of MiniBatch
- vMethods
a set of validation methods (ValidationMethod)
- returns
this optimizer
-
def
setValidationSummary(validationSummary: ValidationSummary): Optimizer.this.type
Enable validation summary.
-
var
state: Table
- Attributes
- protected
-
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
def
toString(): String
- Definition Classes
- AnyRef → Any
-
var
trainSummary: Option[TrainSummary]
- Attributes
- protected
-
var
validationDataSet: Option[DataSet[MiniBatch[T]]]
- Attributes
- protected
-
var
validationMethods: Option[Array[ValidationMethod[T]]]
- Attributes
- protected
-
var
validationSummary: Option[ValidationSummary]
- Attributes
- protected
-
var
validationTrigger: Option[Trigger]
- Attributes
- protected
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
-
var
warmupIterationNum: Int
- Attributes
- protected