package optim
Type Members
- abstract class AbstractOptimizer extends AnyRef
- class AccuracyResult extends ValidationResult
  Represents an accuracy result, i.e. the ratio of the number of correct predictions to the total number of predictions.
- class Adadelta[T] extends OptimMethod[T]
  An Adadelta implementation for SGD: http://arxiv.org/abs/1212.5701
- class Adagrad[T] extends OptimMethod[T]
  An implementation of Adagrad. See the original paper: http://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf
- class Adam[T] extends OptimMethod[T]
  An implementation of Adam: http://arxiv.org/pdf/1412.6980.pdf
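  As an illustration of what such an OptimMethod computes, here is a minimal, hypothetical Python sketch of the Adam update rule from the paper above (not the Scala API; all names are invented for the example):

```python
# Hypothetical standalone sketch of the Adam update rule
# (Kingma & Ba, 2015); not the BigDL OptimMethod API.
def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# Minimize f(x) = x^2 (gradient 2x), starting from x = 1.0.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
```

  On this toy convex problem the iterate settles near the minimum at 0.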
- class Adamax[T] extends OptimMethod[T]
  An implementation of Adamax: http://arxiv.org/pdf/1412.6980.pdf
- class ArrayBufferAccumulator extends AccumulatorV2[ArrayBuffer[Double], ArrayBuffer[Double]]
- class ContiguousResult extends ValidationResult
  A generic result type whose data is contiguous float values.
- class DistriOptimizer[T] extends Optimizer[T, MiniBatch[T]]
  An optimizer that runs on a distributed cluster.
- class DistriOptimizerV2[T] extends Optimizer[T, MiniBatch[T]]
  An optimizer that runs on a distributed cluster.
- class DistriValidator[T] extends Validator[T, MiniBatch[T]]
  Validates a model on a distributed cluster.
- class Evaluator[T] extends Serializable
  A model evaluator.
- class Ftrl[T] extends OptimMethod[T]
  An implementation of FTRL: https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf. Supports L1 penalty, L2 penalty and shrinkage-type L2 penalty.
- class HitRatio[T] extends ValidationMethod[T]
  Hit Ratio (HR). HR intuitively measures whether the test item is present in the top-k list.
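  The measurement described above can be sketched in a few lines; this is an illustrative, hypothetical Python version, not the BigDL ValidationMethod API:

```python
# Illustrative, hypothetical sketch of Hit Ratio @ k; not the BigDL API.
def hit_ratio_at_k(scores, positive_idx, k):
    """Return 1.0 if the positive item is among the top-k scored items."""
    topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return 1.0 if positive_idx in topk else 0.0

# Item 2 ranks second by score, so it is inside the top-2 list.
hr = hit_ratio_at_k([0.1, 0.9, 0.4, 0.3], positive_idx=2, k=2)
```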
- class L1L2Regularizer[T] extends Regularizer[T]
  Applies both L1 and L2 regularization.
  - T: the numeric type, Float or Double
  - Annotations: @SerialVersionUID()
- case class L1Regularizer[T](l1: Double)(implicit evidence$3: ClassTag[T], ev: TensorNumeric[T]) extends L1L2Regularizer[T] with Product with Serializable
  Applies L1 regularization.
  - T: the numeric type, Float or Double
  - l1: the L1 regularization rate
  - Annotations: @SerialVersionUID()
- case class L2Regularizer[T](l2: Double)(implicit evidence$4: ClassTag[T], ev: TensorNumeric[T]) extends L1L2Regularizer[T] with Product with Serializable
  Applies L2 regularization.
  - T: the numeric type, Float or Double
  - l2: the L2 regularization rate
  - Annotations: @SerialVersionUID()
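  The effect of these regularizers can be illustrated by the gradient term they add to a weight's gradient. A hypothetical Python sketch, using one common convention for the penalty (not the BigDL Regularizer API):

```python
# Hypothetical sketch of the penalty gradient that L1/L2 regularizers
# add to a weight's gradient (one common convention); not the BigDL API.
def l1_l2_grad(w, l1=0.0, l2=0.0):
    """Gradient of l1*|w| + (l2/2)*w^2 for a single weight w."""
    sign = (w > 0) - (w < 0)    # subgradient of |w|
    return l1 * sign + l2 * w

g = l1_l2_grad(0.5, l1=0.1, l2=0.01)  # 0.1*sign(0.5) + 0.01*0.5
```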
- class LBFGS[T] extends OptimMethod[T]
  This implementation of L-BFGS relies on a user-provided line search function (state.lineSearch). If this function is not provided, a simple learning rate is used to produce fixed-size steps. Fixed-size steps are much less costly than line searches and can be useful for stochastic problems.
  The learning rate is used even when a line search is provided. This is also useful for large-scale stochastic problems, where opfunc is a noisy approximation of f(x). In that case, the learning rate allows a reduction of confidence in the step size.
- class LarsSGD[T] extends SGD[T]
  An implementation of LARS: https://arxiv.org/abs/1708.03888. Lars.createOptimForModule is the recommended way to create LARS optim methods for multiple layers.
- trait LineSearch[T] extends AnyRef
  A line search strategy.
- class LocalOptimizer[T] extends Optimizer[T, MiniBatch[T]]
  Optimizes a model on a single machine.
- class LocalPredictor[T] extends Serializable
  A predictor for local data.
- class LocalValidator[T] extends Validator[T, MiniBatch[T]]
  Validates a model on a single machine, using a given dataset and validation methods (such as Top1Accuracy) passed as arguments to its test method.
- class Loss[T] extends ValidationMethod[T]
  This evaluation method calculates the loss of the output with respect to the target.
- class LossResult extends ContiguousResult
  Uses the loss as a validation result.
- case class LossWithElapsedTime(index: Int, loss: Double, elapsed: Long) extends Product with Serializable
- class MAE[T] extends ValidationMethod[T]
  This evaluation method calculates the mean absolute error of the output with respect to the target.
- class MAPMultiIOUValidationResult extends ValidationResult
- class MAPType extends Serializable
- class MAPValidationResult extends ValidationResult
  The MAP validation result. The results are not calculated until result() or format() is called. Requires class labels beginning with 0.
- class MeanAveragePrecision[T] extends ValidationMethod[T]
  Calculates the Mean Average Precision (MAP). The algorithm follows the VOC Challenge rules after 2007. Requires class labels beginning with 0.
- class MeanAveragePrecisionObjectDetection[T] extends ValidationMethod[T]
  Mean Average Precision for object detection. Class labels begin with 0.
  The expected output from the last layer should be a Tensor[Float] or a Table. If the output is a tensor, it should be a [num_of_batch X (1 + maxDetection * 6)] matrix. The format of the matrix should be [<batch>, <batch>, ...], where each row vector is <batch> = [<size_of_batch>, <sample>, ...]. Each sample has the format <sample> = <label, score, bbox x4>. imgId is the batch number of the sample; imgId begins with 0, and multiple samples may share one imgId.
  If the output is a table, it is a table of tables: output(i) holds the results of the i-th image in the batch, where i = 1 to sizeof(batch). output(i) is a table that contains the same keys (fields) as the image info in the "target"; please refer to the documentation of RoiMiniBatch/RoiImageInfo. Besides these, the inner tables also contain the scores for the detections in the image.
  The "target" (ground truth) is a table with the same structure as "output", except that it does not have a "score" field.
- class Metrics extends Serializable
  Performance metrics for the training process. Besides collecting local metrics (e.g. throughput) on the driver node, it can also be used to collect distributed metrics (e.g. the time of certain steps across the workers). This is useful for performance analysis.
- class NDCG[T] extends ValidationMethod[T]
  Normalized Discounted Cumulative Gain (NDCG). NDCG accounts for the position of a hit by assigning higher scores to hits at top ranks.
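  The rank discounting described above can be sketched for the common single-relevant-item case; an illustrative, hypothetical Python version (not the BigDL API):

```python
import math

# Illustrative, hypothetical sketch of NDCG@k for a single relevant item
# (a common recommendation setting); not the BigDL API.
def ndcg_at_k(ranked_items, positive_item, k):
    """A hit at 1-based rank r contributes 1/log2(r + 1); with one
    relevant item the ideal DCG is 1, so NDCG equals DCG."""
    for r, item in enumerate(ranked_items[:k], start=1):
        if item == positive_item:
            return 1.0 / math.log2(r + 1)
    return 0.0

score = ndcg_at_k(["b", "a", "c"], positive_item="a", k=3)  # hit at rank 2
```

  A hit at rank 1 scores 1.0; the same hit at rank 2 only scores 1/log2(3).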
- trait OptimMethod[T] extends Serializable
  Similar to a Torch optim method; used to update the parameters.
- abstract class Optimizer[T, D] extends AnyRef
  Optimizer is an abstract class used to train a model automatically with certain optimization algorithms.
  - T: the numeric type, which can be Float or Double
  - D: the type of elements in the DataSet, such as MiniBatch
- trait OptimizerLogger extends AnyRef
- class PRAUCResult extends ValidationResult
- class ParallelAdam[T] extends OptimMethod[T]
  A multi-threaded implementation of Adam: http://arxiv.org/pdf/1412.6980.pdf
- class ParallelOptimizer[T] extends Optimizer[T, MiniBatch[T]]
  An optimizer that runs on a distributed cluster.
- case class ParamSegments[T](start: Int, length: Int, method: OptimMethod[T]) extends Product with Serializable
- class PrecisionRecallAUC[T] extends ValidationMethod[T]
  Precision-Recall Area Under Curve computes the precision-recall pairs and the area under the resulting curve.
  Note: it gathers all output probabilities and targets to the driver, and recomputes precision, recall and the AUC on every call to result().
  - T: class tag for the tensor numeric
- class PredictionService[T] extends AnyRef
  A thread-safe prediction service for concurrent calls. Concurrency is kept no greater than numThreads by a BlockingQueue, which contains the available model instances.
  numThreads model instances sharing weights/bias are put into the BlockingQueue during initialization.
  When the predict method is called, the service tries to take an instance from the BlockingQueue; if all instances are busy serving, the predict request blocks until an instance is released.
  If an exception is caught during prediction, a scalar Tensor[String] containing the thrown message is returned.
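  The blocking-queue pattern described above can be sketched in a few lines; a hypothetical Python stand-in (not the BigDL PredictionService API):

```python
import queue
import threading

# Hypothetical Python stand-in for the blocking-queue pattern above:
# a fixed pool of model instances bounds concurrency, and callers block
# until an instance is free. Not the BigDL PredictionService API.
class PredictionPool:
    def __init__(self, models):
        self._q = queue.Queue()
        for model in models:
            self._q.put(model)

    def predict(self, x):
        model = self._q.get()      # blocks while all instances are busy
        try:
            return model(x)
        finally:
            self._q.put(model)     # release the instance back to the pool

pool = PredictionPool([lambda x: x + 1, lambda x: x + 1])  # "numThreads" = 2
results = []
threads = [threading.Thread(target=lambda: results.append(pool.predict(41)))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

  With two instances in the pool, at most two predictions run at once; the other callers simply wait rather than fail.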
- class Predictor[T] extends Serializable
  A predictor for distributed data.
  NOTE: predictClass, predict and predictImage call the corresponding methods of the object Predictor. Why? Because these methods use the ClassTag T. If this work were done in the methods of the class Predictor, then during mapPartition Spark would find all used values and serialize them. Since T is a constructor argument, serialization would package the whole Predictor class, which contains the model, sending a duplicate model to the workers. So these methods are moved to the object Predictor.
- class RMSprop[T] extends OptimMethod[T]
  An implementation of RMSprop.
- trait Regularizer[T] extends Serializable
  A trait for all regularizers; any regularizer needs to extend it.
  - T: the numeric type, Float or Double
- class SGD[T] extends OptimMethod[T]
  A plain implementation of SGD.
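  A plain SGD step is simple enough to sketch directly; a hypothetical Python illustration with an optional momentum term (not the BigDL SGD class):

```python
# Hypothetical sketch of a plain SGD step with an optional momentum
# term; not the BigDL SGD class.
def sgd_step(param, grad, velocity=0.0, lr=0.1, momentum=0.0):
    """One SGD update; returns the new parameter and velocity."""
    velocity = momentum * velocity - lr * grad
    return param + velocity, velocity

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x, v = 0.0, 0.0
for _ in range(100):
    x, v = sgd_step(x, 2 * (x - 3), v, lr=0.1)
```

  With momentum = 0 the update reduces to param - lr * grad, and the iterate converges to the minimum at 3.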
- class Top1Accuracy[T] extends ValidationMethod[T]
  Calculates the percentage of samples for which the index of the output's maximum probability equals the target.
- class Top5Accuracy[T] extends ValidationMethod[T]
  Calculates the percentage of samples for which the target is among the output's top-5 probability indexes.
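  Top1Accuracy and Top5Accuracy both reduce to a top-k membership test; an illustrative, hypothetical Python sketch (not the BigDL ValidationMethod API):

```python
# Illustrative, hypothetical sketch of top-k accuracy over a batch;
# Top1Accuracy corresponds to k=1 and Top5Accuracy to k=5.
def topk_accuracy(outputs, targets, k=1):
    """outputs: per-sample score vectors; targets: true class indexes."""
    hits = 0
    for scores, target in zip(outputs, targets):
        topk = sorted(range(len(scores)), key=lambda i: scores[i],
                      reverse=True)[:k]
        hits += target in topk
    return hits / len(targets)

outputs = [[0.1, 0.7, 0.2], [0.5, 0.2, 0.3]]
targets = [1, 2]
top1 = topk_accuracy(outputs, targets, k=1)  # second sample misses
top2 = topk_accuracy(outputs, targets, k=2)  # both samples hit
```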
- class TrainingContext[T] extends Serializable
- class TreeNNAccuracy[T] extends ValidationMethod[T]
  A metric to measure the accuracy of a Tree Neural Network/Recursive Neural Network.
- trait Trigger extends Serializable
  A trigger specifies one or several time points during training; when a time point is reached, a corresponding action is taken.
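  The idea can be sketched as a predicate over the training state; a hypothetical Python illustration (the real Scala instances, such as Trigger.everyEpoch or Trigger.maxIteration, come from this package's Trigger companion object):

```python
# Hypothetical Python illustration of the trigger idea: a predicate over
# the training state that fires at certain time points. Not the Scala API.
def several_iterations(n):
    return lambda state: state["iteration"] % n == 0

def max_iteration(n):
    return lambda state: state["iteration"] >= n

checkpoint = several_iterations(10)
stop = max_iteration(100)
fired = checkpoint({"iteration": 30})  # 30 is a multiple of 10
done = stop({"iteration": 50})         # cap of 100 not yet reached
```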
- trait ValidationMethod[T] extends Serializable
  A method defined to evaluate the model. This trait can be extended by user-defined methods, such as Top1Accuracy.
- trait ValidationResult extends Serializable
  A result that calculates the numeric value of a validation method. User-defined validation results must override the + operation and the result() method. It is executed over the samples in each batch.
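  The contract above — per-batch results merged with + and finalized by result() — can be illustrated with a hypothetical Python stand-in (not the Scala trait):

```python
# Hypothetical Python stand-in for the ValidationResult contract:
# per-batch results merge with + and are finalized by result().
class AccuracyLike:
    def __init__(self, correct, total):
        self.correct, self.total = correct, total

    def __add__(self, other):
        # Merge two per-batch results into one accumulated result.
        return AccuracyLike(self.correct + other.correct,
                            self.total + other.total)

    def result(self):
        """Return the final metric value and the sample count."""
        return self.correct / self.total, self.total

batch1 = AccuracyLike(8, 10)   # 8 of 10 correct in batch 1
batch2 = AccuracyLike(5, 10)   # 5 of 10 correct in batch 2
value, count = (batch1 + batch2).result()
```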
- abstract class Validator[T, D] extends AnyRef
  Validator is an abstract class used to test a model automatically with certain validation methods (such as Top1Accuracy) passed as arguments to its test method.
  - T: the numeric type, which can be Float or Double
  - D: the type of elements in the DataSet, such as MiniBatch
Value Members
- object DistriOptimizer extends AbstractOptimizer
- object DistriOptimizerV2 extends AbstractOptimizer
- object DistriValidator
- object EvaluateMethods
- object Evaluator extends Serializable
- object L1L2Regularizer extends Serializable
- object LarsSGD extends Serializable
- object LocalOptimizer
- object LocalPredictor extends Serializable
- object LocalValidator
- object MAPCOCO extends MAPType
- object MAPPascalVoc2007 extends MAPType
- object MAPPascalVoc2010 extends MAPType
- object MAPUtil
- object MeanAveragePrecision extends Serializable
- object OptimMethod extends Serializable
- object Optimizer
- object ParallelAdam extends Serializable
- object ParallelOptimizer extends AbstractOptimizer
- object PredictionService
- object Predictor extends Serializable
- object SGD extends Serializable
- object StateEntry
- object Trigger extends Serializable