class LBFGS[T] extends OptimMethod[T]

This implementation of L-BFGS relies on a user-provided line search function (the lineSearch argument). If no line search is provided, a simple learningRate is used to produce fixed-size steps. Fixed-size steps are much less costly than line searches and can be useful for stochastic problems.

The learning rate is used even when a line search is provided. This is also useful for large-scale stochastic problems, where opfunc is a noisy approximation of f(x). In that case, the learning rate allows a reduction of confidence in the step size.
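The fixed-step fallback can be sketched in a few lines of plain Scala (no BigDL types; the quadratic objective, step count, and learningRate value below are illustrative assumptions, not part of the API):

```scala
// Sketch of the fixed-step fallback: with no line search, every
// iteration moves against the gradient by a constant learningRate.
// f(x) = (x - 3)^2 is an illustrative 1-D objective; its minimizer is 3.
val grad = (x: Double) => 2.0 * (x - 3.0) // df/dx
val learningRate = 0.1                    // constant step size
var x = 0.0
for (_ <- 1 to 100) x -= learningRate * grad(x)
// x is now very close to the minimizer 3.0
```

A smaller learningRate damps every step, which is what "reduction of confidence in the step size" amounts to when the gradients are noisy.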

Linear Supertypes
OptimMethod[T], Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new LBFGS(maxIter: Int = 20, maxEval: Double = Double.MaxValue, tolFun: Double = 1e-5, tolX: Double = 1e-9, nCorrection: Int = 100, learningRate: Double = 1.0, verbose: Boolean = false, lineSearch: Option[LineSearch[T]] = None, lineSearchOptions: Option[Table] = None)(implicit arg0: ClassTag[T], ev: TensorNumeric[T])

    maxIter

    Maximum number of iterations allowed

    maxEval

    Maximum number of function evaluations

    tolFun

    Termination tolerance on the first-order optimality

    tolX

    Termination tolerance on progress in terms of function/parameter changes

    nCorrection

    Number of corrections stored for the L-BFGS approximation of the inverse Hessian (history size)

    learningRate

    Step size used to produce fixed-size steps when no line search is provided

    verbose

    Whether to print verbose progress information

    lineSearch

    A line search function

    lineSearchOptions

    If no line search provided, then a fixed step size is used
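As a hedged usage sketch, constructing the optimizer with explicit values for a few of the parameters above might look like this (the import paths are assumptions based on BigDL's package layout and are not part of this page; adjust them to your build):

```scala
// Hedged sketch: import paths are assumptions, not documented here.
import com.intel.analytics.bigdl.optim.LBFGS
import com.intel.analytics.bigdl.numeric.NumericFloat

// No lineSearch is given, so fixed steps of size learningRate are used.
val optim = new LBFGS[Float](
  maxIter = 20,       // cap on iterations per optimize() call
  tolFun = 1e-5,      // first-order optimality tolerance
  tolX = 1e-9,        // tolerance on function/parameter progress
  learningRate = 1.0  // fixed step size
)
```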

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clearHistory(): Unit

    Clear the history information in the OptimMethod state

    Definition Classes
    LBFGS → OptimMethod
  6. def clone(): OptimMethod[T]

    clone OptimMethod

    Definition Classes
    OptimMethod → AnyRef
  7. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  10. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  11. def getHyperParameter(): String

    Get hyper parameter from config table.

    Definition Classes
    OptimMethod
  12. def getLearningRate(): Double

    get learning rate

    Definition Classes
    LBFGS → OptimMethod
  13. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  14. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  15. var learningRate: Double
  16. var lineSearch: Option[LineSearch[T]]
  17. var lineSearchOptions: Option[Table]
  18. def loadFromTable(config: Table): LBFGS.this.type

    load optimMethod parameters from Table

    Definition Classes
    LBFGS → OptimMethod
  19. var maxEval: Double
  20. var maxIter: Int
  21. var nCorrection: Int
  22. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  23. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  24. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  25. def optimize(opfunc: (Tensor[T]) ⇒ (T, Tensor[T]), x: Tensor[T]): (Tensor[T], Array[T])

    Optimize the model parameter

    opfunc

    a function that takes a single input (X), the point of evaluation, and returns f(X) and df/dX

    x

    the initial point

    returns

    the new x vector and the list of function values, both evaluated before the update: x is the new vector, at the optimal point; f is a table of all function values, where f[1] is the value of the function before any optimization and f[#f] is the final, fully optimized value at x*

    Definition Classes
    LBFGS → OptimMethod
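The opfunc contract and the shape of the return value can be sketched in plain Scala, with Array[Double] standing in for Tensor[T]; the quadratic objective and the fixed-step loop below are illustrative assumptions, not the library's L-BFGS implementation:

```scala
// opfunc takes a point x and returns (f(x), df/dx).
// Illustrative objective: f(x) = sum_i (x_i - 1)^2, minimized at all ones.
def opfunc(x: Array[Double]): (Double, Array[Double]) = {
  val f = x.map(v => (v - 1.0) * (v - 1.0)).sum
  val g = x.map(v => 2.0 * (v - 1.0))
  (f, g)
}

// Stand-in for optimize(): record f before each update, so
// fs.head is the value before any optimization and fs.last the final one.
var x = Array(0.0, 4.0)
val fs = scala.collection.mutable.ArrayBuffer[Double]()
for (_ <- 1 to 50) {
  val (f, g) = opfunc(x)
  fs += f
  x = x.zip(g).map { case (xi, gi) => xi - 0.1 * gi }
}
```

The actual method returns the analogous pair as (Tensor[T], Array[T]).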
  26. def save(path: String, overWrite: Boolean = false): LBFGS.this.type

    save OptimMethod

    path

    the path to save the OptimMethod to

    overWrite

    whether to overwrite an existing file at the path

    Definition Classes
    OptimMethod
  27. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  28. def toString(): String
    Definition Classes
    AnyRef → Any
  29. var tolFun: Double
  30. var tolX: Double
  31. def updateHyperParameter(): Unit

    Update hyper parameter. The hyper parameter is updated inside optimize(), but in DistriOptimizer, optimize() is only called on the executor side, leaving the driver's hyper parameter unchanged. This method updates the hyper parameter on the driver side.

    Definition Classes
    OptimMethod
  32. def verbose(msg: String): Unit
  33. var verbose: Boolean
  34. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  35. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  36. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )

Deprecated Value Members

  1. def clearHistory(state: Table): Table

    Clear the history information in the state

    Definition Classes
    OptimMethod
    Annotations
    @deprecated
    Deprecated

    (Since version 0.2.0) Please use clearHistory() instead

  2. def getHyperParameter(config: Table): String

    Get hyper parameter from config table.

    config

    a table containing the hyper parameter.

    Definition Classes
    OptimMethod
    Annotations
    @deprecated
    Deprecated

    (Since version 0.2.0) Please use getHyperParameter() instead

  3. def optimize(feval: (Tensor[T]) ⇒ (T, Tensor[T]), parameter: Tensor[T], config: Table, state: Table = null): (Tensor[T], Array[T])

    Optimize the model parameter

    feval

    a function that takes a single input (X), the point of evaluation, and returns f(X) and df/dX

    parameter

    the initial point

    config

    a table with configuration parameters for the optimizer

    state

    a table describing the state of the optimizer; after each call the state is modified

    returns

    the new x vector and the list of function values, evaluated before the update

    Definition Classes
    OptimMethod
    Annotations
    @deprecated
    Deprecated

    (Since version 0.2.0) Please initialize OptimMethod with parameters when creating it instead of importing table

  4. def updateHyperParameter(config: Table, state: Table): Unit

    Update hyper parameter. The hyper parameter is updated inside optimize(), but in DistriOptimizer, optimize() is only called on the executor side, leaving the driver's hyper parameter unchanged. This method updates the hyper parameter on the driver side.

    config

    config table.

    state

    state table.

    Definition Classes
    OptimMethod
    Annotations
    @deprecated
    Deprecated

    (Since version 0.2.0) Please use updateHyperParameter() instead

Inherited from OptimMethod[T]

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any
