object LarsSGD extends Serializable
Inheritance: LarsSGD → Serializable → Serializable → AnyRef → Any
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def clone(): AnyRef
  - Attributes: protected[java.lang]
  - Definition Classes: AnyRef
  - Annotations: @native() @throws( ... )
- def containsLarsSGD[T](optimMethods: Map[String, OptimMethod[T]]): Option[Double]
  Check whether optimMethods contains a LarsSGD. If so, return the weight decay of the first LarsSGD found; otherwise return None. A usage sketch follows this entry.
  - returns: the weight decay of the first LarsSGD found in optimMethods, or None if there is none
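A minimal usage sketch, assuming the object lives at com.intel.analytics.bigdl.optim.LarsSGD and that the usual BigDL nn/numeric imports apply (adjust for your version); the map is built with createOptimForModule, documented further down this page:

```scala
import com.intel.analytics.bigdl.nn.{Linear, Sequential}
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.optim.LarsSGD

// Build a per-submodule optim-method map (see createOptimForModule below).
val model = Sequential[Float]().add(Linear[Float](10, 2).setName("fc"))
val optimMethods = LarsSGD.createOptimForModule(model)

// The entries were created by createOptimForModule, so a LARS method should be
// present and this is expected to be Some(0.005), the default weightDecay; for
// a map without any LarsSGD it would be None.
val weightDecay: Option[Double] = LarsSGD.containsLarsSGD(optimMethods)
```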
- def createOptimForModule[T](model: Module[T], trust: Double = 1.0, learningRate: Double = 1e-3, learningRateDecay: Double = 0.01, weightDecay: Double = 0.005, momentum: Double = 0.5, learningRateSchedule: LearningRateSchedule = Default())(implicit arg0: ClassTag[T], ev: TensorNumeric[T]): Map[String, OptimMethod[T]]
  Create a Map(String, OptimMethod) for a container. For each submodule in the container, it puts a (module.getName(), new Lars[T]) pair into the returned map. The resulting map can be used with setOptimMethods. Note that every Lars optim method shares the same LearningRateSchedule. See the sketch after the parameter list.
  - model: the container to build the LARS optim methods for
  - trust: the trust on the learning rate scale; should be between 0 and 1
  - learningRate: learning rate
  - learningRateDecay: learning rate decay
  - weightDecay: weight decay
  - momentum: momentum
  - learningRateSchedule: the learning rate scheduler
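A minimal sketch of building one LARS optim method per submodule, assuming BigDL's usual package layout (com.intel.analytics.bigdl.nn, .numeric, .optim) and illustrative layer names; adjust the imports for your version:

```scala
import com.intel.analytics.bigdl.nn.{Linear, ReLU, Sequential}
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.optim.LarsSGD

// A small container: createOptimForModule emits one LARS optim method per
// submodule, keyed by the submodule's name.
val model = Sequential[Float]()
  .add(Linear[Float](784, 128).setName("fc1"))
  .add(ReLU[Float]())
  .add(Linear[Float](128, 10).setName("fc2"))

// All entries share the same (default) LearningRateSchedule.
val optimMethods = LarsSGD.createOptimForModule(
  model,
  trust = 0.9,
  learningRate = 0.1,
  weightDecay = 5e-4)

optimMethods.keys.foreach(println) // one key per submodule name

// Intended use: hand the map to an Optimizer, e.g.
//   optimizer.setOptimMethods(optimMethods)
```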
- def createOptimLRSchedulerForModule[A <: Activity, B <: Activity, T](model: Container[A, B, T], lrScheGenerator: (AbstractModule[Activity, Activity, T]) ⇒ (LearningRateSchedule, Boolean), trust: Double = 1.0, learningRate: Double = 1e-3, learningRateDecay: Double = 0.01, weightDecay: Double = 0.005, momentum: Double = 0.5)(implicit arg0: ClassTag[T], ev: TensorNumeric[T]): Map[String, OptimMethod[T]]
  Create a Map(String, OptimMethod) for a container. For each submodule in the container, it puts a (module.getName(), new Lars[T]) pair into the returned map. The resulting map can be used with setOptimMethods. Unlike createOptimForModule, this function lets you set a different LearningRateSchedule for each submodule. See the sketch after the parameter list.
  - model: the container to build the LARS optim methods for
  - lrScheGenerator: the learning rate schedule generator for each submodule. The generator accepts the submodule that the schedule is linked to and returns a tuple (learningRateSchedule, isOwner), where isOwner indicates whether the corresponding LARS optim method is responsible for showing the learning rate in getHyperParameter (multiple LARS optim methods may share one learning rate scheduler).
  - trust: the trust on the learning rate scale; should be between 0 and 1
  - learningRate: learning rate
  - learningRateDecay: learning rate decay
  - weightDecay: weight decay
  - momentum: momentum
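A minimal sketch of assigning per-submodule schedules, assuming the LearningRateSchedule implementations (e.g. Poly) live in the SGD companion object as in typical BigDL versions; the "fc1"/"fc2" names are illustrative:

```scala
import com.intel.analytics.bigdl.nn.{Linear, Sequential}
import com.intel.analytics.bigdl.nn.abstractnn.{AbstractModule, Activity}
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.optim.{LarsSGD, SGD}

val model = Sequential[Float]()
  .add(Linear[Float](784, 128).setName("fc1"))
  .add(Linear[Float](128, 10).setName("fc2"))

// Each submodule gets its own Poly schedule; only the schedule attached to
// "fc1" is the owner, i.e. the one whose learning rate shows up in
// getHyperParameter.
val lrScheGenerator = (m: AbstractModule[Activity, Activity, Float]) =>
  (SGD.Poly(0.5, 1000), m.getName() == "fc1")

val optimMethods = LarsSGD.createOptimLRSchedulerForModule[Activity, Activity, Float](
  model,
  lrScheGenerator,
  trust = 0.9,
  learningRate = 0.1)
```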
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def finalize(): Unit
  - Attributes: protected[java.lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: AnyRef → Any
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @throws( ... )