SGDOptimizer

class SGDOptimizer<T : TrainableComponent<T>>(initialLearningRate: Float, weightDecay: Float, momentum: Float) : Optimizer<T>

Stochastic gradient descent optimizer with optional weight decay regularization and momentum parameters.

Default arguments are based on fast.ai's Learner defaults (https://docs.fast.ai/learner#Learner).

About using default values:

  • SGDOptimizer() is "vanilla SGD", with a default learning rate (alpha) of .001f and no weight decay or momentum.

  • SGDOptimizer(weightDecay = true, momentum = true) enables weight decay and momentum with their default values: weightDecay = .01f, momentum = .9f. The learning rate remains .001f by default. Both forms are sketched below.
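A minimal usage sketch of the constructors listed below, assuming the surrounding DiffKt-style setup; MyModel is a hypothetical component type implementing TrainableComponent<MyModel>, and imports are omitted:

// Vanilla SGD: learning rate defaults to .001f, weightDecay = 0.0f, momentum = 0.0f.
val vanilla = SGDOptimizer<MyModel>()

// Enable the fast.ai-style defaults: weightDecay = .01f, momentum = .9f; learning rate stays .001f.
val withRegularization = SGDOptimizer<MyModel>(weightDecay = true, momentum = true)

// Or set the hyperparameters explicitly via the Float-based constructor.
val custom = SGDOptimizer<MyModel>(
    initialLearningRate = 0.01f,
    weightDecay = 0.001f,
    momentum = 0.9f,
)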

Constructors

SGDOptimizer
fun SGDOptimizer(initialLearningRate: Float = defaultLearningRate, weightDecay: Boolean, momentum: Boolean)
SGDOptimizer
fun SGDOptimizer(initialLearningRate: Float = defaultLearningRate, weightDecay: Float = 0.0f, momentum: Float = 0.0f)

Types

Companion
object Companion

Functions

tensorTrainingStep
open override fun tensorTrainingStep(tensor: DTensor, gradient: DTensor): DTensor

Train an element of the model. For the model itself, use train.
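As a rough sketch only (not the library's implementation), the per-tensor update that an SGD step with weight decay and momentum classically performs looks like the following; plain FloatArrays stand in for DTensor, and the per-tensor velocity buffer is an assumed detail:

// Illustrative only: classic SGD update with weight decay and momentum.
fun sgdStep(
    weights: FloatArray,
    gradient: FloatArray,
    velocity: FloatArray,          // per-tensor momentum buffer (assumption)
    learningRate: Float = 0.001f,
    weightDecay: Float = 0.01f,
    momentum: Float = 0.9f,
): FloatArray = FloatArray(weights.size) { i ->
    // Weight decay adds the L2 penalty term weightDecay * w to the gradient.
    val g = gradient[i] + weightDecay * weights[i]
    // Momentum keeps an exponentially decaying sum of past gradients.
    velocity[i] = momentum * velocity[i] + g
    // Step against the accumulated gradient, scaled by the learning rate.
    weights[i] - learningRate * velocity[i]
}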

train
fun train(component: T, tangent: Trainable.Tangent): T

Train a trainable using the given tangent.
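A hedged sketch of how train might be driven from a training loop; model, batches, and computeTangent are placeholders, and how the Trainable.Tangent is actually obtained (e.g. from a derivative of the loss) is outside the scope of this page:

// Illustrative only; everything except SGDOptimizer and train is a placeholder.
var model: MyModel = initialModel
val optimizer = SGDOptimizer<MyModel>(weightDecay = true, momentum = true)

for (batch in batches) {
    val tangent: Trainable.Tangent = computeTangent(model, batch) // assumed helper
    // train returns an updated copy of the component.
    model = optimizer.train(model, tangent)
}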

Properties

initialLearningRate
val initialLearningRate: Float
momentum
val momentum: Float = 0.0f
weightDecay
val weightDecay: Float = 0.0f