SGDOptimizer
class SGDOptimizer<T : TrainableComponent<T>>(initialLearningRate: Float, weightDecay: Float, momentum: Float) : Optimizer<T>
Stochastic gradient descent optimizer with optional weight decay regularization and momentum parameters.
Default arguments are based on fast.ai's Learner defaults (https://docs.fast.ai/learner#Learner).
About the default values: SGD() is "vanilla SGD", with a default alpha of .001f and no weight decay or momentum. SGD(weightDecay = true, momentum = true) enables weight decay and momentum with the default values weightDecay = .01f and momentum = .9f; alpha remains .001f by default.
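A minimal standalone sketch of how the Boolean convenience constructor maps to concrete hyperparameter values, based on the defaults stated above. The names `SgdHyperparams` and `sgdDefaults` are illustrative, not part of the library's API:

```kotlin
// Illustrative sketch: how SGD(weightDecay = true, momentum = true) would
// resolve to concrete hyperparameters, per the documented defaults.
// SgdHyperparams and sgdDefaults are hypothetical names, not library API.
data class SgdHyperparams(val learningRate: Float, val weightDecay: Float, val momentum: Float)

fun sgdDefaults(weightDecay: Boolean = false, momentum: Boolean = false): SgdHyperparams =
    SgdHyperparams(
        learningRate = 0.001f,                          // alpha stays .001f in both forms
        weightDecay = if (weightDecay) 0.01f else 0.0f, // .01f when enabled
        momentum = if (momentum) 0.9f else 0.0f,        // .9f when enabled
    )
```

With no arguments this yields the "vanilla SGD" configuration; passing both flags yields the fast.ai-style defaults.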
Constructors
SGDOptimizer
fun SGDOptimizer(initialLearningRate: Float = defaultLearningRate, weightDecay: Boolean, momentum: Boolean)
SGDOptimizer
fun SGDOptimizer(initialLearningRate: Float = defaultLearningRate, weightDecay: Float = 0.0f, momentum: Float = 0.0f)
Functions
tensorTrainingStep
open override fun tensorTrainingStep(tensor: DTensor, gradient: DTensor): DTensor
Train an element of the model. For the model itself, use train.
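To make the semantics concrete, here is a self-contained sketch of the standard SGD update with weight decay and momentum on plain FloatArrays. This is the textbook update rule the optimizer's parameters describe, not the library's actual tensorTrainingStep implementation; the class and method names are hypothetical:

```kotlin
// Sketch of one SGD step on plain FloatArrays (hypothetical SgdStep class).
// Standard rule: v <- momentum * v + (g + weightDecay * w); w <- w - lr * v.
class SgdStep(val lr: Float, val weightDecay: Float, val momentum: Float) {
    private var velocity: FloatArray? = null

    fun step(params: FloatArray, grads: FloatArray): FloatArray {
        // Weight decay adds a weightDecay * w term to the gradient.
        val g = FloatArray(params.size) { i -> grads[i] + weightDecay * params[i] }
        // Momentum accumulates a decaying sum of past gradients.
        val v = velocity ?: FloatArray(params.size)
        for (i in v.indices) v[i] = momentum * v[i] + g[i]
        velocity = v
        // Descend along the velocity.
        return FloatArray(params.size) { i -> params[i] - lr * v[i] }
    }
}
```

With weightDecay = 0f and momentum = 0f this reduces to the vanilla update w - lr * g, matching the SGD() defaults described above.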
train
Train a trainable using the given tangent.