Package org.diffkt.model

Types

Activation
interface Activation : LayerSingleInput<Activation>
AdamOptimizer
class AdamOptimizer<T : Model<T>> : Optimizer<T>

An optimizer that implements the Adam optimization algorithm.

AffineTransform

An affine transform. Multiplies by one tensor and then adds another. Like a Dense layer, except that where a dense layer performs a matmul, this one performs an element-wise multiplication.
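
A minimal sketch of the computation, assuming element-wise times and plus operators on DTensor (the function and parameter names here are illustrative, not part of the API):

// Element-wise multiply by one tensor, then add another.
fun affine(x: DTensor, multiplicand: DTensor, addend: DTensor): DTensor =
    multiplicand * x + addend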

AvgPool2d
class AvgPool2d(poolHeight: Int, poolWidth: Int) : LayerSingleInput<AvgPool2d>

Average pool 2d

BatchNorm2d
class BatchNorm2d(numFeatures: Int, momentum: Float) : BatchNormTraining

A training version of batch normalization provided for compatibility with existing code.

BatchNormResult
class BatchNormResult(result: DTensor, n: Float, sum: DTensor, sumOfSquares: DTensor, mean: DTensor, variance: DTensor)
BatchNormTraining
open class BatchNormTraining : BatchNormTrainingBase<BatchNormTraining>

A trainable Batch Normalization transform, as described in https://arxiv.org/abs/1502.03167. When training is complete, use its inferenceMode property to get the computed affine transform. This version maintains an exponential moving average of the sum of the samples, the sum of the squared samples, and the sample count, which are used to estimate the population mean and variance.
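
A usage sketch, assuming inferenceMode yields a fixed layer that can be applied like any other (the feature count and momentum values are arbitrary):

val bn = BatchNorm2d(numFeatures = 64, momentum = 0.1f)
// ... training updates the layer's moving statistics ...
val frozen = bn.inferenceMode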

BatchNormTrainingBase
abstract class BatchNormTrainingBase<T : BatchNormTrainingBase<T>>(numFeatures: Int, momentum: Float, scaleShift: TrainableTensor) : TrainableLayerSingleInput<T>, LayerWithInferenceMode
BatchNormTrainingV1

A trainable Batch Normalization transform, as described in https://arxiv.org/abs/1502.03167. When training is complete, use its inferenceMode property to get the computed affine transform. This version is provided to imitate the behavior of V1, the previous implementation, in that it calculates a running mean and running variance rather than gathering the raw input to compute the mean and variance. It applies Bessel's correction (https://en.wikipedia.org/wiki/Bessel%27s_correction) to the sample variance to get an estimate of the population variance for each batch, and uses an exponential moving average of those values as an estimate of the population variance when inferenceMode is applied.

Conv2d
open class Conv2d(filterShape: Shape, horizontalStride: Int, verticalStride: Int, activation: Activation, paddingStyle: Convolve.PaddingStyle, trainableFilter: TrainableTensor) : TrainableLayerSingleInput<Conv2d>
Conv2dWithSamePadding
class Conv2dWithSamePadding(filterShape: Shape, horizontalStride: Int, verticalStride: Int, activation: Activation, random: Random) : Conv2d
Dense
class Dense : TrainableLayerSingleInput<Dense>

Densely-connected layer

Dropout
class Dropout(dropoutPercent: Float) : Layer<Dropout>, LayerWithInferenceMode
Embedding
class Embedding(trainableWeights: TrainableTensor) : TrainableLayer<Embedding>

A trainable embedding table with size vocabSize x embeddingSize
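
A construction sketch; randomTensor is a hypothetical helper for building the initial weights, and Shape's vararg constructor is assumed:

val vocabSize = 10000
val embeddingSize = 128
// randomTensor is an assumed helper, not part of this API.
val weights = TrainableTensor(randomTensor(Shape(vocabSize, embeddingSize)))
val embedding = Embedding(weights)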

EmbeddingBag
class EmbeddingBag(trainableWeights: TrainableTensor, reduction: EmbeddingBag.Companion.Reduction) : TrainableLayer<EmbeddingBag>

A trainable embedding table with size vocabSize x embeddingSize that applies the given reduction over each bag of embeddings.

FanIn
class FanIn : FanMode
FanMode
sealed class FanMode

Fan mode informs weight initializers of which dimension of the weight matrix to use: Fan In corresponds to the size of incoming data, and Fan Out corresponds to the size of outgoing data (see the sketch after FanOut below).

FanOut
class FanOut : FanMode
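
A hedged sketch of how a fan mode selects the dimension an initializer uses, assuming a weight matrix of shape [fanIn, fanOut] (the helper is illustrative; the else branch guards against other FanMode subclasses):

fun fanSize(weightShape: IntArray, mode: FanMode): Int = when (mode) {
    is FanIn -> weightShape[0]   // size of incoming data
    is FanOut -> weightShape[1]  // size of outgoing data
    else -> error("unrecognized fan mode")
}
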
FixedLearningRateOptimizer
class FixedLearningRateOptimizer<T : Model<T>>(alpha: DScalar) : Optimizer<T>

A simple optimizer that uses a fixed learning rate.

Flatten
object Flatten : LayerSingleInput<Flatten>

Flattens the input. Does not affect batch size.

GRU
abstract class GRU : RecurrentBase<GRU, DTensor>

To construct the GRU you desire, see invoke in the companion object, or use the GRUEncoder or GRUDecoder helpers.

Initializer
object Initializer
Layer
interface Layer<T : Layer<T>> : OnDevice
LayerSingleInput
interface LayerSingleInput<T : LayerSingleInput<T>> : Layer<T>
LayerWithInferenceMode
interface LayerWithInferenceMode
LinearAfterResetGru
class LinearAfterResetGru(numInputs: Int, numHidden: Int, initialHidden: DTensor?, accType: RecurrentBase.AccType, xh2u: Dense, xh2r: Dense, xh2n: Dense) : GRU

Linear-after-reset GRU

LinearBeforeResetGRU
class LinearBeforeResetGRU(numInputs: Int, numHidden: Int, initialHidden: DTensor?, accType: RecurrentBase.AccType, xh2u: Dense, xh2r: Dense, x2n: Dense, h2n: Dense) : GRU

Linear-before-reset GRU

MaxPool2d
class MaxPool2d(poolHeight: Int, poolWidth: Int) : LayerSingleInput<MaxPool2d>
Model
abstract class Model<T : Model<T>> : TrainableComponent<T>
Optimizer
abstract class Optimizer<T : Trainable<T>>
RecurrentBase
interface RecurrentBase<Recurrent : RecurrentBase<Recurrent, T>, T> : TrainableLayer<Recurrent>
ReluLayer
object ReluLayer : LayerSingleInput<ReluLayer>
RMSpropOptimizer
open class RMSpropOptimizer<T : Model<T>>(alpha: Float, beta: Float) : Optimizer<T>

An optimizer that implements the RMSprop optimization algorithm.

Sequential
class Sequential(layers: List<Layer<*>>) : TrainableLayerSingleInput<Sequential>
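
A composition sketch using only layers shown on this page (a real model would typically include trainable layers such as Dense or Conv2d):

val net = Sequential(listOf(
    MaxPool2d(poolHeight = 2, poolWidth = 2),
    Flatten,
    ReluLayer,
))
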
SGDOptimizer
class SGDOptimizer<T : TrainableComponent<T>>(initialLearningRate: Float, weightDecay: Float, momentum: Float) : Optimizer<T>

Stochastic gradient descent optimizer with optional weight decay regularization and momentum parameters.
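
A construction sketch; MyModel stands in for any concrete TrainableComponent subclass:

val sgd = SGDOptimizer<MyModel>(
    initialLearningRate = 0.01f,
    weightDecay = 1e-4f,
    momentum = 0.9f,
)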

Trainable
interface Trainable<T : Trainable<T>> : Differentiable<T>, OnDevice
TrainableComponent
interface TrainableComponent<T : TrainableComponent<T>> : Trainable<T>
TrainableLayer
interface TrainableLayer<T : TrainableLayer<T>> : TrainableComponent<T>, Layer<T>
TrainableLayerSingleInput
TrainableTensor
class TrainableTensor(tensor: DTensor) : Trainable<TrainableTensor>

Functions

avgPool
fun avgPool(x: DTensor, poolHeight: Int, poolWidth: Int): DTensor

Computes the average over each (poolHeight x poolWidth) pool in x, with a stride of (poolHeight, poolWidth). Requires that dim H on x be divisible by poolHeight and dim W on x be divisible by poolWidth.
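
A shape-level sketch, assuming NHWC layout: for an input x of shape [N, 4, 4, C], a 2 x 2 pool yields shape [N, 2, 2, C], each output value being the mean of one 2 x 2 window (x is a placeholder DTensor):

val pooled = avgPool(x, poolHeight = 2, poolWidth = 2)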

avgPoolGrad
fun avgPoolGrad(x: DTensor, poolHeight: Int, poolWidth: Int): DTensor
batchNorm
fun batchNorm(input: DTensor, scaleShift: DTensor): BatchNormResult

The batchNorm op used for training

batchNormTrainV1
fun batchNormTrainV1(input: DTensor, scaleShift: DTensor, runningMean: DTensor, runningVariance: DTensor, momentum: Float): Triple<DTensor, DTensor, DTensor>

The batchNorm op for training, V1 compatibility version

batchNormTrainV2
fun batchNormTrainV2(input: DTensor, scaleShift: DTensor, runningN: Float, runningSum: DTensor, runningSumOfSquares: DTensor, momentum: Float): Pair<DTensor, Triple<Float, DTensor, DTensor>>

The batchNorm op for training

freezeBatchNorm
fun freezeBatchNorm(scaleShift: DTensor, mean: DTensor, variance: DTensor): AffineTransform
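
A sketch tying the training op to inference, assuming BatchNormResult exposes its constructor parameters as properties (input and scaleShift are placeholder DTensors):

val result = batchNorm(input, scaleShift)
val frozen: AffineTransform = freezeBatchNorm(scaleShift, result.mean, result.variance)
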
into
infix fun <T : Layer<T>> DTensor.into(layer: Layer<T>): DTensor
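
A chaining sketch using layers from this page; into lets layer application read left to right (x is a placeholder DTensor):

val y = x into MaxPool2d(2, 2) into Flatten into ReluLayer
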
maxPool
fun maxPool(x: DTensor, poolHeight: Int, poolWidth: Int): DTensor

Returns the max over each (poolHeight x poolWidth) pool in x, with a stride of (poolHeight, poolWidth). Requires that dim H on x be divisible by poolHeight and dim W on x be divisible by poolWidth.

momentumUpdated
fun Float.momentumUpdated(new: Float, momentum: Float): Float

Returns the current (this) value updated by the new value (new), scaled by momentum.

fun DTensor.momentumUpdated(new: DTensor, momentum: Float): DTensor

Returns the current (this) tensor updated by the new tensor (new), scaled by momentum.
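
A hedged reading of the update rule: a conventional exponential moving average is assumed here, since the exact weighting of this versus new is not stated on this page:

// Assumed: updated = momentum * this + (1 - momentum) * new
val running = 0.0f
val updated = running.momentumUpdated(new = 10.0f, momentum = 0.9f)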

Properties

BATCHNORM_EPSILON
const val BATCHNORM_EPSILON: Float = 1.0E-5f