GRU

abstract class GRU : RecurrentBase<GRU, DTensor>

To construct the GRU you want, see invoke in the companion object, or use the GRUEncoder or GRUDecoder helpers.
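For orientation, the standard GRU cell update can be sketched in plain Kotlin on scalars (illustrative weight names, not this library's API). The form below applies the reset gate to the state before the recurrent linear map, which is presumably the distinction behind the LinearAfterResetGru and LinearBeforeResetGRU inheritors listed at the bottom of this page:

```kotlin
import kotlin.math.exp
import kotlin.math.tanh

// Logistic sigmoid, used by the update (z) and reset (r) gates.
fun sigmoid(x: Float): Float = 1f / (1f + exp(-x))

// One GRU step on a scalar state and input, with illustrative scalar weights:
//   z  = sigmoid(wz*x + uz*h)    update gate
//   r  = sigmoid(wr*x + ur*h)    reset gate
//   h~ = tanh(wh*x + uh*(r*h))   candidate state (reset applied before the linear map)
//   h' = (1 - z)*h + z*h~        blended new state
fun gruStep(
    h: Float, x: Float,
    wz: Float, uz: Float,
    wr: Float, ur: Float,
    wh: Float, uh: Float
): Float {
    val z = sigmoid(wz * x + uz * h)
    val r = sigmoid(wr * x + ur * h)
    val hTilde = tanh(wh * x + uh * (r * h))
    return (1f - z) * h + z * hTilde
}
```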

Types

Companion
object Companion

Functions

accMap
open fun accMap(t: DTensor, sequenceAxis: Int, initialState: DTensor): Pair<DTensor, DTensor>
cell
abstract fun cell(state: Pair<DTensor, DTensor>, x: DTensor): Pair<DTensor, DTensor>
cpu
open override fun cpu(): GRU
doRecurrence
open fun doRecurrence(x: DTensor, initialState: DTensor = this.initialState): DTensor

Run the recurrence over the input x, starting from initialState.
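Conceptually, doRecurrence (and the fold/accMap helpers above) thread a state through the cell at each position along the sequence axis. A minimal sketch on a 1-D sequence, with illustrative names rather than the library's actual API:

```kotlin
// Illustrative recurrence: fold a cell over a sequence, threading the
// state and collecting each step's output (names are not the real API).
fun <S> runRecurrence(
    xs: List<Float>,
    initialState: S,
    cell: (S, Float) -> Pair<S, Float> // (state, x) -> (newState, output)
): Pair<S, List<Float>> {
    var state = initialState
    val outputs = mutableListOf<Float>()
    for (x in xs) {
        val (next, y) = cell(state, x)
        state = next
        outputs += y
    }
    return state to outputs
}
```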

equals
open operator override fun equals(other: Any?): Boolean
extractTangent
open override fun extractTangent(output: DTensor, extractor: (DTensor, DTensor) -> DTensor): TrainableComponent.Companion.Tangent
fold
open fun fold(t: DTensor, sequenceAxis: Int, initialState: DTensor): Pair<DTensor, DTensor>
getSingleInput
open fun getSingleInput(inputs: Array<out DTensor>): DTensor

Helper that checks the layer was called with exactly one input. Returns that input on success, otherwise throws an error.
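A hedged sketch of what such a single-input guard does, using plain Kotlin rather than the library's tensor types:

```kotlin
// Return the lone input, or fail with a descriptive message
// (hypothetical standalone version of a getSingleInput-style guard).
fun singleInput(inputs: Array<out Float>): Float {
    require(inputs.size == 1) { "Expected exactly 1 input, got ${inputs.size}" }
    return inputs[0]
}
```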

gpu
open override fun gpu(): GRU
hashCode
open override fun hashCode(): Int
invoke
open operator override fun invoke(vararg inputs: DTensor): DTensor
load
open override fun load(from: ByteBuffer): GRU
processForBatching
open override fun processForBatching(initialState: DTensor, initialOutput: DTensor, batchSize: Int): Pair<DTensor, DTensor>
store
open override fun store(into: ByteBuffer): ByteBuffer
to
open fun to(device: Device): OnDevice
trainingStep
open override fun trainingStep(optim: Optimizer<*>, tangent: Trainable.Tangent): GRU
withTrainables
abstract fun withTrainables(trainables: List<Trainable<*>>): GRU
wrap
open override fun wrap(wrapper: Wrapper): GRU

The wrap function should return the same static type as the class it is declared on.
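This contract is the recursive-generic (self-type) pattern: the base class takes its own subtype as a type parameter so each override can return its concrete type. A minimal sketch with hypothetical names:

```kotlin
// Recursively generic base class: each subclass declares itself as T,
// so its wrap override returns its own static type without casts.
abstract class Module<T : Module<T>>(val label: String) {
    abstract fun wrap(prefix: String): T
}

class Gate(label: String) : Module<Gate>(label) {
    override fun wrap(prefix: String): Gate = Gate("$prefix:$label")
}
```

Because Gate declares T = Gate, callers of Gate.wrap get a Gate back directly, just as GRU.wrap above is declared to return GRU.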

Properties

accType
open override val accType: RecurrentBase.AccType
batchAxis
open val batchAxis: Int
initialOutput
open override val initialOutput: FloatTensor
initialState
open override val initialState: DTensor
sequenceAxis
open val sequenceAxis: Int
trainables
abstract val trainables: List<Trainable<*>>

Inheritors

LinearAfterResetGru
LinearBeforeResetGRU