RNNCell_cdopt#
CLASS cdopt.nn.RNNCell_cdopt(input_size, hidden_size, bias=True, nonlinearity='tanh', device=None, dtype=None, manifold_class=euclidean_torch, penalty_param=0, manifold_args={})
An Elman RNN cell with tanh or ReLU non-linearity, where the weight for the hidden states \(W_{hh}\) is constrained over the manifold defined by manifold_class. If nonlinearity is 'relu', then ReLU is used in place of tanh.
Parameters#
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
manifold_class – The manifold class for the weight matrix (see the sketch after this list). Default: cdopt.manifold_torch.euclidean_torch
penalty_param – The penalty parameter for the quadratic penalty terms in the constraint dissolving function. Default: 0
manifold_args – Additional keyword arguments that help define the manifold constraints. Default: {}
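For example, the hidden-to-hidden weight can be constrained to a non-trivial manifold by passing a different manifold_class. The following is a minimal sketch, assuming that cdopt.manifold_torch.stiefel_torch is available and that penalty_param is a nonnegative float; the specific manifold and penalty value are illustrative choices, not defaults.
import cdopt
from cdopt.manifold_torch import stiefel_torch

# Constrain the hidden-to-hidden weight W_hh to the Stiefel manifold
# (assumes stiefel_torch is exposed by cdopt.manifold_torch).
rnn_cell = cdopt.nn.RNNCell_cdopt(10, 20, nonlinearity='tanh',
                                  manifold_class=stiefel_torch,
                                  penalty_param=0.05)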
Shapes#
input: \((N, H_{in})\) or \((H_{in})\) tensor containing input features, where \(H_{in} = \mathrm{input\_size}\).
hidden: \((N, H_{out})\) or \((H_{out})\) tensor containing the initial hidden state where \(H_{out} = \mathrm{hidden\_size}\). Defaults to zero if not provided.
output: \((N, H_{out})\) or \((H_{out})\) tensor containing the next hidden state.
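As a quick check of these shapes, a minimal sketch with a batch of size 3 and the default manifold:
import torch
import cdopt

cell = cdopt.nn.RNNCell_cdopt(10, 20)
x = torch.randn(3, 10)    # (N, H_in) with H_in = input_size = 10
h0 = torch.randn(3, 20)   # (N, H_out) with H_out = hidden_size = 20
h1 = cell(x, h0)          # next hidden state, shape (3, 20)
print(h1.shape)           # torch.Size([3, 20])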
Attributes#
manifold (cdopt manifold class) – the manifold that defines the constraints. The shape of the variables in manifold is set as var_shape.
weight_ih (torch.Tensor) – the learnable input-hidden weights, of shape (hidden_size, input_size)
weight_hh (torch.Tensor) – the learnable hidden-hidden weights, of shape (hidden_size, hidden_size)
bias_ih – the learnable input-hidden bias, of shape (hidden_size)
bias_hh – the learnable hidden-hidden bias, of shape (hidden_size)
quad_penalty (callable) – the function that returns the quadratic penalty terms of the weights. Its return value equals \(||\mathrm{manifold.C}(\mathrm{weight})||^2\).
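Since quad_penalty is callable, its value can be added to a task loss to penalize constraint violation during training. The following is only a sketch, assuming a mean-squared-error objective and that quad_penalty() returns a scalar tensor that participates in autograd; how the penalty should be scaled against penalty_param is not specified here.
import torch
import cdopt

cell = cdopt.nn.RNNCell_cdopt(10, 20, penalty_param=0.1)
x = torch.randn(3, 10)
target = torch.randn(3, 20)

h = cell(x)  # hidden state defaults to zero when not provided
task_loss = torch.nn.functional.mse_loss(h, target)
# Add the quadratic penalty ||manifold.C(weight)||^2 of the constrained weight.
loss = task_loss + cell.quad_penalty()
loss.backward()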
Example#
import torch
import cdopt

rnn = cdopt.nn.RNNCell_cdopt(10, 20)
input = torch.randn(6, 3, 10)
hx = torch.randn(3, 20)
output = []
for i in range(6):
    hx = rnn(input[i], hx)
    output.append(hx)
print(output)