Regularizers

class vega.regularizers.GelNet(lambda1, lambda2, P, d=None, lr=0.001, use_gpu=False)[source]

GelNet regularizer for a linear decoder [Sokolov2016]. If P is the identity matrix, this reduces to Elastic Net; if lambda1 is 0, to plain L2 regularization; if lambda2 is 0, to plain L1 regularization. d needs to be a {0,1}-matrix.

Needs to be used sequentially in the training loop: the differentiable L2 term is added to the loss before backpropagation, while the non-smooth L1 term is applied as a proximal update after the optimizer step.
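For reference, the penalty in the formulation of [Sokolov2016] is lambda1 * sum_ij d_ij * |W_ij| + (lambda2 / 2) * tr(W P W^T). A minimal NumPy sketch of that value, using a hypothetical gelnet_penalty helper (illustrative only; broadcasting and implementation details may differ from vega's actual code):

>>> import numpy as np
>>> def gelnet_penalty(W, P, lambda1, lambda2, d=None):
...     # Default d of all ones, i.e. no domain-knowledge masking
...     d = np.ones_like(W) if d is None else d
...     l1 = lambda1 * np.sum(d * np.abs(W))        # sparsity term
...     l2 = 0.5 * lambda2 * np.trace(W @ P @ W.T)  # network-smoothing term
...     return l1 + l2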

Example
>>> gelnet = GelNet(lambda1, lambda2, P)
>>> loss = MSE(X_hat, X)
>>> # Compute the differentiable L2 term and add it to the loss
>>> loss += gelnet.quadratic_update(self.decoder.weight)
>>> loss.backward()
>>> optimizer.step()
>>> # Apply the L1 proximal operator after the gradient step
>>> gelnet.proximal_update(self.decoder.weight)
Parameters
  • lambda1 (float) – L1-regularization coefficient

  • lambda2 (float) – L2-regularization coefficient

  • P (ndarray) – Penalty matrix (e.g., a gene network Laplacian; see the sketch after this list)

  • d (Optional[ndarray]) – Domain knowledge matrix (e.g., a mask)

  • lr (float) – Learning rate

  • use_gpu (bool) – Whether to run updates on the GPU
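A gene network Laplacian for P can be built from a binary adjacency matrix; a toy sketch (the adjacency matrix here is purely illustrative, not part of the vega API):

>>> import numpy as np
>>> # Symmetric adjacency of a toy 3-gene chain network
>>> A = np.array([[0., 1., 0.],
...               [1., 0., 1.],
...               [0., 1., 0.]])
>>> P = np.diag(A.sum(axis=1)) - A  # graph Laplacian L = D - A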

quadratic_update(weights)[source]

Computes the quadratic (L2) term of the GelNet penalty; the returned value is differentiable and can be added to the loss.

Parameters

weights – Layer’s weight matrix
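In terms of the penalty above, this plausibly corresponds to (lambda2 / 2) * tr(W P W^T); a hedged PyTorch sketch with a hypothetical quadratic_term helper (the actual implementation may differ):

>>> import torch
>>> def quadratic_term(weights, P, lambda2):
...     # (lambda2 / 2) * trace(W P W^T) — differentiable, added to the loss
...     return 0.5 * lambda2 * torch.trace(weights @ P @ weights.T)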

proximal_update(weights)[source]

Proximal operator for the sparsity-inducing L1 term.

Parameters

weights – Layer’s weight matrix
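The standard proximal operator for a (masked) L1 penalty is soft-thresholding; an illustrative in-place PyTorch sketch with a hypothetical l1_prox helper, assuming a step size of lr (the real implementation may differ):

>>> import torch
>>> def l1_prox(weights, lambda1, lr, d=None):
...     # Soft-thresholding: sign(w) * max(|w| - lr * lambda1 * d, 0), in place
...     mask = torch.as_tensor(d, dtype=weights.dtype) if d is not None else 1.0
...     thresh = lr * lambda1 * mask
...     with torch.no_grad():
...         weights.copy_(weights.sign() * (weights.abs() - thresh).clamp(min=0.0))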

class vega.regularizers.LassoRegularizer(lambda1, lr, d=None, use_gpu=False)[source]

Lasso (L1) regularizer for a linear decoder, similar to the lasso regularization of [Rybakov2020].

Parameters
  • lambda1 (float) – L1-regularization coefficient

  • lr (float) – Learning rate

  • d (Optional[ndarray]) – Domain knowledge matrix (e.g., a mask)

  • use_gpu (bool) – Whether to run updates on the GPU

quadratic_update(weights)[source]

Not applicable for Lasso; identity operation kept for interface compatibility with GelNet.

proximal_update(weights)[source]

Proximal operator for the sparsity-inducing L1 term.

Parameters

weights – Layer’s weight matrix
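By analogy with the GelNet example above, a minimal sketch of how this regularizer might slot into a training loop (variable names are illustrative):

>>> lasso = LassoRegularizer(lambda1=1e-3, lr=1e-3)
>>> loss = MSE(X_hat, X)
>>> loss.backward()
>>> optimizer.step()
>>> # Sparsify decoder weights after the gradient step
>>> lasso.proximal_update(self.decoder.weight)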