Regularizers
- class vega.regularizers.GelNet(lambda1, lambda2, P, d=None, lr=0.001, use_gpu=False)[source]
GelNet regularizer for a linear decoder [Sokolov2016]. If P is set to the identity matrix, this is Elastic Net. d needs to be a {0,1}-matrix. If lambda1 is 0, this reduces to an L2 regularization; if lambda2 is 0, it reduces to an L1 regularization. Needs to be used sequentially in the training loop.
- Example
>>> loss = MSE(X_hat, X)  # Compute reconstruction term
>>> loss += GelNet.quadratic_update(self.decoder.weight)  # Smooth L2 term
>>> loss.backward()
>>> optimizer.step()
>>> GelNet.proximal_update(self.decoder.weight)  # L1 proximal operator update
- Parameters
- lambda1 (float) – L1-regularization coefficient
- lambda2 (float) – L2-regularization coefficient
- P (ndarray) – Penalty matrix (e.g. a gene network Laplacian)
- d (Optional[ndarray]) – Domain knowledge matrix (e.g. a mask)
- lr (float) – Learning rate
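The penalty GelNet applies has two parts: a smooth quadratic term (lambda2/2) · wᵀPw added to the loss before backpropagation, and a non-smooth L1 term lambda1 · Σ d_ij |w_ij| handled by soft-thresholding after the optimizer step. The sketch below reproduces that math in plain NumPy; the function names `quadratic_penalty` and `proximal_update` and their exact signatures are illustrative assumptions, not the library's API.

```python
import numpy as np

def quadratic_penalty(W, P, lambda2):
    """Smooth part of the GelNet penalty: (lambda2 / 2) * sum_i w_i^T P w_i.

    Illustrative sketch (not vega's implementation). W has one row per
    output unit; P is the penalty matrix, e.g. a gene-network Laplacian.
    """
    return 0.5 * lambda2 * np.einsum('ij,jk,ik->', W, P, W)

def proximal_update(W, lambda1, lr, d=None):
    """L1 proximal (soft-thresholding) step applied after the gradient step.

    With a {0,1} mask d, only entries where d == 1 are shrunk.
    """
    thresh = lr * lambda1 * (d if d is not None else 1.0)
    return np.sign(W) * np.maximum(np.abs(W) - thresh, 0.0)
```

With P set to the identity matrix, `quadratic_penalty` is an ordinary ridge term, which is why the P = Identity case recovers Elastic Net.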
- class vega.regularizers.LassoRegularizer(lambda1, lr, d=None, use_gpu=False)[source]
Lasso (L1) regularizer for a linear decoder. Similar to the lasso regularization of [Rybakov2020].
- Parameters
- lambda1 (float) – L1-regularization coefficient
- d (Optional[ndarray]) – Domain knowledge matrix (e.g. a mask)
- lr (float) – Learning rate
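The role of the mask d can be shown with a small sketch: entries where d is 1 are soft-thresholded toward zero, while entries where d is 0 are left untouched. This is an illustrative NumPy sketch of masked L1 shrinkage under that assumption, not vega's implementation; the helper name `masked_soft_threshold` is hypothetical.

```python
import numpy as np

def masked_soft_threshold(W, lambda1, lr, d=None):
    """Masked L1 proximal step: shrink W only where the {0,1} mask d is 1.

    Illustrative sketch; with d=None every entry is penalized.
    """
    thresh = lr * lambda1 * (d if d is not None else 1.0)
    return np.sign(W) * np.maximum(np.abs(W) - thresh, 0.0)
```

In VEGA-style models this lets domain knowledge decide which decoder weights are candidates for sparsification and which are protected from the penalty.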