The Azimuth Project

Bilinear regression



In statistics/machine learning the individual samples often come in the form of 2-D arrays, e.g., a set of population counts of different species (one axis) at different points in time (second axis). Standard regression collapses these arrays into vectors and thus loses the array structure during the regression process. Bilinear regression exploits this structure by treating each sample as a matrix.


Basic model

The bilinear predictor function takes the form

(1) f(X) = tr(U^T X V) + b = \sum_{i=1:m} u_i^T X v_i + b
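As a small sketch of Eq. (1) in NumPy (the shapes and names here are my own choice, not from the source): if the columns of U and V are the vectors u_i and v_i, the trace form and the sum form give the same value.

```python
import numpy as np

def bilinear_predict(X, U, V, b):
    """Bilinear predictor f(X) = tr(U^T X V) + b from Eq. (1).

    X : (p, q) sample matrix
    U : (p, m) matrix whose columns are the u_i
    V : (q, m) matrix whose columns are the v_i
    b : scalar bias
    """
    return np.trace(U.T @ X @ V) + b
```

The equivalence holds because U^T X V has entries u_i^T X v_k, so its trace picks out exactly the diagonal terms u_i^T X v_i.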

When performing regularised fitting the score function is

(2) E = \sum_{j=1:n} \left(\sum_{i=1:m} u_i^T X_j v_i + b - y_j\right)^2 + \lambda \sum_{i=1:m} \left( R(u_i) + R(v_i) \right)

where \lambda is the regularization strength and R(\cdot) is the regularization function.
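A minimal NumPy sketch of the score in Eq. (2), assuming for concreteness a squared-L2 regularizer R(w) = ||w||^2 (the function name and argument layout are illustrative, not from the source):

```python
import numpy as np

def score(Xs, ys, U, V, b, lam, R=lambda w: np.sum(w**2)):
    """Regularised squared-error score E from Eq. (2).

    Xs  : list of (p, q) sample matrices X_j
    ys  : length-n targets y_j
    U   : (p, m) columns u_i;  V : (q, m) columns v_i
    lam : regularization strength lambda
    R   : regularization function (squared L2 norm by default)
    """
    residuals = np.array(
        [np.trace(U.T @ X @ V) + b - y for X, y in zip(Xs, ys)]
    )
    reg = sum(R(U[:, i]) + R(V[:, i]) for i in range(U.shape[1]))
    return np.sum(residuals**2) + lam * reg
```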

The derivatives are

(3) \frac{\partial E}{\partial v_I} = \sum_{j=1:n} 2\left(\sum_{i=1:m} u_i^T X_j v_i + b - y_j\right) (u_I^T X_j)^T + \lambda \frac{\partial R(v_I)}{\partial v_I}


(4) \frac{\partial E}{\partial u_I} = \sum_{j=1:n} 2\left(\sum_{i=1:m} u_i^T X_j v_i + b - y_j\right) X_j v_I + \lambda \frac{\partial R(u_I)}{\partial u_I}
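The gradients in Eqs. (3) and (4) can be collected column-wise: stacking the u_I as columns of U gives \partial E/\partial U = \sum_j 2 r_j X_j V (plus the regularizer term), and similarly \partial E/\partial V = \sum_j 2 r_j X_j^T U, where r_j is the residual. A sketch assuming R(w) = ||w||^2 (so \partial R/\partial w = 2w); all names here are illustrative:

```python
import numpy as np

def grads(Xs, ys, U, V, b, lam):
    """Analytic gradients of E for R(w) = ||w||^2.

    Returns (dU, dV, db), where column I of dU is Eq. (4)
    and column I of dV is Eq. (3), using (u_I^T X_j)^T = X_j^T u_I.
    """
    dU = np.zeros_like(U)
    dV = np.zeros_like(V)
    db = 0.0
    for X, y in zip(Xs, ys):
        r = np.trace(U.T @ X @ V) + b - y   # residual for sample j
        dU += 2 * r * (X @ V)               # Eq. (4): 2 r_j X_j v_I per column
        dV += 2 * r * (X.T @ U)             # Eq. (3): 2 r_j X_j^T u_I per column
        db += 2 * r
    dU += lam * 2 * U                        # dR/du_I = 2 u_I
    dV += lam * 2 * V                        # dR/dv_I = 2 v_I
    return dU, dV, db
```

These gradients can be checked against finite differences of the score, which is a useful sanity test before plugging them into a gradient-descent or alternating-minimization loop.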