I am looking for a guide on matrix approximation of pretrained models. My idea is related to transfer learning: I want to take the weights and biases of one layer of a pretrained model, approximate that weight matrix with a matrix of a different shape, and deploy the approximation in my new model as a frozen layer. Do you have any recommendations for research papers I could look into?
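To make the idea concrete, here is a minimal sketch of one way I imagine this working, assuming the "different shape" comes from a rank-r truncated SVD (the shapes, the rank `r`, the use of PyTorch, and the `state_dict` key in the comment are all illustrative, not something I have settled on):

```python
import torch

# Stand-in for a pretrained layer's weight matrix; in practice this would be
# loaded from a checkpoint, e.g. model.state_dict()["encoder.fc.weight"]
# (hypothetical key). Shapes here are illustrative.
W = torch.randn(768, 3072)

# Rank-r truncated SVD. By the Eckart-Young theorem, U_r diag(S_r) Vh_r is
# the best rank-r approximation of W in the Frobenius norm.
r = 64
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]   # (768, r): left factor, singular values folded in
B = Vh[:r, :]          # (r, 3072): right factor

# Deploy the two factors as frozen linear layers: x @ A @ B approximates
# x @ W, while the parameter count drops from 768*3072 to r*(768 + 3072).
lin_a = torch.nn.Linear(768, r, bias=False)
lin_b = torch.nn.Linear(r, 3072, bias=False)
with torch.no_grad():
    lin_a.weight.copy_(A.T)  # nn.Linear stores weights as (out_features, in_features)
    lin_b.weight.copy_(B.T)
for p in list(lin_a.parameters()) + list(lin_b.parameters()):
    p.requires_grad = False

# Sanity check: the factored layers reproduce the rank-r approximation.
x = torch.randn(5, 768)
assert torch.allclose(lin_b(lin_a(x)), x @ A @ B, atol=1e-4)
```

This is only a sketch of the general direction; whether something like it preserves the properties I care about (e.g. a meaningful latent space) is exactly what I am hoping the literature can answer.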
- Can you clarify why you think this is possible, and why you want to do it? You tagged this with SVD, but that doesn't seem to be the same thing you're looking for. – nbro May 29 '23 at 22:03
- I am not sure it is possible. I was thinking conceptually about low-rank approximation (as in LoRA), but I have never studied how it works in detail. I would like to do this because transfer learning gives me some nice properties for my matrix, such as a meaningful latent space, if I choose the right matrix to approximate. I will provide more details tomorrow. – postnubilaphoebus May 29 '23 at 22:28