
Neural networks are inherently representation learners, so one could simply extract the last-layer embedding $\textbf{z} \in \mathbb{R}^d$ of a neural network and treat it as a representation of the raw input. But in a supervised ML framework, this representation is shaped only to optimize the network's predictions. I'm wondering what the benefits are of an explicit representation learning task, such as contrastive learning, and why it might be preferred over simply taking the last layer of the network.
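To make the distinction concrete, here is a minimal NumPy sketch of a contrastive objective in the style of InfoNCE/NT-Xent (as used in SimCLR): it pulls together embeddings of two views of the same input and pushes apart embeddings of different inputs, independent of any label-prediction head. All names and the toy data below are illustrative, not from the question.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Simplified InfoNCE-style loss between two views.

    z1, z2: (n, d) embeddings of two augmentations of the same n inputs;
    row i of z1 and row i of z2 form the positive pair.
    """
    # L2-normalize rows so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # (n, n) similarity matrix
    # Positive pairs sit on the diagonal; all other entries act as negatives.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = z + 0.01 * rng.normal(size=z.shape)  # "augmented" view close to z
unrelated = rng.normal(size=z.shape)           # view with no correspondence
# Aligned views yield a lower contrastive loss than unrelated ones
print(info_nce_loss(z, aligned) < info_nce_loss(z, unrelated))
```

The point of the sketch is that the loss acts directly on the geometry of $\textbf{z}$ (similarity structure between inputs), whereas a supervised cross-entropy loss only constrains $\textbf{z}$ through whatever the classification head needs.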

James Arten
  • I don't see any question here. Can you edit your post to ask a **specific** question, which you should put in the title too? – nbro Jan 25 '23 at 22:15

0 Answers