Questions tagged [self-supervised-learning]

For questions related to self-supervised learning (SSL), which typically refers to techniques that automatically generate the supervisory signal from the data itself. SSL can be used for representation learning, so it is also useful for transfer learning. Some people consider SSL a sub-field of unsupervised learning, given that many (if not all) SSL techniques do not require a human to manually annotate the inputs.

See the question "What is self-supervision in machine learning?" for more info.

26 questions
95 votes · 3 answers

What is self-supervised learning in machine learning?

What is self-supervised learning in machine learning? How is it different from supervised learning?
13 votes · 2 answers

How are generative adversarial networks trained?

I am reading about generative adversarial networks (GANs) and I have some doubts regarding them. So far, I understand that a GAN consists of two different types of neural networks: one generative ($G$) and the other discriminative ($D$). The…
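
As background for this question, here is a minimal, self-contained sketch of the alternating GAN training loop (PyTorch assumed; the toy data distribution and tiny MLPs are stand-ins for illustration, not part of the original question):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

noise_dim, data_dim = 8, 2
G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = 0.5 * torch.randn(64, data_dim) + 3.0      # toy "real" distribution
    ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

    # 1) Update D: push D(real) toward 1 and D(fake) toward 0.
    fake = G(torch.randn(64, noise_dim)).detach()     # detach: G is frozen here
    loss_d = (F.binary_cross_entropy(D(real), ones)
              + F.binary_cross_entropy(D(fake), zeros))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Update G: push D(G(z)) toward 1 (the non-saturating generator loss).
    loss_g = F.binary_cross_entropy(D(G(torch.randn(64, noise_dim))), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

The key point is the alternation: $D$ is updated with $G$ frozen (via `detach`), then $G$ is updated through $D$'s judgment of its samples.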
12 votes · 3 answers

What is the relation between semi-supervised and self-supervised visual representation learning?

What are the differences between semi-supervised learning and self-supervised visual representation learning, and how are they connected?
11 votes · 3 answers

What is the difference between self-supervised and unsupervised learning?

What is the difference between self-supervised and unsupervised learning? The terms logically overlap (and maybe self-supervised learning is a subset of unsupervised learning?), but I cannot pinpoint exactly what the difference is. What are the…
5 votes · 2 answers

How to understand the concept of self-supervised learning in AI?

I am new to self-supervised learning and it all seems a little magical at the moment. The only way I can get an intuitive understanding is to assume that, for real-world problems, features are still embedded at a per-object level. For example, to…
3 votes · 1 answer

How to generate labels for self-supervised training?

I've been reading a lot lately about self-supervised learning, and I don't understand very well how to generate the desired label for a given image. Let's say that I have an image classification task, and I have very little labeled data. How can I…
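
One common answer, sketched below, is a pretext task whose labels fall out of a transformation you apply yourself; rotation prediction (as in Gidaris et al., 2018) is a standard example (PyTorch assumed):

```python
import torch

def rotation_batch(images: torch.Tensor):
    """images: (N, C, H, W) -> (4N, C, H, W) rotated copies and (4N,) labels."""
    rotated, labels = [], []
    for k in range(4):                       # k quarter-turns: 0/90/180/270 deg
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

x = torch.randn(8, 3, 32, 32)                # stand-in for a batch of images
inputs, targets = rotation_batch(x)          # 32 inputs, labels in {0, 1, 2, 3}
# Any classifier can be trained with cross-entropy on these free labels, and
# its backbone then fine-tuned on the small labeled dataset.
```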
3 votes · 1 answer

Does self-supervised learning require auxiliary tasks?

Self-supervised learning algorithms provide labels automatically, but it is not clear what else is required for an algorithm to fall under the category "self-supervised". Some say self-supervised learning algorithms learn on a set of auxiliary…
2 votes · 1 answer

Perform clustering on high dimensional data

Recently I trained a BYOL model on a set of images to learn an embedding space where similar vectors are close together. The performance was fantastic when I performed an approximate k-nearest-neighbour search. Now the next task, where I am facing a problem…
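
A minimal sketch of one standard route, assuming the BYOL encoder has already produced an (N, D) embedding matrix: L2-normalise (matching the cosine geometry of the k-NN search), optionally reduce dimensionality with PCA, then run k-means (scikit-learn assumed; the random matrix stands in for real embeddings):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize

embeddings = np.random.randn(1000, 256).astype("float32")  # stand-in for BYOL outputs

z = normalize(embeddings)                  # unit norm: cosine geometry, as in k-NN
z = PCA(n_components=50).fit_transform(z)  # optional: reduce dimensionality first
labels = KMeans(n_clusters=10, n_init=10).fit_predict(z)   # cluster id per image
```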
2 votes · 2 answers

Is it realistic to train a transformer-based model (e.g. GPT) in a self-supervised way directly on the Mel spectrogram?

In music information retrieval, one usually converts an audio signal into some kind of "sequence of frequency vectors", such as an STFT or Mel spectrogram. I'm wondering if it is a good idea to use the transformer architecture in a self-supervised manner…
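
A minimal sketch of the input pipeline such an approach implies (librosa and PyTorch assumed; a synthetic sine wave stands in for real music): compute a Mel spectrogram and treat each time frame as one "token" vector for a standard transformer encoder.

```python
import numpy as np
import torch
import torch.nn as nn
import librosa

sr = 22050
t = np.arange(sr * 2) / sr
y = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)   # 2 s sine, stand-in for music

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel = librosa.power_to_db(mel)                          # (n_mels, n_frames)

tokens = torch.from_numpy(mel.T).float().unsqueeze(0)   # (1, n_frames, 128):
                                                        # one token per time frame
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=128, nhead=8, batch_first=True),
    num_layers=4,
)
hidden = encoder(tokens)  # a self-supervised objective (e.g. masked-frame or
                          # next-frame prediction) would add a loss on top
```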
2 votes · 0 answers

Does Yann LeCun consider k-means self-supervised learning?

I was discussing the topic of self-supervised learning with a colleague. After a while we realized we were using different definitions. That's never helpful. Both of us were introduced to self-supervised learning by reading or listening to Yann…
2 votes · 1 answer

What are some most promising ways to approximate common sense and background knowledge?

I learned from the blog post "Self-Supervised Learning: The Dark Matter of Intelligence" that "We believe that self-supervised learning (SSL) is one of the most promising ways to build such background knowledge and approximate a form of common sense…"
2 votes · 1 answer

Is it possible to use self-supervised learning on different images for the pretext and downstream tasks?

I have just come across the idea of self-supervised learning. It seems that it is possible to get higher accuracies on downstream tasks when the network is trained on pretext tasks. Suppose that I want to do image classification on my own set of…
2 votes · 1 answer

Is it possible to pre-train a CNN in a self-supervised way so that it can later be used to solve an instance segmentation task?

I would like to use self-supervised learning (SSL) to learn features from images (the dataset consists of similar images with small differences), then use the resulting trained model to bootstrap an instance segmentation task. I am thinking about…
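
A minimal sketch of what this could look like, assuming PyTorch/torchvision (≥ 0.13) and a hypothetical SSL checkpoint `ssl_resnet50.pth` produced by, e.g., SimCLR or BYOL:

```python
import torch
import torchvision

# Hypothetical checkpoint produced by an SSL method such as SimCLR or BYOL.
ssl_state = torch.load("ssl_resnet50.pth", map_location="cpu")

backbone = torchvision.models.resnet50()
backbone.load_state_dict(ssl_state, strict=False)      # projection-head keys,
                                                       # if any, are ignored

model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights=None, weights_backbone=None
)
# The detector's backbone body reuses ResNet's module names, so the SSL
# weights copy straight over; strict=False skips the missing fc/avgpool keys.
model.backbone.body.load_state_dict(backbone.state_dict(), strict=False)
# `model` can now be fine-tuned on the (small) instance-segmentation dataset.
```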
1 vote · 1 answer

Definition of negatives in NT-Xent loss

I'm trying to understand a few details about the NT-Xent loss defined in the SimCLR paper (link). The loss is defined as $$\ell_{i,j} = -\log\frac{\exp(\mathrm{sim}(z_i,z_j)/\tau)}{\sum_{k=1}^{2N}\mathbb{1}_{[k\neq i]} \exp(\mathrm{sim}(z_i,z_k)/\tau)}$$ where $z_i$ and…
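
For concreteness, here is a minimal sketch of this loss (PyTorch assumed). It adopts the paper's convention that, for a batch of $N$ examples, the $2N$ projections are interleaved so that rows $2i$ and $2i+1$ of $z$ form a positive pair; every other row in the batch serves as a negative for anchor $i$:

```python
import torch
import torch.nn.functional as F

def nt_xent(z: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z: (2N, D), rows 2i and 2i+1 are the two views of example i."""
    z = F.normalize(z, dim=1)              # so z @ z.T is cosine similarity
    sim = z @ z.t() / tau                  # (2N, 2N) matrix of sim(z_i, z_k)/tau
    sim.fill_diagonal_(float("-inf"))      # implements the 1[k != i] indicator
    pos = torch.arange(z.size(0)) ^ 1      # partner index: 0<->1, 2<->3, ...
    # Cross-entropy over each row is exactly -log softmax at the positive,
    # i.e. l_{i,j} above, averaged over all 2N anchors.
    return F.cross_entropy(sim, pos)

z = torch.randn(8, 128)                    # 2N = 8 projections, i.e. N = 4 pairs
loss = nt_xent(z)
```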
1 vote · 0 answers

Should we consider the prototypical forecasting task as self-supervised learning?

In NLP, the task of "predicting the next word" is an example of self-supervised learning. An essential part is that the label can be computed programmatically and does not require explicit human effort. Typically, this task is not an end in itself,…
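
A minimal sketch of that programmatic label computation: the targets are just the input tokens shifted by one position, so no human annotation is needed.

```python
tokens = "the cat sat on the mat".split()

inputs  = tokens[:-1]                  # ['the', 'cat', 'sat', 'on', 'the']
targets = tokens[1:]                   # ['cat', 'sat', 'on', 'the', 'mat']
pairs   = list(zip(inputs, targets))   # (context word, next word), no annotator
```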