I have seen in the literature that models such as DANN or ADDA are typical of the field of domain adaptation, a branch of transductive transfer learning. I understand that these methods are especially useful when we want to perform the same task on an unlabelled target domain as on the labelled source.

One question that comes to mind: if both the source and target domains have labels, then instead of doing adaptation, is it possible to naively merge the two datasets and train in the ordinary supervised learning setup?
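
To make the setup concrete, here is a minimal sketch of what I mean by naive merging, assuming torchvision's MNIST and USPS loaders and resizing both domains to a common input size (the network and hyperparameters are just placeholders, not a claim about what anyone actually uses):

```python
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

# Resize both domains to a common 28x28 grayscale input.
tfm = transforms.Compose([transforms.Resize((28, 28)), transforms.ToTensor()])

mnist = datasets.MNIST("data", train=True, download=True, transform=tfm)
usps = datasets.USPS("data", train=True, download=True, transform=tfm)

# Naive merge: simply concatenate the two labelled datasets.
merged = ConcatDataset([mnist, usps])
loader = DataLoader(merged, batch_size=128, shuffle=True)

# An ordinary classifier trained with plain cross-entropy, no adaptation.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for x, y in loader:  # one epoch of standard supervised training
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```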

My first thought was that DANN's purpose is to extract a somewhat universal feature representation that generalizes across both domains. If we just naively merge the datasets, the model might instead learn to identify each sample's originating domain and condition its prediction on that; in other words, it could internally infer the domain from the feature vector and then make its prediction.
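
For contrast, my understanding of DANN is that it adds a domain discriminator trained through a gradient reversal layer, so the feature extractor is explicitly pushed to make the two domains indistinguishable. A minimal sketch of that idea (layer sizes are hypothetical, continuing the PyTorch setup above):

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows into the feature extractor; None for lambd.
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
label_classifier = nn.Linear(256, 10)   # trained on class labels as usual
domain_classifier = nn.Linear(256, 2)   # predicts MNIST vs USPS

def dann_losses(x, y, domain, lambd=1.0):
    feat = feature_extractor(x)
    # Standard supervised loss on the class labels.
    cls_loss = nn.functional.cross_entropy(label_classifier(feat), y)
    # Adversarial loss: the discriminator tries to identify the domain, while
    # the reversed gradient pushes the features to carry no domain information.
    dom_loss = nn.functional.cross_entropy(
        domain_classifier(GradReverse.apply(feat, lambd)), domain)
    return cls_loss + dom_loss
```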

However, suppose MNIST is the source domain and USPS is the target. We can build a DANN network to produce a domain-invariant feature vector, which indeed seems logically convincing. But in the end, our goal of classifying the digit labels rests on features derived from the image. How can we tell that internally classifying a sample's domain and then performing the task is really any different from the domain-invariant approach just described? And if it is not, why bother with adversarial training?

I am definitely new to this field, so I may have used non-standard terminology or made errors in my reasoning. Thank you very much in advance.

Haneul