Recently I have been reading the literature on domain adaptation. However, most of the work considers scenarios where there is some labelled data in the source domain. Is there an unsupervised approach to domain adaptation when only unlabelled data is available in both the source and target domains?

In particular, if a language model (e.g. BERT or RoBERTa) was pre-trained with unsupervised tasks (e.g. masked language modelling) on general-domain text, what can be done to adapt it to a new domain?
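For concreteness, here is a rough sketch of what I imagine continued masked-language-model pretraining on unlabelled in-domain text would look like, using the Hugging Face `transformers` API (the corpus file name and hyperparameters are placeholders, not from any particular paper):

```python
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Start from a general-domain pre-trained checkpoint.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Hypothetical unlabelled in-domain corpus: one raw text per line.
dataset = load_dataset("text", data_files={"train": "in_domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking for the MLM objective (15% of tokens, as in BERT/RoBERTa).
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="roberta-domain-adapted",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model, args=args, train_dataset=tokenized, data_collator=collator
)
trainer.train()
```

Is simply continuing pretraining on in-domain text like this the standard approach, or are there better unsupervised adaptation techniques?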
