
This problem is about the two-oracle variant of the PAC model. Assume that positive and negative examples are now drawn from two separate distributions $\mathcal{D}_{+}$ and $\mathcal{D}_{-}$. For an accuracy $(1-\epsilon)$, the learning algorithm must find a hypothesis $h$ such that

$$\underset{x \sim \mathcal{D}_{+}}{\mathbb{P}}[h(x)=0] \leq \epsilon \quad \text{and} \quad \underset{x \sim \mathcal{D}_{-}}{\mathbb{P}}[h(x)=1] \leq \epsilon.$$

Thus, the hypothesis must have a small error on both distributions. Let $\mathcal{C}$ be any concept class and $\mathcal{H}$ be any hypothesis space. Let $h_{0}$ and $h_{1}$ represent the identically 0 and identically 1 functions, respectively. Prove that $\mathcal{C}$ is efficiently PAC-learnable using $\mathcal{H}$ in the standard (one-oracle) PAC model if and only if it is efficiently PAC-learnable using $\mathcal{H} \cup\left\{h_{0}, h_{1}\right\}$ in this two-oracle PAC model.
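
For reference, here is how I understand the easier direction (standard one-oracle model $\Rightarrow$ two-oracle model); this is my own sketch, not a quotation of the official solution. Simulate a single oracle by flipping a fair coin and drawing from $\mathcal{D}_{+}$ or $\mathcal{D}_{-}$, i.e. run the standard learner with accuracy parameter $\epsilon/2$ on

$$\mathcal{D} = \tfrac{1}{2}\,\mathcal{D}_{+} + \tfrac{1}{2}\,\mathcal{D}_{-}.$$

For the returned $h \in \mathcal{H}$ and target concept $c$,

$$\underset{x \sim \mathcal{D}_{+}}{\mathbb{P}}[h(x)=0] \leq 2 \underset{x \sim \mathcal{D}}{\mathbb{P}}[h(x) \neq c(x)] \leq \epsilon,$$

and similarly for $\mathcal{D}_{-}$, so this direction does not even need $h_{0}$ or $h_{1}$. My question is about the other direction.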

However, I wonder if the problem is correct. In the official solution, when showing that two-oracle learnability implies one-oracle learnability, the author returns $h_0$ or $h_1$ when the distribution is too biased towards positive or negative examples. However, the problem only allows us to return $h_0$ and $h_1$ in the two-oracle case, not in the one-oracle case. Therefore, in this too-biased case, it seems that a 'good' hypothesis in $\mathcal{H}$ may not exist at all.
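
For concreteness, here is the decomposition step as I understand it (my notation; I am assuming it matches the official solution). Write $p = \underset{x \sim \mathcal{D}}{\mathbb{P}}[c(x)=1]$ for the weight of positive examples under the single-oracle distribution $\mathcal{D}$, and let $\mathcal{D}_{+}$ and $\mathcal{D}_{-}$ be the conditional distributions on positive and negative examples, so that

$$\mathcal{D} = p\,\mathcal{D}_{+} + (1-p)\,\mathcal{D}_{-}.$$

When, say, $p \leq \epsilon$, the solution returns $h_{0}$, which indeed has small error under $\mathcal{D}$:

$$\underset{x \sim \mathcal{D}}{\mathbb{P}}[h_{0}(x) \neq c(x)] = \underset{x \sim \mathcal{D}}{\mathbb{P}}[c(x)=1] = p \leq \epsilon.$$

But $h_{0}$ need not belong to $\mathcal{H}$, and in the one-oracle direction the learner is only supposed to output hypotheses from $\mathcal{H}$.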

Is the problem wrong, or am I making a mistake somewhere?

  • Can we decompose probabilities as done in the 'so-called' proof, i.e., from $D$ into $D^{+}$ and $D^{-}$? – Mar 17 '20 at 22:48
