Questions tagged [pac-learning]

For questions related to Probably Approximately Correct (PAC) learning, a framework for the mathematical analysis of machine learning algorithms, introduced in the paper "A Theory of the Learnable" (1984) by Leslie G. Valiant.


15 questions
8 votes · 1 answer

What are some resources on computational learning theory?

Pretty soon I will be finishing up Understanding Machine Learning: From Theory to Algorithms by Shai Ben-David and Shai Shalev-Shwartz. I absolutely love the subject and want to learn more; the only issue is I'm having trouble finding a book that…
7 votes · 3 answers

Are PAC learnability and the No Free Lunch theorem contradictory?

I am reading the Understanding Machine Learning book by Shalev-Shwartz and Ben-David, and based on the definitions of PAC learnability and the No Free Lunch theorem, and my understanding of them, it seems like they contradict each other. I know this is…
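
For reference, the No Free Lunch theorem as stated in that book (Theorem 5.1; the exact constants are quoted from memory) says that for every learner $A$ and every sample size $m \le |\mathcal{X}|/2$ there is a distribution $\mathcal{D}$ on which $A$ fails:
$$\Pr_{S \sim \mathcal{D}^m}\!\left[ L_{\mathcal{D}}(A(S)) \ge \tfrac{1}{8} \right] \;\ge\; \tfrac{1}{7}.$$
The apparent tension dissolves because PAC guarantees are always relative to a fixed hypothesis class $\mathcal{H}$ of finite VC dimension, while the NFL construction quantifies over all labelings of the domain, a "class" of infinite VC dimension.
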
7 votes · 1 answer

Are PAC learning and VC dimension relevant to machine learning in practice?

Are PAC learning and VC dimension relevant to machine learning in practice? If yes, what is their practical value? To my understanding, there are two strikes against these theories. The first is that the results are all conditioned on knowing the…
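
For context, the kind of guarantee at issue is the distribution-free VC generalization bound: up to constants, for a class $\mathcal{H}$ of VC dimension $d$, with probability at least $1-\delta$ over an i.i.d. sample of size $m$, every $h \in \mathcal{H}$ satisfies
$$L_{\mathcal{D}}(h) \;\le\; L_S(h) + O\!\left(\sqrt{\frac{d\log(m/d) + \log(1/\delta)}{m}}\right).$$
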
4 votes · 2 answers

Why does estimation error increase with $|H|$ and decrease with $m$ in PAC learning?

Why does estimation error increase with $|H|$ and decrease with $m$ in PAC learning? I came across this statement in Section 5.2 of the book "Understanding Machine Learning: From Theory to Algorithms". You can just search for "increases…
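
The behaviour being asked about follows from the standard agnostic bound for finite classes: with probability at least $1-\delta$, the ERM hypothesis $h_S$ satisfies
$$L_{\mathcal{D}}(h_S) - \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h) \;\le\; \sqrt{\frac{2\log(2|\mathcal{H}|/\delta)}{m}},$$
so the estimation error grows logarithmically with $|\mathcal{H}|$ and shrinks like $1/\sqrt{m}$.
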
3 votes · 1 answer

Is there a notion of generalization in unsupervised learning?

I've been learning a little bit about generalization theory, in particular the PAC (and PAC-Bayes) approach to thinking about this problem. So I started to wonder: is there an analogous notion of "generalization" in unsupervised learning?…
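
Since the excerpt mentions PAC-Bayes, one standard form (McAllester-style, as presented in Shalev-Shwartz and Ben-David) is: for a prior $P$ fixed before seeing the data, with probability at least $1-\delta$, every posterior $Q$ satisfies
$$L_{\mathcal{D}}(Q) \;\le\; L_S(Q) + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(m/\delta)}{2(m-1)}}.$$
Nothing in the statement requires the loss to be supervised, which is one reason PAC-Bayes is a natural starting point for an unsupervised notion of generalization.
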
3 votes · 0 answers

How to show when the inequalities in Sauer's lemma are strict or hold with equality?

I have the following homework. We proved Sauer's lemma by proving that for every class $\mathcal{H}$ of finite VC-dimension $d$, and every subset $A$ of the domain, $$\left|\mathcal{H}_{A}\right| \leq \left|\{B \subseteq A : \mathcal{H} \text{ shatters } B\}\right|…
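
For reference, combining the displayed inequality with the fact that $\mathcal{H}$ can only shatter sets of size at most $d$ yields the usual form of Sauer's lemma:
$$\left|\mathcal{H}_{A}\right| \;\le\; \sum_{i=0}^{d} \binom{|A|}{i} \;\le\; \left(\frac{e\,|A|}{d}\right)^{d} \quad \text{for } |A| \ge d \ge 1.$$
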
2 votes · 1 answer

What is the relevance of the concept size to the time constraints in PAC learning?

My question is about the relevance of concept size to the polynomial-time/example constraints in efficient PAC learning. To ask my question precisely I must first give some definitions. Define the input space as $X_n = \left\{…
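
For orientation, the standard requirement in efficient PAC learning (in the style of Kearns and Vazirani) is that the learner's sample size and running time be polynomial in all the relevant parameters,
$$\mathrm{poly}\!\left(\frac{1}{\epsilon},\; \frac{1}{\delta},\; n,\; \mathrm{size}(c)\right),$$
where $\mathrm{size}(c)$ is the representation size of the target concept; this is exactly where concept size enters the time constraints.
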
2 votes · 1 answer

Is there any practical application of knowing whether a concept class is PAC-learnable?

A concept class $C$ is PAC-learnable if there exists an algorithm that, with probability at least $1-\delta$ (the "probably" part), outputs a hypothesis with error at most $\epsilon$ (the "approximately" part), in time that is…
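
Written out, the guarantee in the excerpt is: for every distribution $\mathcal{D}$, every target $c \in C$, and every $\epsilon, \delta \in (0,1)$,
$$\Pr_{S \sim \mathcal{D}^m}\!\left[\, \mathrm{err}_{\mathcal{D}}(A(S)) \le \epsilon \,\right] \;\ge\; 1-\delta,$$
with the sample size $m$ (and, for efficient PAC learning, the running time) polynomial in $1/\epsilon$ and $1/\delta$.
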
2 votes · 0 answers

What is the relationship between PAC learning and classic parameter estimation theorems?

What are the differences and similarities between PAC learning and classic parameter estimation theorems (e.g., consistency results for estimators such as the MLE)?
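
One way to frame the contrast: PAC-style results are finite-sample and uniform over a hypothesis class,
$$\Pr\!\left[\, \sup_{h \in \mathcal{H}} \left| L_{\mathcal{D}}(h) - L_S(h) \right| > \epsilon \,\right] \;\le\; \delta \quad \text{for all } m \ge m(\epsilon, \delta),$$
whereas classical consistency is an asymptotic, pointwise statement about one estimator, e.g. $\hat{\theta}_m \xrightarrow{p} \theta^{*}$ as $m \to \infty$.
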
2 votes · 0 answers

A problem about the relation between the 1-oracle and 2-oracle PAC models

This problem is about the two-oracle variant of the PAC model. Assume that positive and negative examples are now drawn from two separate distributions $\mathcal{D}_{+}$ and $\mathcal{D}_{-}$. For an accuracy $(1-\epsilon)$, the learning algorithm must…
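
In the two-oracle model the usual requirement is that the hypothesis be accurate under both distributions simultaneously,
$$\mathrm{err}_{\mathcal{D}_{+}}(h) \;\le\; \epsilon \quad \text{and} \quad \mathrm{err}_{\mathcal{D}_{-}}(h) \;\le\; \epsilon,$$
i.e. at most $\epsilon$ false negatives under $\mathcal{D}_{+}$ and at most $\epsilon$ false positives under $\mathcal{D}_{-}$, rather than a single error bound under one mixed distribution.
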
2 votes · 0 answers

Convert a PAC-learning algorithm into another one which requires no knowledge of the parameter

This is part of Exercise 2.13 in the book Foundations of Machine Learning (page 28). You can refer to chapter 2 for the notation. Consider a family of concept classes $\left\{\mathcal{C}_{s}\right\}_{s}$ where $\mathcal{C}_{s}$ is the set of…
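
A common technique for this kind of reduction (a sketch under assumptions, not necessarily the book's intended solution) is guess-and-double with validation: try $s = 2, 4, 8, \ldots$, and after each run test the returned hypothesis on a fresh sample. The names learn, draw, and empirical_error below are hypothetical stand-ins, and the constants are illustrative.

    import math

    def learn_without_s(learn, draw, empirical_error, eps, delta):
        """Run a PAC learner that needs a size parameter s, without knowing s.

        learn(s, eps, delta) -> hypothesis   (the given algorithm; hypothetical API)
        draw(k)              -> k fresh labelled examples
        empirical_error(h, sample) -> fraction of sample misclassified by h
        """
        i = 0
        while True:
            i += 1
            guess = 2 ** i                    # guessed size parameter
            delta_i = delta / (2 ** (i + 1))  # failure probabilities sum to < delta
            h = learn(s=guess, eps=eps / 2, delta=delta_i / 2)
            # Validate on a fresh sample: by a multiplicative Chernoff bound,
            # on the order of this many examples separate "true error <= eps/2"
            # from "true error > eps" with high probability.
            k = math.ceil((8 / eps) * math.log(2 / delta_i))
            if empirical_error(h, draw(k)) <= 3 * eps / 4:
                return h  # accepted: with probability >= 1 - delta, err(h) <= eps

Splitting the per-iteration confidence as $\delta/2^{i+1}$ keeps the total failure probability below $\delta$ no matter how many doublings are needed.
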
1 vote · 0 answers

Where can I find the solutions to the problems in the book "An Introduction to Computational Learning Theory"?

I have been going through "An Introduction to Computational Learning Theory" (Kearns and Vazirani). I don't know if my solutions to the problems are correct, and I have no other way of checking my work. How would I find solutions to compare to, or…
1 vote · 0 answers

Characterize the high-probability bound for a learning algorithm

Suppose we have a dataset $S = (x_1, \dots, x_n)$ drawn i.i.d. from a distribution $D$, a learning algorithm $A$, and an error function $err$. The performance of $A$ is therefore defined by the error/confidence pair $(\alpha, \beta)$, that is, $$P(err(A(S))…
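
The excerpt is cut off, but a pair $(\alpha, \beta)$ of this kind is typically read as $P(err(A(S)) \le \alpha) \ge 1 - \beta$. One generic way to obtain such a pair from an in-expectation guarantee is Markov's inequality: since $err \ge 0$,
$$P\!\left(err(A(S)) \ge \frac{\mathbb{E}[err(A(S))]}{\beta}\right) \;\le\; \beta,$$
i.e. $\alpha = \mathbb{E}[err(A(S))]/\beta$ always works, though stronger concentration results give much better trade-offs.
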
1 vote · 1 answer

What is the expectation of an empirical model in model-based RL?

In the paper "Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems", on page 1083, on the 6th line from the bottom, the authors define the expectation of the empirical model…
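
Without the paper at hand, the empirical model in question is presumably the usual count-based maximum-likelihood estimate (an assumption, not a quote from the paper):
$$\hat{P}(s' \mid s, a) \;=\; \frac{N(s, a, s')}{N(s, a)},$$
where $N(\cdot)$ are visit counts; its expectation is then taken over the randomness of the observed transitions.
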
1 vote · 1 answer

Prove that in such cases, it is possible to find an ERM hypothesis for $H_n$ in the unrealizable case in time $O(mn \cdot m^{O(n)})$

Let $H_1, H_2, \ldots$ be a sequence of hypothesis classes for binary classification. Assume that there is a learning algorithm that implements the ERM rule in the realizable case, such that the output hypothesis of the algorithm for each class $H_n$…
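
A hedged observation on where the $m^{O(n)}$ factor plausibly comes from: if $\mathrm{VCdim}(H_n) = O(n)$ (which the stated bound suggests), then Sauer's lemma caps the number of distinct behaviours of $H_n$ on the $m$ sample points,
$$\left|\left\{ (h(x_1), \ldots, h(x_m)) : h \in H_n \right\}\right| \;\le\; \left(\frac{em}{\mathrm{VCdim}(H_n)}\right)^{\mathrm{VCdim}(H_n)} \;=\; m^{O(n)},$$
so an algorithm paying roughly $O(mn)$ per candidate behaviour, iterated over all of them, lands at the stated running time.
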