Questions tagged [bias-variance-tradeoff]
For questions related to the bias-variance trade-off, an important concept in machine learning.
17 questions
8
votes
1 answer
Is there a connection between the bias term in a linear regression model and the bias that can lead to under-fitting?
Here is a linear regression model
$$y = mx + b,$$
where $b$ is known as the $y$-intercept, but is also known as the bias [1], $m$ is the slope, and $x$ is the feature vector.
As I understood, in machine learning, there is also the bias that can cause the…

Sivaram Rasathurai
- 316
- 1
- 10
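
A note on the two senses of "bias" in the question above: the intercept $b$ in $y = mx + b$ is a learnable parameter of one particular model, whereas the bias associated with under-fitting is a property of the learning procedure as a whole. A minimal sketch of the statistical definition, writing $\hat f$ for the learned model, $f$ for the true function and $\sigma^2$ for the irreducible noise (notation chosen here for illustration):
$$\operatorname{Bias}\big[\hat f(x)\big] = \mathbb{E}\big[\hat f(x)\big] - f(x), \qquad \mathbb{E}\big[\big(y - \hat f(x)\big)^2\big] = \operatorname{Bias}\big[\hat f(x)\big]^2 + \operatorname{Var}\big[\hat f(x)\big] + \sigma^2.$$
A model class with too little capacity tends to make the first term large, which is what leads to under-fitting; this is unrelated to the intercept parameter $b$.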
4
votes
1 answer
Can I compute the fitness of an agent based on a low number of runs of the game?
I'm developing an AI to play a card game with a genetic algorithm. Initially, I will evaluate it against a player that plays randomly, so there will naturally be a lot of variance in the results. I will take the mean score from X games as that…

Ryxuma
- 237
- 1
- 5
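
On the question above: the variance of a mean score shrinks as the number of games grows, so the number of runs needed depends on how noisy a single game is. Below is a minimal Python sketch of that effect; `play_one_game` and the `agent` dictionary are hypothetical stand-ins for the real card game and agent, not part of the original question.

```python
import random
import statistics

def play_one_game(agent):
    """Hypothetical stand-in for one game against a random opponent;
    replace with the real card game. Here the score is simply noisy."""
    return agent["skill"] + random.gauss(0, 10)

def estimate_fitness(agent, num_games):
    """Mean score over num_games, plus the standard error of that mean.
    The standard error shrinks roughly as 1/sqrt(num_games), which is what
    determines how many runs are needed for a stable fitness value."""
    scores = [play_one_game(agent) for _ in range(num_games)]
    mean = statistics.fmean(scores)
    std_err = statistics.stdev(scores) / num_games ** 0.5
    return mean, std_err

agent = {"skill": 5.0}   # toy agent; "skill" is its average score
for n in (5, 20, 100):
    mean, se = estimate_fitness(agent, n)
    print(f"{n:3d} games: fitness ~ {mean:6.2f} +/- {se:.2f}")
```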
4
votes
1 answer
How does Monte Carlo have high variance?
I was going through David Silver's lecture on reinforcement learning (lecture 4). At 51:22 he says that Monte Carlo (MC) methods have high variance and zero bias. I understand the zero bias part. It is because it is using the true value of value…

Bhuwan Bhatt
- 394
- 1
- 11
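
For readers comparing the two kinds of targets discussed in this question: in standard notation (rewards $R_t$, discount $\gamma$, state $S_t$, value estimate $V$), the Monte Carlo target is the full return while the TD(0) target bootstraps from the current value estimate,
$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots \qquad \text{vs.} \qquad R_{t+1} + \gamma V(S_{t+1}).$$
$G_t$ accumulates randomness from every action, transition and reward until the end of the episode, hence the high variance, but its expectation is the true value, hence zero bias; the TD target involves only one step of randomness (lower variance) but inherits any error in $V(S_{t+1})$ (bias).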
4
votes
2 answers
Why is having low variance important in offline policy evaluation of reinforcement learning?
Intuitively, I understand that having an unbiased estimate of a policy is important, because being biased just means that our estimate is far from the true value.
However, I don't understand clearly why having lower variance is important. Is…

Hunnam
- 227
- 1
- 6
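
One way to see why low variance matters as much as low bias: for any estimator $\hat v$ of a true value $v$, the mean squared error splits into both terms,
$$\mathbb{E}\big[(\hat v - v)^2\big] = \big(\mathbb{E}[\hat v] - v\big)^2 + \operatorname{Var}(\hat v),$$
so an unbiased but high-variance estimate (a common situation with importance-sampling-based off-policy evaluation) can still be far from the truth on any single evaluation; it is only correct on average over many evaluations.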
3
votes
2 answers
What makes a machine learning algorithm a low variance one or a high variance one?
Some examples of low-variance machine learning algorithms include linear regression, linear discriminant analysis, and logistic regression.
Examples of high-variance machine learning algorithms include decision trees, k-nearest neighbors, and…

Posi2
- 358
- 2
- 16
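
An illustrative way to make the low-variance/high-variance distinction concrete (a sketch on arbitrary synthetic data, not a definitive benchmark): refit each model on bootstrap resamples of the same training set and measure how much its predictions at fixed test points move around.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=200)   # noisy synthetic target
X_test = np.linspace(-3, 3, 50).reshape(-1, 1)

def prediction_variance(make_model, n_boot=200):
    """Average variance, over the test points, of predictions made by models
    refit on bootstrap resamples of the training data."""
    preds = [make_model().fit(*resample(X, y)).predict(X_test) for _ in range(n_boot)]
    return np.mean(np.var(preds, axis=0))

print("linear regression:", prediction_variance(LinearRegression))
print("decision tree:    ", prediction_variance(DecisionTreeRegressor))
```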
3
votes
1 answer
What's the difference between estimation and approximation error?
I'm unable to find online, or understand from context, the difference between the estimation error and the approximation error in the context of machine learning (and, specifically, reinforcement learning).
Could someone please explain with the help of…

stoic-santiago
- 1,121
- 5
- 18
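
For context, the usual decomposition of the excess risk, writing $R$ for the risk, $\mathcal{F}$ for the hypothesis class, $f^*$ for the Bayes-optimal predictor and $\hat f$ for the learned model, is
$$R(\hat f) - R(f^*) = \underbrace{R(\hat f) - \inf_{f \in \mathcal{F}} R(f)}_{\text{estimation error}} + \underbrace{\inf_{f \in \mathcal{F}} R(f) - R(f^*)}_{\text{approximation error}},$$
where the approximation error plays the role of the bias of the hypothesis class and the estimation error, driven by fitting to a finite sample, plays the role of the variance.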
2
votes
0 answers
Why don't ensembling, bagging and boosting help to improve the accuracy of a Naive Bayes classifier?
You might think of applying classifier combination techniques such as ensembling, bagging and boosting, but these methods would not help. Actually, “ensembling, boosting, bagging” won’t help, since their purpose is to reduce variance. Naive Bayes has…

Sivaram Rasathurai
- 316
- 1
- 10
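
A minimal sketch for checking the claim in the excerpt empirically; the dataset (`load_breast_cancer`) and hyperparameters are arbitrary choices for illustration. Because Naive Bayes is a low-variance, high-bias learner, bagging is expected to change little, but the comparison itself is cheap to run.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)

plain = GaussianNB()
bagged = BaggingClassifier(GaussianNB(), n_estimators=50, random_state=0)

# Cross-validated accuracy of the single model vs. the bagged ensemble.
print("plain NB accuracy :", cross_val_score(plain, X, y, cv=5).mean())
print("bagged NB accuracy:", cross_val_score(bagged, X, y, cv=5).mean())
```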
2
votes
1 answer
Why are large models necessary when we have a limited number of training examples?
In chapter 12.1.4 of Goodfellow et al.'s book Deep Learning, they write
These large models learn some function $f(x)$, but do so using many more parameters than are necessary for the task. Their size is necessary only due to the limited number of…

Borun Chowdhury
- 191
- 1
- 5
2
votes
1 answer
What is the bias-variance trade-off in reinforcement learning?
I am watching DeepMind's video lecture series on reinforcement learning, and in the video on model-free RL the instructor said that Monte Carlo methods have less bias than temporal-difference methods. I understood the reasoning…

Aman Savaria
- 33
- 7
2
votes
1 answer
How can I determine the bias and variance of a random forest?
On this website https://scikit-learn.org/stable/modules/learning_curve.html, the authors discuss variance and bias and give a simple example of how they work in a linear model.
How can I determine the bias and variance of a random…

jennifer ruurs
- 579
- 2
- 8
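
One rough way to answer this for a random forest, assuming synthetic data where the true function is known (on real data the noise-free target is unavailable, so only proxies such as learning curves are possible): retrain the forest on many resampled training sets and measure both the offset and the spread of its predictions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.utils import resample

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(3 * x)          # known ground-truth function (synthetic setting)

X = rng.uniform(-2, 2, size=(300, 1))
y = true_f(X[:, 0]) + rng.normal(0, 0.3, size=300)
X_test = np.linspace(-2, 2, 100).reshape(-1, 1)

preds = []
for seed in range(50):
    Xb, yb = resample(X, y, random_state=seed)   # simulate a fresh training set
    forest = RandomForestRegressor(n_estimators=100, random_state=seed)
    preds.append(forest.fit(Xb, yb).predict(X_test))

preds = np.array(preds)
bias_sq = np.mean((preds.mean(axis=0) - true_f(X_test[:, 0])) ** 2)
variance = np.mean(preds.var(axis=0))
print(f"bias^2 ~ {bias_sq:.4f}, variance ~ {variance:.4f}")
```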
2
votes
0 answers
How is the bias caused by a max pooling layer overcome?
I have constructed a CNN that uses max-pooling layers. I have found that, if I remove these layers, my network performs ideally, with every output and gradient at each layer having a variance close to 1. However, if they are…

Recessive
- 1,346
- 8
- 21
1
vote
1 answer
How does deep learning overcome overfitting?
From Berkeley CS182, SP22: https://cs182sp22.github.io/assets/lecture_slides/2022.01.26-ml-review-pt2.pdf.
Can someone help me interpret this diagram? I understand the graph on the left, but I don't understand how, in the right graph, the test risk…

9j09jf02jsd
- 19
- 1
1
vote
0 answers
Bias-variance tradeoff and learning curves for non-deep learning models
I am following a course on machine learning and am confused about how the bias-variance trade-off relates to learning curves in classification.
I have seen some conflicting information online about this.
The scikit-learn learning curve looks like the…

ML-Student-1996
- 11
- 2
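
For reference, a short sketch of the scikit-learn learning curve the question refers to, with an arbitrary classifier and dataset chosen for illustration. The usual reading is that a large, persistent gap between the training and validation curves points to high variance (overfitting), while two low, converged curves point to high bias (underfitting).

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Cross-validated training and validation scores at increasing training-set sizes.
train_sizes, train_scores, val_scores = learning_curve(
    SVC(gamma=0.001), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)

for n, tr, va in zip(train_sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"{int(n):5d} samples: train accuracy={tr:.3f}, validation accuracy={va:.3f}")
```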
1
vote
0 answers
Do the variance and bias belong to the policy or value functions?
Recently, I have read many papers that discuss variance and bias, but I am still confused by the two notions: what do the variance and the bias belong to, the policy or the value function? And if the variance or the bias is high or low, what results will we get?

GoingMyWay
- 150
- 8
0
votes
0 answers
Effectiveness of DNN training with reduced batch randomness
So here's an example setup to help explain my question. Suppose I have 80,000 total images available for a DNN training task. With a batch size of 32, that is 2,500 batches.
Now let's say I partition the dataset into two bins of 40,000 images each. Now,…