I'm reading about machine learning and trying to understand what bias and variance mean. I've read these articles (1, 2, 3), but still have a few questions:
Bias:
model is biased in that it assumes that the data will behave in a certain fashion (linear, quadratic, etc.) even though that assumption may not be true
What does "assume" mean here? We select the model we want to use. If we select a linear model, it will simply try to fit the best line it can, as in the sketch below.
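To make my question concrete, here is a small sketch of how I currently picture that "assumption" (the quadratic ground truth and the specific numbers are just my own made-up illustration, not taken from the articles):

```python
import numpy as np

# My made-up illustration: the true relationship is quadratic, but a
# degree-1 (linear) fit can only ever produce a straight line, no matter
# how much data it sees.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y_true = x ** 2                            # true (quadratic) signal
y = y_true + rng.normal(0, 0.5, x.shape)   # noisy observations

linear_coefs = np.polyfit(x, y, deg=1)     # the model "assumes" a straight line
linear_pred = np.polyval(linear_coefs, x)

# Even the best-fitting line misses the curvature; if that systematic
# error is what "bias" refers to, then I think I follow.
print("mean squared error of the best line:", np.mean((linear_pred - y_true) ** 2))
```

Is that the right way to read it, i.e. the bias is the error the model is stuck with because of the functional form we chose for it?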
Variance:
variance measures how inconsistent are the predictions from one another over different training sets
Why should the predictions be consistent if we use different training sets? If we train on data about cats, the model will give one set of predictions; if we train on data about dogs, it will give different predictions. Or do they mean that as we add more observations to the training set the predictions should improve, rather than that the model now gives different predictions than before? (See the sketch after this paragraph for what I suspect they mean.)
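Here is a hypothetical sketch of my best guess at what "different training sets" means: each set is a fresh sample from the same underlying relationship (not cats vs. dogs), and we look at how much the fitted model's prediction at one fixed point moves from sample to sample. Again, the quadratic ground truth, the degree-9 fit, and all the numbers are just my own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_training_set(n=30):
    # Every training set is drawn from the SAME true relationship,
    # only the random noise and the sampled x values differ.
    x = rng.uniform(-3, 3, n)
    y = x ** 2 + rng.normal(0, 0.5, n)
    return x, y

predictions_at_zero = []
for _ in range(100):                     # 100 different training sets
    x, y = sample_training_set()
    coefs = np.polyfit(x, y, deg=9)      # a flexible, high-degree model
    predictions_at_zero.append(np.polyval(coefs, 0.0))

# The spread of the same model's prediction at x=0 across training sets
# is, I think, what the articles call "variance".
print("variance of prediction at x=0:", np.var(predictions_at_zero))
```

Is the variance they describe this spread across re-sampled training sets from the same distribution, rather than the difference between training on entirely different data (cats vs. dogs)?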