The inductive bias is the prior knowledge that you incorporate into the learning process, which biases the learning algorithm to choose from a specific set of functions [1].
For example, if you choose the hypothesis class
$$\mathcal{H}_\text{lines} = \{f(x) = ax + b \mid a, b \in \mathbb{R} \}$$ rather than $$\mathcal{H}_\text{parabolas} = \{f(x) = ax^2 + b \mid a, b \in \mathbb{R} \},$$ then you're assuming (implicitly or explicitly, depending on whether you're aware of these concepts) that your target function (the function that you want to learn) lies in the set $\mathcal{H}_\text{lines}$. If that's the case, then your learning algorithm is more likely to find it.
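To make this concrete, here is a minimal sketch (with made-up data; the target below happens to be a parabola) of what choosing a hypothesis class means for a simple least-squares learner: the algorithm can only return the best function *within* the class you gave it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 30)
y = 0.5 * x**2 + 1.0 + rng.normal(scale=0.1, size=x.shape)  # hypothetical target: a parabola

# H_lines: f(x) = a*x + b, so the design matrix has columns [x, 1]
A_lines = np.column_stack([x, np.ones_like(x)])
(a_line, b_line), *_ = np.linalg.lstsq(A_lines, y, rcond=None)

# H_parabolas: f(x) = a*x^2 + b, so the design matrix has columns [x^2, 1]
A_parab = np.column_stack([x**2, np.ones_like(x)])
(a_parab, b_parab), *_ = np.linalg.lstsq(A_parab, y, rcond=None)

print("best line in H_lines:         a=%.3f, b=%.3f" % (a_line, b_line))
print("best parabola in H_parabolas: a=%.3f, b=%.3f" % (a_parab, b_parab))
```

If the target really lies in (or near) the class you picked, the best member of that class will be close to it; if not, even the best member can be far off.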
In most cases, you do not know the exact nature of your target function, so you might think it a good idea to choose the largest possible set of functions. However, this would make learning infeasible (you would have too many functions to choose from) and could lead to over-fitting, i.e. you choose a function that performs well on your training data but is actually quite different from your target function, so it performs badly on unseen data (from your target function). This can happen because the training data may not be representative of your target function (you usually do not know this a priori, so you cannot really or completely solve this issue).
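For example (a sketch with synthetic data; the exact numbers will vary with the seed), a very large hypothesis class, such as high-degree polynomials, can drive the training error close to zero while the error on unseen data from the same target gets worse:

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.sin                                   # the (unknown) target function
x_train = rng.uniform(-3, 3, size=10)
y_train = f(x_train) + rng.normal(scale=0.1, size=10)
x_test = np.linspace(-3, 3, 200)             # unseen data from the same target

for degree in (1, 3, 9):                     # larger degree = larger hypothesis class
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.4f}, test MSE = {test_mse:.4f}")
```

With only 10 noisy training points, the degree-9 polynomial typically fits them almost perfectly yet generalizes much worse than the more constrained classes.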
So, the definition above does not imply that the inductive bias cannot lead to over-fitting or, equivalently, that it cannot negatively affect the generalization of your chosen function. Of course, if you choose to use a CNN (rather than an MLP) because you are dealing with images, then you will probably get better performance. However, if you mistakenly assume that your target function is linear and you choose $\mathcal{H}_\text{lines}$ as the set from which your learning algorithm can pick functions, then it will choose a bad function.
Section 2.3 of the book Understanding Machine Learning: From Theory to Algorithms and section 1.4.4 of the book Machine Learning: A Probabilistic Perspective (by Murphy) provide more details about the inductive bias (the first from a learning-theory perspective, the second from a probabilistic one).
You may also be interested in this answer that I wrote a while ago about the difference between approximation and estimation errors (although, if you know nothing about learning theory, it may not be very understandable). In any case, the idea is that the approximation error (AE) can be seen as a synonym for the inductive bias, because the AE is the error due to the choice of the hypothesis class.
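Roughly, if $h_S$ is the hypothesis that your learning algorithm picks from $\mathcal{H}$ given the training set $S$, and $L_{\mathcal{D}}$ denotes the true (expected) loss, then the error of $h_S$ decomposes as

$$L_{\mathcal{D}}(h_S) = \underbrace{\min_{h \in \mathcal{H}} L_{\mathcal{D}}(h)}_{\text{approximation error}} + \underbrace{\left(L_{\mathcal{D}}(h_S) - \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h)\right)}_{\text{estimation error}},$$

so the approximation error is determined by the choice of $\mathcal{H}$ (the inductive bias), while the estimation error is due to learning from a finite, possibly unrepresentative, training sample.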
(As a side note, I think it is called "inductive bias" because this bias is what can make inductive inference feasible and successful [2], and maybe to differentiate it from other biases, e.g. the bias term in linear regression, although that term can also be an inductive bias.)