Bias and variance, underfitting and overfitting
Traditional machine learning theory gives us the curve above [1]: total error decomposes into bias and variance, and test error follows a U-shape as model complexity grows. Underfitting (the model is too simple or not trained enough to capture the data) and overfitting (the model is over-trained and fits noise in the training set) are fairly easy to understand from this picture.
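To make the decomposition concrete, here is a minimal sketch (my own toy setup, not taken from [1]): it refits polynomials of several degrees on many resampled training sets drawn from a known function plus noise, then estimates bias² and variance of the predictions. Low degrees show high bias, high degrees show high variance.

```python
# Toy bias-variance decomposition: refit a polynomial model on many
# resampled training sets and measure bias^2 and variance empirically.
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

x_test = np.linspace(0, 1, 50)
y_true = true_fn(x_test)

def bias2_and_variance(degree, n_datasets=200, n_points=30, noise=0.3):
    preds = np.empty((n_datasets, x_test.size))
    for i in range(n_datasets):
        x = rng.uniform(0, 1, n_points)
        y = true_fn(x) + rng.normal(0, noise, n_points)
        coeffs = np.polyfit(x, y, degree)        # least-squares polynomial fit
        preds[i] = np.polyval(coeffs, x_test)
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - y_true) ** 2)   # systematic error of the average model
    variance = np.mean(preds.var(axis=0))        # sensitivity to the training sample
    return bias2, variance

for degree in (1, 3, 9):
    b2, var = bias2_and_variance(degree)
    print(f"degree={degree}: bias^2={b2:.3f}, variance={var:.3f}, sum={b2 + var:.3f}")
```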
Personally, I find underfitting and overfitting much easier to understand than the bias and variance concepts. Also, in deep learning practice we may never reach the classic overfitting regime if we apply regularization tricks such as dropout and batch normalization. Instead, the curves may show bias steadily decreasing while variance grows only slightly, so the total error keeps shrinking as training goes on.
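As a quick illustration of those tricks, here is a hedged PyTorch sketch (the layer sizes and dropout rate are arbitrary assumptions, not from the text): dropout and batch normalization slotted into a small feed-forward network.

```python
# Minimal sketch: dropout and batch normalization as regularizers in a
# small feed-forward network. Sizes and rates are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),   # normalizes activations per mini-batch
    nn.ReLU(),
    nn.Dropout(p=0.5),    # randomly zeroes activations during training
    nn.Linear(64, 1),
)

model.train()             # dropout active, batch norm uses batch statistics
x = torch.randn(32, 20)
print(model(x).shape)     # torch.Size([32, 1])

model.eval()              # dropout disabled, batch norm uses running statistics
```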
Reference
[1] https://towardsdatascience.com/understanding-the-bias-variance-tradeoff-165e6942b229