In machine learning (ML), variance refers to the portion of a model's prediction error that arises from the algorithm's over-sensitivity to the training data. Because of this over-sensitivity, the model captures the detail of the training data so closely that it becomes harder to explain (explainability suffers), yet it fails to generalize adequately to new, unseen test data. With high variance, the ML model learns the noise and random fluctuations in the training data rather than the underlying relationship between the dependent and independent variables. A high-variance model therefore performs well on the training data but poorly on the test data, i.e. it overfits rather than reaching a good fit.
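As an illustrative sketch of this behaviour (the synthetic dataset, polynomial degree, and train/test split below are assumptions, not part of the original text), a high-degree polynomial fitted to a small noisy sample typically shows the high-variance pattern described above: near-zero training error but a much larger test error.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic data: a smooth underlying pattern plus random noise.
X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)

X_train, X_test = X[::2], X[1::2]
y_train, y_test = y[::2], y[1::2]

# A degree-12 polynomial is flexible enough to chase the noise (high variance).
high_variance_model = make_pipeline(PolynomialFeatures(degree=12), LinearRegression())
high_variance_model.fit(X_train, y_train)

print("train MSE:", mean_squared_error(y_train, high_variance_model.predict(X_train)))
print("test  MSE:", mean_squared_error(y_test, high_variance_model.predict(X_test)))
```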
Variance in machine learning can be viewed either as a measure of the spread of the values in a dataset (the independent variables x) or as a measure of how much an ML model's estimates vary when it is trained on different datasets. ML models with high bias and low variance tend to underfit; ML models with low bias and high variance tend to overfit. ML models with relatively low, balanced bias and variance are closest to the best fit (also called the good fit or sweet spot). The trade-off between bias and variance is therefore reflected in the balance between underfitting and overfitting.
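One way to make the second interpretation concrete, as a rough sketch rather than a standard recipe (the models, bootstrap scheme, and sample sizes here are assumptions), is to retrain a model on many bootstrap resamples of the same data and measure how much its predictions at fixed points vary; an unconstrained decision tree usually shows a much larger spread than a simple linear model.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.3, 200)
X_eval = np.linspace(0, 1, 50).reshape(-1, 1)  # fixed points at which to compare predictions

def prediction_variance(make_model, n_resamples=50):
    """Average variance of a model's predictions across bootstrap resamples."""
    preds = []
    for _ in range(n_resamples):
        idx = rng.integers(0, len(X), len(X))      # bootstrap sample of the data
        model = make_model().fit(X[idx], y[idx])
        preds.append(model.predict(X_eval))
    return np.var(np.stack(preds), axis=0).mean()  # spread of the estimates per point

print("deep tree  variance:", prediction_variance(lambda: DecisionTreeRegressor()))
print("linear fit variance:", prediction_variance(lambda: LinearRegression()))
```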
Various ML methods aim to reduce overfitting by simplifying an ML model while keeping most of the "good" variance in the model's features. One such method is principal component analysis (PCA), an unsupervised dimensionality reduction algorithm.
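As a minimal sketch of that idea (the scikit-learn digits dataset and the 95% variance-retention threshold are assumptions, not prescriptions from the text), PCA can project the features onto a smaller number of components while retaining most of the variance:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)          # 64 pixel features per sample
X_scaled = StandardScaler().fit_transform(X)

# Keep the smallest number of components that explains 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print("original features  :", X.shape[1])
print("retained components:", X_reduced.shape[1])
print("variance explained :", pca.explained_variance_ratio_.sum())
```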