AUC, or Area Under the Curve, is a metric for evaluating binary classification models; in this context it almost always refers to the area under the ROC (Receiver Operating Characteristic) curve. The ROC curve is obtained by plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) as the classification threshold is swept across all possible values, and the AUC is the area under that curve. Rather than measuring accuracy at a single threshold, AUC measures how well the model's predicted scores separate the two classes across all thresholds: it equals the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one. The higher the AUC, the better the model distinguishes the classes. A perfect model has an AUC of 1.0, meaning some threshold separates the two classes without error; an AUC of 0.5 corresponds to a random classifier that cannot distinguish the classes at all, and an AUC below 0.5 indicates a model whose predictions are systematically inverted.
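The probabilistic interpretation above leads directly to a way of computing AUC without drawing the curve at all: count, over every positive/negative pair, how often the positive is scored higher (ties counting as half). This is the Mann-Whitney U formulation of AUC. A minimal sketch in plain Python (the labels and scores are made-up illustration data; in practice a library routine such as scikit-learn's `roc_auc_score` would be used):

```python
def roc_auc(labels, scores):
    """AUC via the rank-based (Mann-Whitney U) formulation:
    the probability that a randomly chosen positive example
    is scored higher than a randomly chosen negative one,
    with ties counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Compare every positive score against every negative score.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = positive class, 0 = negative class.
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
print(roc_auc(labels, scores))  # 8 of 9 pairs ranked correctly -> ~0.889
```

The O(P x N) pairwise loop is fine for illustration; production implementations sort the scores once and use ranks, which is equivalent but much faster on large datasets.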