Receiver operating characteristic (ROC) curves, precision-recall (PR) curves, calibration curves, and decision-curve analysis (DCA) for the training set and validation set.
Caption
(A) ROC curve of the training set. (B) ROC curve of the validation set. (C) PR curve of the training set. (D) PR curve of the validation set. (E) Calibration curve of the training set. (F) Calibration curve of the validation set. (G) DCA of the training set. (H) DCA of the validation set. AUROC and AUPRC are expressed as point estimates with 95% CIs. An AUROC > 0.8 is considered to indicate good discriminatory accuracy for a clinical prediction model. An AUPRC > 0.7 indicates that the model balances precision and recall well and can reliably identify and predict the target events, supporting its use in clinical prediction. Calibration curves evaluate the agreement between the model's predicted probabilities and the observed event rates. For an ideal model, the pairs of observed and predicted probabilities lie on the 45° line, and the P-value of the Hosmer–Lemeshow test is greater than 0.05. DCA evaluates the clinical net benefit of the prediction model across decision thresholds, accounting for true positives, false positives, true negatives, and false negatives, as well as the potential benefits and harms of each decision. Net benefit represents the net proportion of true postoperative infection cases that would be treated preventively. Abbreviations: CI, confidence interval; AUROC, area under the receiver operating characteristic curve; AUPRC, area under the precision-recall curve.
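For readers who want to reproduce the kind of estimates shown in panels A–D, the sketch below illustrates one common way to compute AUROC and AUPRC with percentile-bootstrap 95% CIs. This is a minimal Python/scikit-learn illustration, not the authors' code; y_true and y_prob are hypothetical placeholders for the observed labels and the model's predicted probabilities.

# Illustrative sketch only: AUROC/AUPRC point estimates with bootstrap 95% CIs.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

def bootstrap_ci(y_true, y_prob, metric, n_boot=2000, alpha=0.05):
    """Point estimate plus percentile-bootstrap CI for a ranking metric."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    point = metric(y_true, y_prob)
    stats, n = [], len(y_true)
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)              # resample with replacement
        if y_true[idx].min() == y_true[idx].max():
            continue                             # need both classes present
        stats.append(metric(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, lo, hi

# Toy data standing in for a fitted model's predictions:
y_true = rng.integers(0, 2, 200)
y_prob = np.clip(y_true * 0.3 + rng.random(200) * 0.7, 0, 1)
print("AUROC %.3f (95%% CI %.3f-%.3f)" % bootstrap_ci(y_true, y_prob, roc_auc_score))
print("AUPRC %.3f (95%% CI %.3f-%.3f)" % bootstrap_ci(y_true, y_prob, average_precision_score))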
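The Hosmer–Lemeshow test cited for panels E–F groups subjects into deciles of predicted risk and compares observed with expected event counts using a chi-square statistic on g − 2 degrees of freedom; P > 0.05 suggests no significant miscalibration. A minimal sketch, assuming the standard deciles-of-risk form (again not the authors' implementation):

# Illustrative sketch only: Hosmer-Lemeshow goodness-of-fit test.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, g=10):
    order = np.argsort(y_prob)
    y_true = np.asarray(y_true)[order]
    y_prob = np.asarray(y_prob)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y_true)), g):  # deciles of risk
        obs = y_true[idx].sum()          # observed events in the group
        exp = y_prob[idx].sum()          # expected events in the group
        n = len(idx)
        # Assumes 0 < exp < n in every group (no degenerate risk deciles).
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return stat, chi2.sf(stat, df=g - 2)  # statistic and P-value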
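For panels G–H, the net benefit at a threshold probability pt is conventionally computed as TP/N − (FP/N) × pt/(1 − pt), and the model's curve is compared against treat-all and treat-none strategies. A minimal sketch of that calculation (illustrative, not the authors' code):

# Illustrative sketch only: decision-curve net benefit at threshold pt.
import numpy as np

def net_benefit(y_true, y_prob, pt):
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    n = len(y_true)
    pred = y_prob >= pt                          # treat if predicted risk >= pt
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * pt / (1 - pt)

def net_benefit_treat_all(y_true, pt):
    prev = np.mean(y_true)                       # event prevalence
    return prev - (1 - prev) * pt / (1 - pt)     # treat-none is always 0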
Credit
hLife
Usage Restrictions
News organizations may use or redistribute this image, with proper attribution, as part of news coverage of this paper only.
License
Original content