A nationally deployed healthcare algorithm - one of the largest commercial tools used by health insurers to inform care decisions for millions of people each year - shows significant racial bias in its predictions of the health risks of black patients, researchers report. The widely used tool underestimates the health needs of black patients: at a given risk score, the algorithm rates black patients as healthier than equally sick white patients, thus reducing the number of black patients identified as requiring extra care. The results demonstrate that this bias arises because what the algorithm predicts is not health needs but health costs.

"...[The] study contributes greatly to a more socially conscious approach to technology development as they demonstrate how a seemingly benign choice of label (i.e., health cost), initiates a process with potentially life-threatening results," writes Ruha Benjamin in a related Perspective.

Healthcare institutions increasingly rely on algorithmic predictions to identify health risks and allocate resources to the patients most in need. While the potential for bias in these automated systems is a recognized concern, independent evaluation of bias in largely proprietary commercial healthcare algorithms has been limited. Here, Ziad Obermeyer and colleagues present one of the first such evaluations, examining a risk-prediction algorithm widely used throughout the U.S. healthcare system. With access to the tool's underlying data and functions, Obermeyer et al. were able to observe the algorithm's inner workings - including how the racial disparities arise: the algorithm uses health costs as a proxy for health needs. The authors argue that healthcare costs are a racially biased metric; black patients incur lower costs than white patients at the same level of need, they say, because of reduced access to care as well as systemic racism. Obermeyer et al. show that reformulating the algorithm around a different proxy led to an 84% reduction in racial bias. In light of the findings, the authors are working with the algorithm's developer to reduce the bias.
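The proxy-label mechanism the study describes can be illustrated with a toy simulation. This is a hypothetical sketch, not the authors' model or data: the group labels, illness scores, and the "access" multiplier are invented purely to show how a cost label diverges from a need label. If two groups are equally sick but one incurs lower costs at the same level of need, ranking patients by cost flags far fewer members of that group than ranking by need directly.

```python
import random

random.seed(0)

# Hypothetical population (illustrative numbers only): both groups draw
# illness from the same distribution, but one group's observed spending
# is depressed by barriers to access -- the mechanism the study points to.
patients = []
for group in ("black", "white"):
    for _ in range(10_000):
        illness = random.gauss(50, 10)                 # true health need
        access = 0.7 if group == "black" else 1.0      # assumed access gap
        cost = illness * access + random.gauss(0, 2)   # observed spending
        patients.append((group, illness, cost))

def flag_top(pats, key, frac=0.1):
    """Flag the top `frac` of patients ranked by the given proxy score."""
    ranked = sorted(pats, key=key, reverse=True)
    return ranked[: int(len(ranked) * frac)]

# Label 1: health cost (the biased proxy).
by_cost = flag_top(patients, key=lambda p: p[2])
# Label 2: a direct measure of health need (the reformulated proxy).
by_need = flag_top(patients, key=lambda p: p[1])

def black_share(flagged):
    return sum(p[0] == "black" for p in flagged) / len(flagged)

print(f"black share flagged by cost: {black_share(by_cost):.2f}")
print(f"black share flagged by need: {black_share(by_need):.2f}")
```

Under these assumptions the cost-ranked list is dominated by the higher-spending group, while the need-ranked list flags both groups roughly equally, mirroring in miniature why swapping the label can sharply reduce the measured bias.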