The "explainability" of machine learning (ML) systems is often framed as a technical challenge for the communities that design artificial intelligence systems. However, in a Policy Forum, Diane Coyle and Adrian Weller argue that the challenges of explainability are at least as significant for the policymakers who use ML systems to inform decisions. According to Coyle and Weller, addressing this challenge raises important questions about the broader accountability of organizations that use ML in their decision-making and may expose implicit political biases that would otherwise remain hidden.

In the context of ML, explainability refers to the extent to which a system's complex internal mathematical mechanics can be explained in human terms; in other words, the rationale by which a model derives its conclusion from the information it was given. Although powerful and increasingly used across a wide spectrum of applications, most current ML systems are black boxes: little is known about how their complex algorithms operate, even by those who develop them, let alone by the public. There is thus a growing demand for explainability in ML systems, particularly in the public sector, where undetected bias and inaccuracies in data are likely to lead to political decisions with significant impacts on those affected.

Coyle and Weller suggest that the increasing use of ML will force policymakers and organizations to be more specific and explicit about their objectives and, by extension, their values and political choices. According to the authors, progress in explainable and transparent ML could help reveal the conflicting aims and implicit trade-offs in policy decisions, just as it has helped reveal bias in existing social and economic systems.
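The idea of explaining a model's rationale in human terms can be made concrete with a minimal sketch. For a simple linear model, a prediction decomposes exactly into per-feature contributions (weight times value), so each feature's push on the outcome can be stated plainly. The model, feature names, and numbers below are hypothetical, chosen only to illustrate the concept; real public-sector ML systems are far more complex, which is precisely why their decisions resist this kind of decomposition.

```python
# Minimal sketch of one explainability idea: a linear model's prediction
# decomposes exactly into additive per-feature contributions, so its
# "rationale" can be read off directly. All names and numbers here are
# hypothetical, for illustration only.

def explain_linear_prediction(weights, bias, features):
    """Return the prediction and each feature's additive contribution."""
    contributions = {name: weights[name] * features[name] for name in weights}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical model scoring applications (illustrative values).
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
bias = 1.0
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, parts = explain_linear_prediction(weights, bias, applicant)
# Each entry in `parts` says how much that feature pushed the score up
# or down: income +2.0, debt -1.6, years_employed +1.5, giving 2.9.
```

A black-box system is one where no such human-readable decomposition is available: the mapping from inputs to outputs exists, but no one, including its developers, can state why a particular input produced a particular decision.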