In the largest experiment on "risky choices" to date, researchers show how machine learning can be used to test and improve long-stagnant theories of human decision-making.

Understanding and predicting how people make decisions has been a longstanding goal in psychology and economics, one that has produced a proliferation of competing theories and models of decision-making. But many of these theories are difficult to distinguish from one another, and few provide distinct or novel insights into human behavior. As a result, there remains little consensus on the best decision theory or model, and little gain in predictive power. Recently, efforts to discover and evaluate new decision-making theories have been enhanced by machine learning. However, while these data-driven approaches can accelerate the discovery of new predictive models of human judgments, their results are limited by small datasets and are often uninterpretable.

To address this, Joshua Peterson and colleagues collected a large dataset of human decisions for nearly 10,000 risky choice problems. Risky choice, one of the most basic and extensively studied problems in classical decision theory, examines how a person decides between two unequal gambles: for example, a 20% chance of winning $100 versus an 80% chance of winning $50. Peterson et al. found that deep neural networks could mimic human decisions with surprising accuracy, substantially outperforming existing, human-generated risky choice models. What's more, in learning to mimic human decisions, the networks also revealed many of the psychological properties underlying established behavioral theories, allowing those theories to be evaluated and refined.

"Ultimately, the increased availability of large datasets and improvements in computing power will make machine learning an indispensable component of the decision scientist's toolbox, revitalizing (and perhaps revolutionizing) theoretical research on human choice behavior," write Sudeep Bhatia and Lisheng He in a related Perspective.
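As a rough illustration (not drawn from the study itself), the two gambles described above can be compared under expected value, one classical baseline that behavioral theories were developed to improve upon; the function name and structure here are illustrative assumptions:

```python
# Expected value of a simple one-outcome gamble: payoff * probability.
# Illustrative only: real risky-choice models (e.g. prospect theory)
# weight payoffs and probabilities nonlinearly rather than multiplying
# them directly.

def expected_value(payoff, probability):
    return payoff * probability

gamble_a = expected_value(100, 0.20)  # $100 with a 20% chance
gamble_b = expected_value(50, 0.80)   # $50 with an 80% chance

print(gamble_a)  # 20.0
print(gamble_b)  # 40.0
```

By expected value alone, the second gamble is the better bet, yet decades of behavioral research show that people do not simply maximize expected value, which is why richer models of risky choice exist at all.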