News Release

Experts call for science- and evidence-based AI policy

Summary author: Walter Beckwith

Peer-Reviewed Publication

American Association for the Advancement of Science (AAAS)

In a Policy Forum, Rishi Bommasani et al. argue that successful artificial intelligence (AI) policy must be grounded in solid evidence and scientific understanding rather than hype or political expediency. “AI policymaking should place a premium on evidence: Scientific understanding and systematic analysis should inform policy, and policy should accelerate evidence generation,” write Bommasani et al. Although sound AI policy hinges on clearly defining and effectively using credible evidence, the authors note that what counts as valid or credible evidence varies across AI policy domains, creating uncertainty in governance. This uncertainty poses a policy dilemma: act too soon and risk overregulation; wait too long and risk real harm. Bommasani et al. therefore call for mechanisms that allow AI policy to evolve alongside emerging scientific understanding, ensuring that governance remains both effective and grounded in the current state of the technology.

To achieve evidence-based AI policy, the authors suggest that governments incentivize rigorous pre-release model evaluations, increase public transparency around safety practices, and establish systems to monitor post-deployment harms. They also argue that protecting independent researchers through safe harbor provisions is essential to expanding the evidence base, and that policies should prioritize interventions supported by strong evidence across the broader sociotechnical landscape. Finally, fostering expert consensus through credible, inclusive scientific bodies will help guide responsible AI governance amid uncertainty and disagreement, say the authors.

