How AI support can go wrong in safety-critical settings
Peer-Reviewed Publication
When it comes to adopting artificial intelligence in high-stakes settings like hospitals and airplanes, good AI performance and brief worker training on the technology are not sufficient to ensure systems will run smoothly and that patients and passengers will be safe, a new study suggests. Instead, algorithms and the people who use them in the most safety-critical organizations must be evaluated together to get an accurate view of AI’s effects on human decision making, researchers say. The team also contends these evaluations should assess how people respond to good, mediocre and poor technology performance to put the AI-human interaction to a meaningful test – and to expose the level of risk linked to mistakes.