Feature Story | 27-Jan-2026

Why the future of AI depends on trust, safety, and system quality

UVA data scientist Daniel Graham argues that as intelligent systems move into the physical world, cybersecurity must be treated as a core measure of safety and reliability, not just defense

University of Virginia School of Data Science

When Daniel Graham, an associate professor in the University of Virginia School of Data Science, talks about the future of intelligent systems, he does not begin with the usual vocabulary of cybersecurity or threat mitigation. Instead, he focuses on quality assurance and on how to build digital and physical systems we can trust.

“We are moving toward a world where software does not just live in the digital space,” said Graham, who earned bachelor’s and master’s degrees in engineering at UVA. “It’s embodied in cars, robots, medical devices and public infrastructure. Once systems can act in the real world, the cost of failure becomes physical. So, the question is not only ‘Is it smart?’ but also ‘Is it safe, reliable and high quality?’”

Graham joined the School of Data Science in 2025 after teaching computer science for seven years. The move, he said, was a chance to refresh, collaborate with new colleagues and teach in smaller, more engaged classroom environments.

The intersection of security and safety

Graham’s research explores secure embedded systems and networks, particularly those that directly interact with the physical world, including medical devices, water treatment systems, autonomous vehicles and other forms of operational infrastructure.

Early in his career, Graham saw firsthand how vulnerabilities in software could translate into real-world consequences. Over time, this led him to view security not merely as a defensive activity, but as a measure of system quality and safety.

“We already know how to build incredibly powerful smart systems,” he said. “What we need now is assurance.” He emphasized that as society increasingly relies on intelligent systems to manage hospitals, transportation networks, power grids and military hardware, those systems must be dependable.

He believes the model already exists. Just as a professional engineer must sign off on the safety of a bridge, he says, future data and AI systems should require comparable review, oversight and certification.

“We have strong regulatory norms for physical infrastructure,” he said. “But the digital infrastructure that increasingly runs everything does not yet follow comparable accountability standards. That has to change.”


Associate Professor of Data Science Daniel Graham and his recent publication, "Metasploit: The Penetration Tester's Guide, Second Edition." (Photo credit: Alyssa Brown)

A public voice on responsible system evaluation

Graham also writes and teaches widely on secure systems evaluation and penetration testing. His book, “Metasploit: The Penetration Tester’s Guide (Second Edition),” released this year, introduces readers to professional methods for testing and auditing complex systems. Its reach is global, with planned translations into Mandarin, Korean, French and Russian.

“Penetration testing is the digital equivalent of financial auditing,” he said. “Just as organizations require audits to ensure the integrity of their financial systems, critical digital and embedded systems should be routinely evaluated for quality and resilience.”

By framing cybersecurity this way, Graham translates a highly technical discipline into terms the public can readily understand.

“People understand quality,” Graham said. “They understand the difference between something that is built well and something that is built carelessly. We should expect the same quality from the systems that run our world.”

Looking ahead

As data science extends further into automation, embedded intelligence and decision-making systems, Graham hopes to help shape how future practitioners view their responsibilities.

“The most important systems of the century ahead will be intelligent, networked and physical,” he said. “The people building them must think carefully about safety, reliability and impact. Quality is not optional. It is the foundation of trust.”
