Disentangling Conscience Protections
Nadia N. Sawicki
Earlier this year, the U.S. Department of Health and Human Services announced its intention to strengthen the enforcement of legal protections for health care providers' conscience rights. While questions about which providers and what actions should be protected have long been debated, "there is one remaining question that has received very little attention--the question of how providers are protected, and from what consequences," writes Nadia N. Sawicki, a professor at Loyola University Chicago and the academic director of Loyola's Beazley Institute for Health Law and Policy. Sawicki proposes a taxonomy for categorizing conscience protections based on whether they protect against adverse actions by state or private actors and "whether they protect providers against adverse action based on their beliefs, their conduct, or the harmful consequences of their conduct." The most defensible conscience protections, Sawicki writes, shield health care providers from adverse actions by public entities and apply only when providers' conduct does not harm third parties such as patients.
In response, Mark R. Wicclair argues that the more relevant question is whether a conscience protection may cause harm to third parties such as patients, not whether it shields health care providers against adverse actions by state or private actors. Wicclair is a professor of philosophy at West Virginia University and an adjunct professor of medicine at the Center for Bioethics and Health Law at the University of Pittsburgh.
Artificial intelligence and machine learning applications have the potential to revolutionize health care delivery, including by correcting human errors and improving diagnostic accuracy. But some applications of these systems can lead to moral pitfalls, such as exacerbating human biases and weakening patient confidentiality. It is "vital to involve bioethicists in the design of these technologies," writes Junaid Nabi, a physician who conducts research in surgical health services at Brigham and Women's Hospital and Harvard Medical School. "As AI and machine learning advance, bioethical frameworks need to be tailored to address the problems that these evolving systems might pose, and the development of these automated systems also needs to be tailored to incorporate bioethical principles."
Also in this issue:
Policy & Politics: Crossing U.S. Borders While Pregnant: An Increasingly Complex Reality
Perspective: Amyloid on the Brain, Alzheimer's on the Mind
Contact Susan Gilbert, director of communications
The Hastings Center
Hastings Center Report