Mélanie Terrasse, Moti Gorin, and Dominic Sisti
Given the profound influence of social media and emerging evidence of its effects on human behavior and health, bioethicists have an important role to play in developing professional standards of conduct for health professionals who use social media and in the design of online systems themselves. The authors examine several ethical issues: the impact of social networking sites on the doctor-patient relationship, the development of e-health platforms to deliver care, the use of online data and algorithms to inform health research, and the broader public health consequences of widespread social media use. The authors also make recommendations for addressing bias and other ethical challenges. Mélanie Terrasse is a PhD candidate in sociology and social policy at Princeton University, Moti Gorin is an assistant professor of philosophy at Colorado State University, and Dominic Sisti is an assistant professor of medical ethics and health policy at the University of Pennsylvania.
Other Voices: Several articles respond to Terrasse, Gorin, and Sisti. In "Welcoming the Intel-ethicist," John Banja argues that two concerns may be exaggerated: that digital platforms will diminish the therapeutic value of medicine and that artificial intelligence algorithms will increase errors and unfair decision-making. Patients are already adapting to AI systems that serve particular medical uses, such as screening for diabetic retinopathy, and health care providers, just like AI tools, are prone to making errors and acting on biases when diagnosing conditions and recommending treatments. With ethical oversight, AI systems can learn from their mistakes, too, Banja writes.
In "Deep Ethical Learning: Taking the Interplay of Human and Artificial Intelligence Seriously," Anita Ho writes that while the use of AI technologies can pose ethical problems, such as perpetuating human biases, it would be irresponsible not to employ them where they can improve care, such as using electronic monitoring systems to track the long-term care of seniors. AI tools should be part of a larger effort to improve health care quality, with administrators and developers monitoring and addressing the ethical problems that can arise from their use.
In "Ethical Use of Social Media Data: Beyond the Clinical Context," Catherine M. Hammack argues that the use of social media and other digital tools in research poses new and distinct challenges, in part because the law offers less protection of individual privacy in research than in clinical care. One relevant risk is that privacy settings on digital platforms may not prevent companies from sharing data with third parties or from using it for marketing, product development, and other types of research.
Alex John London
While many AI systems can provide highly accurate diagnoses and other predictions critical to medical care, the reasoning by which they arrive at their findings can be inscrutable, leading some commentators to question whether health care practitioners should trust these systems. But knowing how a machine arrived at its decision is less relevant than its ability to produce accurate results, London writes. Opaque clinical judgment and uncertainty about how decisions are made are commonplace in many non-AI aspects of health care, and it would be misguided to devalue an AI system simply because its decision-making process is inaccessible. London is the Clara L. West professor of ethics and philosophy at Carnegie Mellon University, where he directs the Center for Ethics and Policy.
Also in this issue:
Susan Gilbert, director of communications
The Hastings Center