The Empathic Chatbot in Medicine

Published: June 20, 2023

Kjell Asplund, Professor Emeritus in Medicine and member of the WASP-HS Board, shares his thoughts and reflections on empathic AI chatbots in medicine and on how WASP-HS contributes new knowledge and perspectives on the ethics of AI use in health care.

I was asked to talk about The Digital Patient at a national meeting for specialists in cardiology, pulmonary medicine and general practice. Drawing on ancient philosophical roots, health care professionals have long spoken of the two facets of a human being: body and soul. But a new dichotomy has emerged – the doctor may pay some attention to the patient of flesh and blood but, increasingly, that attention is directed towards the patient as he or she appears on the screen, to the extent that much of the physician’s conversation becomes a mumbled dialogue with the digital patient on the screen. The digital patient competes for time and attention with the patient in the hospital bed or sitting in the visitor’s chair at the primary care center.

One of my messages was that the health care professions had better adapt, decisively, to the rapid shift in technologies. There are already plenty of examples where machine learning is superior to physicians, for instance in interpreting ECGs, ultrasound and brain images, and in predicting the risk of future adverse events and death. And the machines do it with much less variation between readings. In one study where physicians managed to match the machines in quality, the machine learning system was 186 times faster.

Many of the health care tasks that, until now, have been considered intellectual and important components of the professional identity (and pride) will soon be performed by machines. At the lecture, I concluded that health care staff will have to meet these challenges by developing the skills that are unique to humans: the eloquent encounter between two human beings, using empathy, compassion and harmony, even mercy, as professional tools; being open and honest; and applying basic ethical values such as equity and respect for the patient’s integrity and autonomy.

My lecture was in February this year. In late April, an article in a prestigious US medical journal (JAMA Internal Medicine, 28 April 2023) shattered my message. It reported the results of a competition between physicians and ChatGPT. The starting point was nearly 200 questions posted on AskDocs, a popular public social media forum, where certified physicians had responded to questions from patients and next of kin. ChatGPT was then asked to respond to the same questions. An independent panel of licensed physicians, not knowing how individual answers had been generated, evaluated the responses. What was their quality from a strictly medical point of view? How empathic were the answers?

For us physicians, the results were challenging, to say the least. With ChatGPT, the chance of getting an answer of high or very high quality as to medical facts was 3.6 times greater than when a physician responded. Even more challenging, ChatGPT provided responses characterized by high or very high empathy 9.8 times as often as the physicians did.

In health care, there is widespread skepticism toward IT in general, simply because the grandiose promises made when the new technologies were introduced were not fulfilled.

More time for patients? No. Better overview? On the contrary. Relief of administrative burdens and better working conditions? No, many blame digitalization for the ongoing burnout epidemic among health care staff. Lots of money saved? Hardly. What most health professionals still agree on is that medical quality and patient safety have improved, and that the introduction of IT may have contributed to this, to the benefit of patients.

The mixed experience with health care digitalization so far is one reason why health care employees as well as patients have met AI with ambivalence. Will the renewed promises be fulfilled? Many see the great potential. We have never had such a powerful tool to master incoherent electronic medical records and documentation overload. We have never encountered such opportunities to automate routine procedures and improve decision support. On the other hand, the list of ethical dilemmas that come with AI is long: risks of discrimination and inequity, questions of accountability and threats to basic human values, to name a few.

Several ongoing WASP-HS research projects will contribute new knowledge and new perspectives on the ethics of AI use in health care. At Chalmers in Gothenburg, Francis Lee is leading a project on the use of Big Data in biomedicine. Ana Nordberg, a law researcher in Lund, is analyzing how AI users’ preferences and values may impact the right to health. In Umeå, Pedro Sanchez is the PI of a project on human-centered AI for health, autonomy and wellbeing. Helena Lindgren, also in Umeå, is exploring how socially intelligent systems are employed to manage stress (including IT-induced stress, I presume) and improve emotional wellbeing. Ericka Johnson’s project, led from Linköping, is about the ethics and social consequences of using AI in care robots, with a focus on fundamental values such as trust, empathy and accountability.

Together, these WASP-HS projects comprise an exciting and highly relevant multidisciplinary mix of scientific approaches to the ethical implications of using AI in health promotion and health care. At a point in AI development when a chatbot has been reported to be more empathic than the average doctor, this is timely, to put it modestly.

Author

Kjell Asplund
Professor Emeritus in Medicine and member of the WASP-HS Board
