One of WASP-HS’s goals is to be more than an ivory tower, and the most recent community reference meeting (on Healthcare, March 25) was a beautiful example of academic researchers situated firmly on the ground, embedded in the public dialogue about artificial intelligence in health care.
Erik Campano, image above, is a PhD Student at Umeå University, affiliated with the WASP-HS program.
The meeting opened with an introduction by Maja Fjaestad, a senior Swedish public official, who made clear that computing technology will play an ever larger role in national health and social policy. The roundtable discussions that followed brought together representatives from government, companies large and small, non-profits, and of course academia to tackle, together, some of the most pressing problems surrounding that technology.
I was the secretary for the roundtable on the “citizen’s perspective” on AI for illness prevention. This gave me the opportunity to observe closely how stakeholders from each of these spheres can cooperate to share, and ultimately implement, knowledge. Much of the conversation revolved around public health care data: how to make it available, acquire it, and use it while upholding each citizen’s right to privacy. We compared how public health care data is compiled in Sweden with other parts of the world, not just places like China, where privacy protection is less codified, but also our Scandinavian neighbors.
A regional health care representative observed that Finland is moving “at the speed of light” compared to Sweden in using public data for illness prevention. Sweden, he said, is in “reactive mode”, whereas Finland and Denmark, at least, are “pro-active” and “predictive”.
What was encouraging was that other roundtable participants (professors, a startup founder, a CEO) were able to explain why public data in Sweden may (may!) be less practically useful than in other Nordic countries, and, more importantly, how that might improve. For example, clinical centers can cooperate. Computer systems can be built that better capture the narratives of patients. Mental health practitioners can coordinate their notation better. The roundtable participants showed real creative thinking about how to improve the publication and usability of data.
And then someone asked: “Yes, but from a citizen’s point of view: can computers keep a secret? … In the future, AI will be like electricity. It will be everywhere, solving problems for you. In that future, who are we? What is the new definition of privacy?” I think it’s fair to say these questions grabbed the attention of many in our Zoom chat. When government officials or health care companies decide what to do with data, they have to keep this long-term, philosophical perspective in mind. There is no easy answer to these questions, and they are easy to ignore when one communicates only with one’s immediate colleagues, within one’s own company.
The real benefit of WASP-HS’s efforts to facilitate dialogue across sectors is that it is in multi-stakeholder discussions that some of the most profound, and otherwise often unspoken, questions arise. Everyone left that conversation with multiple perspectives on the “citizen’s view” of AI in preventative health care, perspectives that at least some of us (myself included) had not considered before.
I’ll be bringing those perspectives back to my university research. Hopefully they will generate fruitful new questions, and empirical evidence, that strengthen our understanding of AI in health care. And I’m very much looking forward to our next community reference meeting, on “Life in the Digital World”, where I am sure more creative thinking about AI in society will take place. In this way, dialogue after dialogue, everyone’s knowledge grows like a snowball rolling down the mountain, ever farther from the ivory tower.