WASP-HS is summarizing its initial five years and making plans for the coming five. What could then be a better start than to ask representatives of some of the leading human-centred AI centres and industry research units what they view as the most inspiring futures of AI in the service of humanity and society, and the major challenges in moving towards a desirable AI future?
Over a few days this past week, members of the WASP-HS board and management team had the opportunity to visit the Wallenberg Research Link[1] and the Institute for Human-Centered AI (HAI)[2] at Stanford, the Berkeley AI Research Institute[3], the World Economic Forum[4], Ericsson, Meta and Apple to discuss these issues together. The following are my takeaways as a member of the management team, and as a researcher on, and educator in, human-centred AI.
I am Helena Lindgren, Professor of Computing Science at Umeå University. I coordinate a WASP-HS research project on digital companions as social actors and their relevance for managing exhaustion syndromes, am a member of a project on AI, Democracy and self-determination, and lead the development of a digital coach for promoting health in collaboration with Region Västerbotten (STAR-C) and the Västerbotten Intervention Programme (VIP, initiated in 1986).
The importance of the research environment in developing excellent researchers, research and education
Growing and nurturing an open environment that promotes certain values is key at Stanford, including openness to multidisciplinary work while maintaining a strong basis in the core disciplines. Stanford values and promotes the absence of barriers between education and research, visionary leadership, bottom-up initiatives, and outstanding research and education environments with labs and meeting places both on and off work, with access to culture and nature. The growing national research environment that WASP-HS is building with similar goals and values has great potential both to contribute to and to support the international research environments striving in the same direction towards human-centred AI. Through the visits, international collaborations across these environments have been extended, which will benefit the WASP-HS community. Further strengthening the research environment within Sweden, within and across programmes (WASP, WASP-ED, DDLS), should also be a priority in the coming years. WASP-HS is unique, and is still at an early stage in which the core disciplines are supported in initiating and strengthening their participation in developing AI that is useful for addressing pressing societal challenges.
The challenges for society and humanity in which AI can play a role are many. Looking only at Swedish society's immediate challenges, AI's role may be more or less constructive and sustainable in the longer term. The risk of excluding citizens from services, meaningful occupations and decent living conditions through the way software, workflows and divisions of labour are designed should be a major target for WASP-HS research. One example is the national AI mission initiated by the government and led by four Swedish authorities[5], which aims to implement AI-based solutions that reduce personnel costs in public agencies, including healthcare. What if, from the citizens' perspective, the cost only appears to be moved from healthcare personnel to IT staff and AI-based equipment? It is essential to integrate WASP-HS research into the ongoing societal transformations in close collaboration with the public agencies, and to overcome the common view that research is only about evaluating the outcomes of implementations afterwards. Consequently, how WASP-HS research can more actively contribute to the current digital and AI transformation remains to be explored.
Another fundamental challenge is to build bridges between the core research disciplines. This came up in discussions with UC Berkeley and Stanford researchers, including Daniel Schwartz, Professor of Educational Technology at Stanford and chair of the WASP-HS International Scientific Advisory Board (ISAB). One example is the challenge of bridging brain research on plasticity and brain models, which operates at fundamental biological and chemical levels of understanding human cognition, with the reality of the classroom and children's motivation as they learn and develop in a social and cultural context. Another immediately relevant example is bridging societal and humanistic research on the futures of an AI-infused society with fundamental AI research and the development of new computational instruments. In both examples, AI is at the core of future research at both ends of the disciplinary spectrum, approached very differently in terms of research methodology and theoretical frameworks, yet the two ends must be combined in order to create sustainable value for society and humanity. How can new methodologies emerge, how can new theoretical frameworks be developed, and how can new and forgotten theoretical frameworks guide research on, and the development of, AI that is beneficial to humans and to society?
Future artificial intelligence should aim to be beneficial for humanity rather than merely intelligent
To understand today’s AI and its implications for the future, it is essential to reflect on the history of AI. Our team of visitors embarked on a journey through time with the help of Stuart Russell, Professor at UC Berkeley[6] and Director of the Center for Human-Compatible AI[7] and the Kavli Center for Ethics, Science, and the Public[8], which aim to develop future AI that is beneficial to society, and Wolfgang Wahlster, Professor[9], the initiator and Director of DFKI, the German AI research institute, who was visiting UC Berkeley.
Our journey spanned from computational techniques for mimicking human rationality, initially from the perspectives of cognitive capabilities and economic rationality maximizing some utility, touched down in the current reality of the new “super-parrots”, as Wahlster described today’s language models, and ended by envisioning how to move beyond the current fragmentary exploitation of AI techniques, still far too often used for purposes beneficial to neither humanity nor society in the short or long term.
Two distinct perspectives were conveyed during the visits. One was the consumer perspective on services and products, mainly represented by industry, where the future was envisioned as augmented with advanced, person-tailored extended reality to enhance the experience of presence – in messaging, distance meetings, social media, shopping and gaming, at home, at work and on the move. Privacy is at the core of this research, since extending reality in the public sphere puts bystanders’ integrity and privacy at stake. A more global take on the consumer perspective in social media was presented by Michael Bernstein at Stanford HAI. The key finding is that algorithms person-tailoring content affect local society very differently depending on the country’s degree of democracy. By altering the algorithms that generate the next person-tailored posts, users could be shown recommended posts in the opposite order to that produced by current algorithms, giving a more diverse view of current trends, which could potentially mitigate polarization in democratic countries. However, they have also shown that current social media actually promotes democratic processes in countries where people are striving for increased democracy, since current algorithms help mobilize joint efforts. This example illustrates the importance of carefully developing, researching and evaluating far-reaching technology across cultural contexts and power structures in a global world, in relation to the values being promoted.
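To make the reordering idea concrete, here is a minimal, hypothetical sketch in Python – not Bernstein’s actual recommender; the engagement_score field and the example data are invented for illustration. It simply inverts an engagement-based ranking so that the posts a conventional feed would bury appear first:

```python
from typing import Dict, List


def rank_feed(posts: List[Dict], invert: bool = False) -> List[Dict]:
    """Rank posts by a hypothetical per-user engagement score.

    With invert=True the ranking is reversed, surfacing the posts an
    engagement-optimizing recommender would normally place last and
    thereby giving a more diverse view of current trends.
    """
    ranked = sorted(posts, key=lambda p: p["engagement_score"], reverse=True)
    return ranked[::-1] if invert else ranked


# Hypothetical usage with invented data
posts = [
    {"id": 1, "engagement_score": 0.92},
    {"id": 2, "engagement_score": 0.15},
    {"id": 3, "engagement_score": 0.57},
]
print([p["id"] for p in rank_feed(posts, invert=True)])  # -> [2, 3, 1]
```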
The second perspective is on future, more advanced AI that can communicate with humans in a human-like, meaningful way. Wahlster points out that simulating human-like dialogue is one of the most ambitious scientific goals of this millennium. However, this requires a fundamentally multidisciplinary approach that embeds all the other cognitive abilities underlying human intelligence, striving for AI-completeness, in addition to social and socio-technical perspectives. Russell strives to advance the computational treatment of emergent human preferences to increase human control, and to fundamentally shift the view of what AI is towards systems that are beneficial to humans and society, rather than intelligent agents striving to reach a fixed objective.
During its first five years, WASP-HS has established itself in the national and international research arena that strives to increase knowledge about, and infuse society with, humane AI that is human-centred and beneficial to society. I look forward to the coming period of WASP-HS research, when we can raise the ambition further while addressing societal challenges in collaborations across disciplines and organisations for the benefit of Swedish society.
References
[1] https://wallenbergresearchlink.stanford.edu/
[2] https://hai.stanford.edu/
[3] https://bair.berkeley.edu/
[4] https://initiatives.weforum.org/c4ir/home
[5] https://www.digg.se/analys-och-uppfoljning/publikationer/publikationer/2023-01-23-slutrapport-uppdrag-att-framja-offentlig-forvaltnings-formaga-att-anvanda-artificiell-intelligens
[6] http://people.eecs.berkeley.edu/~russell/
[7] https://humancompatible.ai/