Author

Virginia Dignum
Professor of Responsible AI at Umeå University


Why We Shouldn’t Pause Research on AI, but Instead Prioritize Multidisciplinary Research and AI Governance

Published: May 4, 2023

Authors: Virginia Dignum, Christian Balkenius, Ingar Brinck, Francis Lee, Anna-Sara Lind, Helena Lindgren (members of the WASP-HS management team).

Abstract. The recent open letter from the Future of Life Institute (FLI) calling for a pause on the development of AI systems has generated a significant amount of discussion. We argue that meaningful action to address the potential dangers of large generative models requires increased efforts on AI governance and on research funding structures that build a sustainable basis for transparent research on the foundations and consequences of AI technologies and their societal applications. We believe that we should prioritize addressing the actual impact of AI in its current state, rather than raising alarm about theoretical risks associated with superintelligence or AGI. This requires multidisciplinary research on the impact of AI and autonomous systems that brings knowledge and expertise from the humanities and social sciences together with engineering and computer science. In Sweden, the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS) is an example of such an effort.

Current developments around large generative models have drawn substantial attention to the speed of AI development. A recent open letter urging a worldwide pause on giant AI experiments while regulations and policies are put in place draws attention to the possible dangers of such developments. As the management team of WASP-HS, a research program specifically focused on studying the consequences of AI for humanity and society, we stress that it is vitally important not to pause research on AI. We also stress the importance of building a sustainable basis for vital and transparent research exploring the foundations and consequences of AI technologies and their applications to society.

The Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS) is a ten-year initiative that since 2019 has engaged Swedish academia and research institutes in advancing multidisciplinary research on the impact and challenges of AI and autonomous systems for humanity and society.

Wishful thinking

Research on previous technological shifts teaches us that it is common to overestimate the societal impact of new technologies in the short run and to underestimate their long-term consequences. In the technical world, any novel feature is presented as a breakthrough of technological wizardry, the next big thing that will change the world forever. “This time it is different!” On occasion this is indeed the case, as with the introduction of the World Wide Web, but alas, most of the time it is not.

Nowhere is this more apparent than in the field of artificial intelligence (AI). Time after time, we have been promised technology that will compete with or even surpass human cognitive abilities. The history of overselling the capabilities of AI systems is as old as the field itself.

The perceptrons of the late 1950s were described as universal learning machines, the expert systems of the 1980s were supposed to replace human experts, and the neural network models popularized during the 1990s were said to be the solution to any learning task. Every advancement was portrayed as the definitive answer to artificial intelligence, often due to the media’s insatiable craving for sensational news.

In recent years, we have seen a surge of impressive results related to deep-learning networks. Although some of the underlying methods were developed as early as the 1980s, the availability of high-performance computers and of almost limitless training data on the internet has radically changed what can be done with these systems. With the public availability of systems based on generative AI models, such as large language models (LLMs) including ChatGPT, anyone interested can interact with one of the most advanced AI systems ever developed. What we see now is a perfect storm: companies promoting their products, loud voices speculating about the impact of their work, the media falling victim to the power of a good story, the scientific community not delivering enough knowledge and research on the impact of technology, and the public delighted by watching a magic trick well performed.

Yes, these systems are impressive and even very useful to the informed user. But their abilities are greatly overstated. With enough data, there is always something that looks like a good reply to any question, and it is easy to be seduced by the apparent intelligence of these systems. To quote Qui-Gon Jinn in The Phantom Menace, “The ability to speak does not make you intelligent”. Just because a reply is more eloquent than a web search does not mean that the technology is in any way conscious.

With this in mind, it is surprising that a number of prominent figures in the computer industry and academia have proposed a ban on AI research for half a year to figure out the societal consequences of the technology. As of today, the open letter has gathered more than 25,000 signatures. We would like to propose an alternative perspective.

First, it is unlikely that a ban on research and development will have any real impact, not only because it is unenforceable, but because the solution to the perceived problems with generative AI is more research and more discussion, not less. Without a well-thought-out plan for studying the consequences of advanced AI systems, a ban on further research is not likely to have any positive outcome.

More importantly, the letter promotes an unrealistic view of the current state of artificial intelligence. There have been impressive results, but we are nowhere near actual human intelligence. Furthermore, by blurring the border between science and fiction, the letter risks undermining the good research that is actually being done.

Real consequences

It is possible, or even likely, that the intelligence of machines will one day surpass that of humans, but that day is not today, and probably not tomorrow or in the near future either. Unfortunately, much of the reporting put forward is fantasy. The consequences of the technology we have here today, however, are very real: not because superintelligence is lurking around the corner, but because artificial intelligence has matured enough to actually be useful. This is what we should be focusing on.

The open letter arouses fear of ‘superintelligence’ or ‘Artificial General Intelligence’ (AGI). This is a strategy that we often see propagated by the so-called longtermism movement: as long as people are sufficiently afraid of the future, no one pays close attention to the harms of the present. We find this very worrying and cynical.

AI systems are integrated into a great number of different applications that differ widely in use, consequence, and impact. To treat all implementations of all large language models as having the same challenges and problems is to reduce AI to a monolithic technology assumed to have homogeneous consequences. We can expect different AI systems to have different consequences in different situations. If we can learn anything from the history of technology, it is that new technologies are adopted in widely different ways and have widely different consequences. After all, applying the same critique to every application of, for instance, the electric motor would not make sense. Nor would adopting the same stance toward all generative AI models.

Noticeably missing from the letter is any concern regarding the environmental impact of these models and the lack of transparency about them: not only their energy consumption, but also their cost in human resources and human dignity, and the large amount of resources needed for the physical infrastructure.

Instead, we see an urgent need to educate researchers, policy makers, and the public on the possibilities and risks of current technology and on the real-world problems we already see with AI now (e.g. the generation of disinformation, the spread of social biases, the exploitation of workers, and discrimination).

Policies need to be put in place that take into account the actual capabilities of current AI systems and of those likely to appear in the near future. Here we agree with the open letter, but we contend that the most important step at this moment is to work collaboratively on a global plan and on concrete steps (technical, organisational, and legal) to ensure that this is done. The six-month time frame also suggests a race, with a finish line at which we will win or lose. This narrative counteracts the systematic, long-term work that is required to explore consequences and to further improve AI technologies, their implementations, and the regulations that govern them.

Demanding responsibility and accountability from organisations developing and deploying LLMs and other generative systems must be accompanied by concrete steps on governance at the national and global levels. Not just ‘nice words’ about which principles AI should follow, but specific solutions: for instance, preconditions for making such systems available, such as technical requirements on transparency and explainability, audits of the organisations deploying these systems, and perhaps also requirements on the expertise of those using them. After all, we do not let pharmaceutical companies release medicines without extensive testing, nor do we let people drive cars without a driver’s license.

How WASP-HS contributes

Ensuring that AI is employed for maximal benefit will require wide participation. Within WASP-HS, we are working to establish a constructive, collaborative, and scientific approach across disciplines, one that ensures that technological development goes hand in hand with deep knowledge from the humanities and social sciences.

Collaboration with a broad spectrum of stakeholders (academia, industry, government, and civil society) will improve our understanding and build a rich network of partnerships to ensure the responsible development and use of AI technologies.

WASP-HS is concretely working on:

  • Developing excellent research and expertise on the human and societal impact and challenges of AI and autonomous systems, from a multidisciplinary perspective grounded in the humanities and social sciences
  • Supporting large-scale transdisciplinary capacity-building and skill development through recruitment and graduate education
  • Facilitating open and continuous dialogue across disciplines and with industry and society

Besides strengthening education broadly across disciplines at academic institutions through these capacity-building efforts, WASP-HS is also joining forces with WASP in the Wallenberg AI and Transformative Education Development Program (WASP-ED), a research program aimed at advancing AI education across disciplines through national collaboration among universities.

We strongly believe that the way forward in a rapidly evolving society is the continuing, collaborative development of sustainable conditions and infrastructures for expanding vital, transparent research and education on the foundations and consequences of AI technologies and their applications to society, together with the agile implementation of regulatory frameworks.

Read more about WASP-HS: https://wasp-hs.org/

See also the work of our sister programs: https://wasp-sweden.org/ and https://wasp-ed.org/

Contact

For comments please contact Virginia Dignum at virginia@cs.umu.se.
