Gunnar Holmberg and Nicolette Lakemond

The AI@work conference – a conference beyond our usual focus

Gunnar Holmberg

Attending international conferences is an important part of the work for researchers connected to WASP-HS. Here reports Gunnar Holmberg, adjunct professor at Linköping University and responsible for business development for future systems at Saab. He participates in the project The emergence of complex intelligent systems and the future of management, within WASP-HS.

We are in the early stages of WASP-HS. To us, WASP-HS is a great opportunity to be part of a melting pot of perspectives that we seldom encounter, and this will certainly stimulate us and challenge our own views. Participating in WASP-HS will hopefully help us make additional relevant contributions to the vibrant area of AI and its impact on society.

One thing we do is try to participate in meetings and conferences beyond our usual focus. Just before the coronavirus outbreak in Europe, we travelled to a conference called AI@work in Amsterdam. The conference was organized by Reshaping Work and positioned as “an innovative international and multi-stakeholder events series discussing the latest trends concerning digital trends and the future of work by promoting research and collective thinking”.

A central question was whether AI would shrink, transform, or grow the labor market. The general opinion at the conference seemed to be that it will mainly transform the labor market, which is perhaps not so surprising coming from a group engaged in the topic. One interesting point of discussion was that narrow AI is so far capable of performing only single tasks, while most jobs actually consist of many tasks.

We took the opportunity to present and discuss some of our ideas on the implications for management when AI enters complex systems and critical infrastructures. We discussed combining model-based and data-based methods to achieve complex systems with embedded intelligence. Such systems are developed in organizational or industrial ecosystems, and when AI technologies are used, society increasingly expects these ecosystems to consider a wider set of ethical aspects in their decisions and activities.

To reflect this in our study, we used the widely supported Asilomar AI principles, which express the ambition to develop AI ethically. In the presentation we outlined a number of management challenges, such as how to implement satisficing, purposeful decision-making that adapts dynamically over the system’s life cycle. Our presentation was part of a session with interrelated presentations and resulted in interesting discussions.

Most other presentations at the conference addressed standalone applications of narrow AI, but acknowledged the influence of the application context and the need for tight integration to gain full impact. Our focus on complex systems, models, and data thus attracted a great deal of curiosity.

One challenge discussed in several of the studies presented at the conference is the applicability of a solution: how to find the boundaries of its applicability, how trustworthy a model generated through machine learning is, and how explainable it is.

We saw several examples in health care, e.g. detecting cancer tumors and solutions that can help prioritize which patients should receive intensive care. Several researchers were involved in action research and experiments. To be allowed to perform their experiments, these researchers had negotiated how to conduct and delimit each experiment so that it would not be dangerous.

The presentations typically reported how a particular application would make a difference once AI is more mature, while at the same time reporting rather strong limitations of their experiments using current AI. We missed studies on how other technologies and processes could work together with AI to benefit from a less mature AI, supporting early implementation without overconfidence in the method. A simple example is when AI performs the first scan for cancer tumors: in what ways could human postprocessing be used to eliminate false positives and, more importantly, false negatives? Such hybrid methods probably have many implications that deserve researchers’ attention.

This is just one illustration of the importance of considering management and technology together. Part of the solution will come from technical progress in AI, while processes and routines are a necessary complement to enable early purposeful applications.

If you are interested, you are welcome to look at our short project presentation, The emergence of complex intelligent systems and the future of management.

Image above: Gunnar Holmberg and Nicolette Lakemond, participating in the research project. Photo: Mikael Sönne/LiU.