Artificial Intelligence in Management research – an outlook from the 2020 Academy of Management conference
In this blog post, Anna Yström, Associate Professor in Industrial Management, Linköping University, and researcher in the WASP-HS project “Complex intelligent systems and the future of management”, gives a report from this year’s Academy of Management conference.
Anna Yström states that researchers have just started to scratch the surface when it comes to understanding how AI can impact management and that there will be many opportunities for the WASP-HS project to make important contributions to the research community.
Like so many other conferences, this year’s Academy of Management conference had to transition into a virtual mode. It is the largest management conference in the world, with over 10 000 participants in a regular year. This year, nothing was regular, and while the virtual mode most likely enabled some newcomers to attend who would otherwise not have been able to travel, the interaction between participants suffered in some sense, as it was a relatively small group of people who were able to make their voices heard and have an impact in this format.
Given the focus of our WASP-HS project – “Complex intelligent systems and the future of management” – this year’s conference was important for getting a sense of where the research frontiers are and what is at the heart of the management community’s discussions when it comes to Artificial Intelligence and management. Our project aims to contribute to future purposeful management and AI for a better society. This refers to a wider purpose of organization and management beyond economic exchanges, e.g. including sustainability and safety, and reflects something more aspirational: how we can increase the odds of realizing the societal benefits and possibilities of AI.
With that as a starting point in the project, we aim to provide answers to how decisions on complex intelligent systems can be made, how organizations can be designed, how organizations work together with others in ecosystems, and how we can harness complexity. With such questions in mind, I was happy to note that several divisions (interest groups) such as Technology and Innovation Management, Organization and Management Theory and Organizational Development and Change, had decided to organize sessions that focused on different aspects of AI and management.
In particular, the symposium “Broadening our sight to Artificial Intelligence in Management” offered several intriguing presentations. Manju Ahuja talked about socio-technical systems and the (need for) boundaries of agency for AI. She argued that as the objectives of AI algorithms are based on human input, it is important to remember that AI acts on behalf of humans, and it is vital to maintain a “human in the loop” perspective in order not to delegate too much agency to AI. Natalia Levina continued by stating that, in researchers’ efforts to delineate decision-making processes and reduce biases, the research community has perhaps placed too much focus on understanding and developing the psychological (cognitive) aspects of AI, and not enough on the sociological aspects – how humans will relate to and work with AI in society.
In the session on “Artificial Intelligence and Innovation ethics”, several presenters discussed (the usual) ethical considerations, where e.g. Samer Faraj argued that an equivalent of the Hippocratic oath is needed for AI, to ensure that harm is minimized. Miguel Alzola problematized whether AI can learn virtues (in the Aristotelian sense), and claimed that when it comes to “doing the right thing” in any given situation, even in a crisis, it is very difficult for AI to know what that is, as it is not possible to define a decision-making process for this in advance. In his view, doing the right thing is “uncodifiable”, which re-emphasizes the need to keep a human in the loop when implementing AI. This was a highly intriguing presentation, as it links closely to the starting point of our project with its explicit focus on “purposeful management of AI” and how organizational and managerial processes and practices may turn out to be necessary to complement “incomplete” AI solutions (which realistically will never become perfect or all-inclusive).
In the same session, Tae Wan Kim talked about the need for “explainable AI” – the importance of understanding the rationale behind AI actions, and of AI being able to offer “good explanations”. His point was also empirically illustrated in the session on “Artificially intelligent futures: Technology, the changing nature of work and organizing”, in an excellent paper presentation by Sara Lebovitz and colleagues, looking at how medical professionals used AI for critical judgments, and why the doctors often decided NOT to use the AI tools available if they could not understand the AI results or lacked organizational tools or routines to verify the AI’s accuracy. The discussant, Beth Bechky, also made an interesting point highlighting the power dynamics related to AI implementation, where e.g. medical doctors have the professional status to say no to the AI, whereas Uber drivers have no way of refusing the use of a facial recognition app required to do their job. This points to the critical need to better understand and develop the interfaces between AI and humans, as well as the organizational context, in order to achieve successful implementation – it will not happen by itself.
Still, despite this growing interest among management scholars, my impression is that management researchers have just started to scratch the surface when it comes to understanding and explicating how AI can impact humans, organizations, management and, of course, society at large. At this point in time, the management community seems to be talking more about what kinds of questions need to be explored in the future, rather than presenting findings from empirical studies on any wider scale. Conclusions about the potential consequences (both positive and negative) of AI appear to be drawn with limited insight into the technical aspects of developing AI.
Overall, it was somewhat disappointing to note that this loosely defined research field is still at a budding stage, which became evident in the predominant focus on reiterating rather general, clichéd notions of the implications of AI, without adding depth or critical reflection to such preconceptions. Only time will tell if management scholars will be able and willing to rise to the challenge and contribute to a more nuanced understanding of the implications of AI for organization and management.
However, the current situation also offers ample opportunities for our WASP-HS project to make important contributions to the research community. Questions of specific interest to us, e.g. embedding AI in safety-critical systems or infrastructures, were not addressed to any great extent during this conference, nor was the question of how dynamics and interactions in organizations and ecosystems of actors may change as AI becomes embedded in them. In addition, inspiring sessions on temporality, organization design and process studies point to a need and an opportunity to study the future in the making, which gives us motivation and support to study AI implementation and adoption in real time.
There would also be significant value in cross-disciplinary studies that more clearly integrate technical and managerial and/or organizational perspectives in order to further explicate purposeful management of AI – a perfect platform for the three PhD students Youshan Yu, Bijona Troqe and Elinor Särner, who are now embarking on their explorative journeys together with me and my colleagues in the WASP-HS project!
Anna Yström, Associate Professor in Industrial Management, Linköping University