WASP-HS Workshop in conjunction with the conference AI for Humanity and Society 2024

Privacy and AI: Towards a Trustworthy Ecosystem (AITrust)

Hosts

Xuan-Son (Sonny) Vu, Lili Jiang, Elena Volodina, Simon Dobnik, Therese Lindstrom, Johanna Björklund

About

The wider adoption of machine learning (ML) and artificial intelligence (AI) has made applications successful across many sectors of society, such as healthcare, finance, robotics, transportation and industrial operations, by providing intelligence in real time [1-2]. Designing, developing and deploying reliable and trustworthy AI applications is essential to offering trustworthy services to users in high-stakes decision making [2-4]. For instance, AI-assisted robotic surgery, automated financial trading, autonomous driving and many other modern applications are vulnerable to re-identification attacks, concept drifts, dataset shifts, misspecifications, misconfiguration of AI algorithms, perturbations, and adversarial attacks beyond human or even machine comprehension, thereby posing dangerous threats to various stakeholders at different levels. Moreover, building trustworthy AI systems requires substantial multi-party effort to address the different mechanisms and approaches that could enhance user and public trust. Topics of interest in trustworthy AI include, but are not limited to: (i) privacy preservation, (ii) bias and fairness, (iii) robust mitigation of adversarial attacks, (iv) improved privacy and security in model building, (v) decency, (vi) model attribution and (vii) scalability of models under adversarial settings [1-5]. All of these topics are important and need to be addressed. This workshop aims to draw together new and ongoing work in AI to address the challenges of ensuring reliability in trustworthy systems.

The challenges in different AI systems include, but are not limited to, (i) data collection and use, (ii) data sharing and aggregation, (iii) re-identification, and (iv) secure and private learning. Nonetheless, all aspects of AI systems that address reliability, robustness and security are within the scope of this workshop. The workshop will focus on robustness and performance guarantees, as well as the consistency, transparency and safety of AI, which are vital to ensuring reliability. It will bring together experts from academia and industry and inspire discussion on building trustworthy AI systems, including developing and assessing theoretical and empirical methods, practical applications, initiating new ideas and identifying directions for future studies. Original contributions, ongoing work, as well as comparative studies among different methods, are all welcome.

Goals

A cross-disciplinary forum for advancing the design, development and deployment of reliable and trustworthy AI applications.

Preparation for participants:

Participants are encouraged to submit a one-page abstract about the research work they will present at the workshop. Depending on the number of submissions and the topics of interest, we will arrange short-talk presentations for selected works. Authors of all accepted abstracts should prepare a poster to present at the workshop to foster discussion among participants, regardless of whether their work is selected for a short talk.

Registration

Registration information and important dates will be provided via https://mormor-karl.github.io/events/AITrust-Workshop/

Please note that in order to participate in this workshop you must also register for the conference via the event page.