
Winter Conference 2026 Workshop

Bias and Prejudices in AI Training Data

This workshop explores how AI systems inherit, and even amplify, the pre-existing biases and prejudices in the datasets used to train them. Because AI systems identify patterns in their training data, they are inevitably predisposed to reproduce the distortions embedded in those datasets. This skewed image of reality can lead to unfair, inaccurate, or harmful outcomes wherever AI is deployed, including in healthcare, the workplace, and social services. The workshop will explore three questions:

  1. What causes bias and prejudices in training datasets? 
  2. What are the consequences of these biases and prejudices?
  3. What solutions (e.g. technical, societal, legal) could overcome disparities in training datasets?

Each participant is expected to contribute to the workshop by discussing how biases and prejudices in AI training data play a role in their own project. Depending on the stage of their project, participants may also wish to discuss solutions for overcoming the skewed picture created by biased and prejudiced training datasets.


Structure of the Workshop

The workshop will begin with small-group discussions based on the questions above. Flip charts, post-its, and markers will be available to facilitate the conversation. After these sessions, we will come together for a roundtable discussion, where we will step back to consider the broader picture and explore, among other things, the common themes emerging across the groups.*


Submission Details

Prospective participants are required to submit an abstract of 200-250 words (including references, if any).

Abstracts should be e-mailed to sarah.de_heer@jur.lu.se by Friday 30 January at the latest.

* The anonymised discussions and conclusions may help guide the author’s future scholarly and educational activities.