Guðbjörg Linda Rafnsdóttir
School of Social Sciences
Artificial Intelligence

In today's fast-paced, technology-driven world, artificial intelligence (AI) is becoming increasingly influential across various sectors. While AI has often been seen as a tool to promote fairness and reduce bias, concerns are growing that it might, in fact, amplify existing inequalities. This is particularly evident in how AI is used in employment to manage, monitor, and assess workers.

One branch of AI, known as Natural Language Processing (NLP), is used to analyze text and support decisions, such as identifying which job applicants should be shortlisted for interviews or which employees may require further training. These tools can also monitor for violations that could result in disciplinary action. However, the models that NLP-based systems rely on are not immune to bias, which raises concerns that they may discriminate unintentionally and lead to poor decisions.
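To make the concern concrete, the sketch below is a minimal, hypothetical illustration rather than a tool from the BIAS project: it trains a toy text classifier on past shortlisting decisions and then compares its selection rates across groups. If the historical decisions encode bias, the model can reproduce it, which is exactly what an audit of this kind is meant to surface.

```python
# Minimal illustrative sketch (not the BIAS project's tooling): a toy NLP
# shortlisting model plus a simple check of selection rates across groups.
# All data, labels, and group codes here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: application texts with past shortlisting decisions.
texts = [
    "10 years leading software teams, MBA",
    "career break for childcare, returning to data analysis",
    "recent graduate, internship in machine learning",
    "15 years in finance, fluent in three languages",
]
shortlisted = [1, 0, 1, 1]      # past human decisions (may encode bias)
group = ["m", "f", "f", "m"]    # protected attribute, used only for auditing

# Train a simple text classifier on the historical decisions.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, shortlisted)

# Audit: compare predicted selection rates per group (demographic parity gap).
preds = model.predict(X)
rates = {}
for g in set(group):
    selected = [p for p, gg in zip(preds, group) if gg == g]
    rates[g] = sum(selected) / len(selected)
print("selection rates by group:", rates)
print("parity gap:", abs(rates["m"] - rates["f"]))
```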

European research network

On behalf of the University of Iceland, I lead a research network called BIAS—Mitigating Bias of AI in the Labour Market, together with representatives of eight other universities, research institutes, and innovation organizations. Our goal is to explore how AI systems used in management across Europe might introduce or amplify diversity biases. By conducting a mixed-methods study, we are investigating where and how discrimination occurs in AI-driven management tools, what works well, and how diversity biases can be reduced.


We focus on the experiences and perceptions of employees and different stakeholders, including AI developers and HR professionals, regarding diversity biases in AI applications. Additionally, we conduct an ethnographic study across selected organizations in Iceland, Italy, the Netherlands, Norway, and Turkey, using interviews and fieldwork to deepen our understanding. A key technical objective of BIAS is to develop a proof-of-concept for a new, more equitable recruitment tool, based on NLP and Case-Based Reasoning.
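As a rough illustration of the general idea only, and assuming nothing about the project's actual design, the sketch below shows case-based reasoning over text: it retrieves the past hiring cases most similar to a new application and returns their recorded outcomes as a transparent point of reference, so the precedents behind a recommendation can be inspected. All case texts and outcomes are hypothetical.

```python
# Rough sketch of case-based reasoning over text (hypothetical data; not the
# BIAS proof-of-concept): retrieve the most similar past hiring cases for a
# new application and reuse their recorded outcomes as a reference.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Case base: past application texts with their documented outcomes.
case_texts = [
    "project manager, construction sector, 8 years experience",
    "nurse retraining as a data analyst, online certificates",
    "software developer, open-source contributions, remote work",
]
case_outcomes = ["hired", "interviewed", "hired"]

vectorizer = TfidfVectorizer()
case_matrix = vectorizer.fit_transform(case_texts)

def retrieve_similar_cases(new_application, k=2):
    """Return the k most similar past cases with their outcomes and scores."""
    query = vectorizer.transform([new_application])
    scores = cosine_similarity(query, case_matrix).ravel()
    ranked = scores.argsort()[::-1][:k]
    return [(case_texts[i], case_outcomes[i], float(scores[i])) for i in ranked]

print(retrieve_similar_cases("backend developer with open-source experience"))
```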

Towards more equitable AI tools

Recognizing the different ways AI can lead to discrimination is crucial in shaping technologies that promote fairness in the labor market. Our research seeks to ensure that AI systems benefit everyone, regardless of characteristics such as gender, age, race, ethnicity, religion, or sexuality, rather than perpetuating existing inequalities. By fostering ethical AI development, we can help prevent AI from reinforcing stereotypes or excluding marginalized groups.

Addressing these biases is essential to closing the well-known digital gender gap and ensuring that AI serves all individuals equitably. Inclusivity should be a top priority in AI innovation to stabilize, rather than further destabilize, an already rapidly changing society.
 

Funding

Horizon Europe

Researchers

Other team members

Dilys Sharona Quartey - Doctoral Graduate Student | University of Iceland

International networks and cooperation:  
The members of the network BIAS—Mitigating Bias of AI in the Labour Market.  
