
"United we stand, divided we fall".
A study on potential legal measures to fight algorithmic discrimination

Article

Palmirotta, Federica

Hungarian Labour Law E-Journal

2024

2

1-22

digital economy ; crowd work ; discrimination ; labour law

EU countries

Law

http://hllj.hu/

English

"The adoption of algorithms and Artificial Intelligence within the world of work entails new risks, for
instance the harm of algorithmic discrimination for workers.
Currently, labour lawyers are trying to assess the adequacy of the current antidiscrimination law
framework to address the challenges that digitalisation and algorithmic management pose, despite
also highlighting some of its limitations.
Thus, the aim of this paper is to focus on the other legal sources or legal proposals on the European
Union level that address issues linked with algorithmic management and, consequently, algorithmic
discrimination, in order to investigate whether and how these legal measures corroborate the non discrimination law framework.
To this end, focus is placed on the General Data Protection Regulation, the AI act, as well as the
Directive on improving working conditions in platform work, with the aim to select the regulations
which constitute valuable additions to the non-discrimination and equal treatment law and could be
useful to prevent, detect and redress algorithmic discrimination, enforcing equality obligations.
The joint analysis of these regulatory sources shows that each of them, despite its shortcomings,
provides some forms of protection against discriminations and, indirectly, addresses some of the
limitations acknowledged in the antidiscrimination law, therefore, confirming that the interplay
of these provisions, if enforced effectively, could become a potential solution against algorithmic
discrimination, while also representing an opportunity to correct biases already happening in humans'
mind and yet opaque and not explicit."

Digital
