The use of Artificial Intelligence (AI) can inadvertently give rise to issues of data protection compliance and equality law. Used properly, however, it also provides a unique opportunity to combat implicit systematic discrimination. The new EU AI Act supports such an optimistic approach towards AI.

Discrimination through non-automated processes

In the public discourse on AI and the associated risks of discrimination, it is often overlooked that human decisions can themselves be unconsciously based on non-objective criteria.

As a result, forms of implicit systematic discrimination can occur in the course of workforce decision making. For example, HR decisions made “on instinct” may be based on ill-considered preferences. Research has shown that individuals tend to attribute more skills to people they find attractive. This and other unconscious biases can trigger discriminatory results. Furthermore, irrelevant factors, such as the decision-maker’s own current state of wellbeing, can also impact decision-making.

AI as an opportunity in the fight against discrimination

AI-powered automation of decision-making processes carries the risk of perpetuating discriminatory effects where the AI systems are trained on data reflecting previous biased human decisions. This risk can, however, be mitigated through appropriate quality control measures: assessing the training data for discrimination and bias, and taking remedial action where required.
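To make this concrete, below is a minimal Python sketch of one such quality control check: comparing selection rates in historical decision data across groups defined by a sensitive attribute (the so-called four-fifths rule known from US employment practice). The field names, sample data and 0.8 threshold are illustrative assumptions, not requirements taken from the AI Act.

    from collections import defaultdict

    def selection_rates(records, group_key, outcome_key):
        # Share of positive outcomes (e.g. "hired") per group.
        totals, positives = defaultdict(int), defaultdict(int)
        for record in records:
            totals[record[group_key]] += 1
            positives[record[group_key]] += int(record[outcome_key])
        return {g: positives[g] / totals[g] for g in totals}

    # Hypothetical training records derived from past hiring decisions.
    training_data = [
        {"gender": "f", "hired": 1}, {"gender": "f", "hired": 0},
        {"gender": "f", "hired": 0}, {"gender": "m", "hired": 1},
        {"gender": "m", "hired": 1}, {"gender": "m", "hired": 0},
    ]

    rates = selection_rates(training_data, "gender", "hired")
    ratio = min(rates.values()) / max(rates.values())
    print(rates, ratio)
    if ratio < 0.8:  # common rule-of-thumb threshold for adverse impact
        print("Potential bias detected; remedial action may be required.")

Remedial action could then take the form of re-weighting or re-sampling the training data. It is precisely for checks and corrections of this kind that sensitive data may be indispensable.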

Implementing these measures, however, will often require collecting sensitive data about the individuals whose data forms part of the training data, for example, data about a person’s religious affiliation or sexual orientation. The collection of any such personal data must comply with the EU General Data Protection Regulation (GDPR), which imposes more stringent obligations for the processing of sensitive personal data.

AI Act introduces a paradigm shift

With the AI Act, the EU legislator has recognized for the first time that this challenge needs to be addressed. The Act expressly permits the use of sensitive data insofar as this is absolutely necessary to counteract discriminatory bias in high-risk AI systems. These high-risk AI systems include, in particular, AI systems that support work-related decisions, e.g., by evaluating employees.

The AI Act permits the processing of sensitive data under strict additional conditions. These conditions prioritize protecting data subjects’ interests under data protection law.

Specifically:

  • The use of sensitive data must be necessary for the detection and correction of biases, and is permitted only where other types of data, including synthetic or anonymized data, are not sufficient for this purpose.
  • Furthermore, sensitive data must be protected by state-of-the-art security measures. This includes strict access controls and the documentation of all access.
  • In addition, the sensitive data must be pseudonymized so that data subjects cannot be immediately identified (see the sketch after this list).
  • Finally, the sensitive data must not be transferred to unauthorized parties; access must be limited to authorized persons subject to appropriate confidentiality obligations. The sensitive data must be deleted as soon as it is no longer necessary for the detection and correction of biases.
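The pseudonymization condition can be illustrated with a minimal Python sketch using keyed hashing. The key, record fields and values below are illustrative assumptions; in practice, the secret key would be stored separately under the strict access controls described above.

    import hmac, hashlib

    SECRET_KEY = b"stored-separately-under-access-control"  # placeholder key

    def pseudonymize(identifier: str) -> str:
        # Replace a direct identifier with a stable pseudonym. Without the
        # key, the pseudonym cannot be reversed or independently re-created.
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"employee_id": "E-1042", "religious_affiliation": "..."}
    record["employee_id"] = pseudonymize(record["employee_id"])
    print(record)  # the sensitive attribute is now linked only to a pseudonym

Because the mapping can only be re-created with the key, the individual is not immediately identifiable from the training data itself, while controlled re-identification remains possible for authorized bias detection and correction work.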

All of this must be documented appropriately. This paradigm shift is an expression of a fundamental optimism that our social reality can be improved in a sustainable manner through properly regulated AI.


This is the first post of a three-part blog series. Click through to our second post, “GDPR compliance and inclusion: striking the right balance”, and look out for our final post, “The protection of gender identity under the GDPR”, next week.

Author

Dr. Lukas Feiler, SSCP, CIPP/E, has more than eight years of experience in IP/IT and is a partner and head of the IP and IT team at Baker McKenzie • Diwok Hermann Petsche Rechtsanwälte LLP & Co KG in Vienna. He is a lecturer in data protection law at the University of Vienna Law School and in IT compliance at the University of Applied Sciences Wiener Neustadt.

Author

Elisabeth is a partner in Baker McKenzie's Brussels office. She advises clients in all fields of IT, IP and new technology law, with a special focus on data protection and privacy aspects. She regularly works with companies in the healthcare, finance and transport and logistics sectors.

Author

Francesca Gaudino is the Head of Baker McKenzie’s Information Technology & Communications Group in Milan. She focuses on data protection and security, advising particularly on legal issues that arise in the use of cutting edge technology.

Author

Magalie Dansac Le Clerc is a partner in Baker McKenzie's Paris office. A member of the Firm's Information Technology and Communications Practice Group, she is a Certified Information Privacy Professional (CIPP).

Author

Vin leads our London Data Privacy practice and is also a member of our Global Privacy & Security Leadership team. He brings over 22 years of experience in this specialist area, advising clients from various data-rich sectors including retail, financial services/fin-tech, life sciences, healthcare, proptech and technology platforms.

Author

Prof. Dr. Michael Schmidl is co-head of the German Information Technology Group and is based in Baker McKenzie's Munich office. He is an honorary professor at the University of Augsburg and a specialist lawyer for information technology law (Fachanwalt für IT-Recht). He advises in all areas of contentious and non-contentious information technology law, including internet, computer/software, data privacy and media law. Michael also has a general commercial law background and extensive experience in the drafting and negotiation of outsourcing contracts and in carrying out compliance projects.