Article 50 AI Act: Do the Transparency Provisions Improve Upon the Commission’s Draft?

Authors

  • Nicolaj Feltes

Keywords

AI Act, Transparency, Generative AI, Deep Fakes, DSA

Abstract

On April 21, 2021, the European Commission presented the first draft of the EU Artificial Intelligence Act, marking a significant step in Europe’s regulatory approach to Artificial Intelligence (AI). The original proposal already included foundational transparency requirements, many of which are now formalised in Art. 50 of the Artificial Intelligence Act (hereinafter: AI Act). However, as AI technologies evolved rapidly – including the emergence of advanced tools like ChatGPT – the transparency obligations in Art. 50 AI Act were expanded to address new concerns around user awareness and content authenticity. Thus, notable additions such as labelling requirements for synthetic content and AI-generated texts were implemented in the final version of the AI Act.

In its finalised version, the AI Act specifies five distinct transparency obligations designed to enhance clarity and user protection across various AI applications. These obligations apply to interactive AI systems such as chatbots (para. 1), AI systems for the creation of synthetic content (para. 2), systems for emotion recognition or biometric categorisation (para. 3), AI-generated deep fake content (para. 4, subpara. 1), and AI-generated texts (para. 4, subpara. 2).

This article closely examines these transparency obligations, addresses potential issues of interpretation and practical challenges, and discusses whether the final version of the AI Act effectively resolves the problems present in the Commission’s draft.

Published

2025-09-04
