It will be what we want it to be: Sociotechnical and Contested Systemic Risk at the Core of the EU’s Regulation of Platforms’ AI Systems
Keywords:
AI Systems, Digital Platforms, Systemic Risk, EU Law, Participation
Abstract
The EU regulates AI systems of large digital platforms using a risk-based approach developed primarily through the Digital Services Act (DSA) and the AI Act (AIA). The existing literature highlights two main challenges to this regulatory strategy: the potentially unconstrained discretion and informational power of regulated tech companies, and the limited predictive value of risk regulation for less quantifiable forms of harm. This paper describes and systematises how EU law intends to address these challenges and ensure effective AI risk management processes. Through doctrinal analysis of the DSA, AIA, and their implementing laws and soft law, it lays out the integrated risk management framework these regulations establish for platforms’ AI systems. It argues that this integrated framework rests on three main normative commitments: (i) AI systemic risks should be framed sociotechnically, (ii) their management should be methodologically contextual, and (iii) civil society should be actively involved in identifying and mitigating AI systemic risks. On this last commitment, however, the mechanisms for civil society participation remain especially unclear. This paper thus offers an overview of all formal and informal spaces of participation in this risk management framework, differentiating them by their institutional setup, rationales for civil society intervention, types of expertise sought, and actors involved. Overall, this paper advances the dialogue on the EU’s risk-based approach to platform and AI regulation, offering a possible baseline for critique and empirical inquiry into its implementation.