EU AI Act
- New
A Commentary
Hardcover, English, 2025
By Lukas Feiler, Nikolaus Forgó and Michaela Nebel
3 189 kr
Available to order. Ships within 5-8 business days
Free shipping for members on orders of at least 249 kr.

Artificial intelligence is, without doubt, transforming the way we work, the way we live and how we perceive the world. It is less clear, however, whether – and to what extent – the law can and should respond to these changes, and whether it has the potential to shape them.

This invaluable commentary on the EU Artificial Intelligence Act (EU AI Act) offers a thorough analysis of this groundbreaking legislation. As AI technologies become increasingly integrated into society, it is imperative to address the potential risks and ethical concerns they bring.

The introductory chapter gives readers a solid foundational understanding of the EU AI Act, providing a comprehensive overview of the act as a whole. The following chapters offer insightful, article-by-article examinations by renowned experts in the field. Lukas Feiler, Nikolaus Forgó and Michaela Nebel bring diverse perspectives and deep knowledge to the discussion, making this an essential reference for anyone involved in AI regulation and compliance.

Businesses seeking initial guidance and pragmatic solutions for navigating the EU AI Act will find this book particularly useful. It is also an indispensable tool for lawyers, judges and other legal professionals who must navigate the complexities of AI-related regulation.
Product information
- Publication date: 2025-10-27
- Dimensions: 160 x 240 x 30 mm
- Weight: 900 g
- Format: Hardcover
- Language: English
- Number of pages: 500
- Publisher: Globe Law and Business Ltd
- ISBN: 9781837231065
Table of contents
- Preface
- List of abbreviations
- List of recitals of the AI Act
- An introduction to the AI Act
  - 1. The scope of application of the AI Act
    - 1.1 The material scope of application: What types of AI are covered?
    - 1.2 The personal scope of application: To whom does the AI Act apply?
    - 1.3 The territorial scope of application: Where does the AI Act apply?
    - 1.4 The temporal scope of application: When does the AI Act apply?
  - 2. The AI Act as an instrument of product regulation
    - 2.1 An overview of European Union product regulation
    - 2.2 The role of harmonised standards and common specifications
    - 2.3 External conformity assessment bodies and their accreditation and notification
    - 2.4 The relationship with other harmonisation legislation
  - 3. Risk-based regulation of AI systems and AI models
    - 3.1 Prohibited AI systems
    - 3.2 High-risk AI systems
    - 3.3 GenAI and certain biometric AI systems that are subject to special transparency regulations
    - 3.4 Other AI systems
    - 3.5 General-purpose AI models
  - 4. An overview of the obligations of the AI Act
    - 4.1 Obligations of the providers
      - 4.1.1 Obligations regarding high-risk AI systems
      - 4.1.2 Obligations regarding GenAI systems pursuant to Article 50
      - 4.1.3 Obligations regarding other AI systems
      - 4.1.4 Obligations regarding general-purpose AI models
      - 4.1.5 Obligations regarding general-purpose AI systems
    - 4.2 Obligations of importers
      - 4.2.1 Obligations regarding high-risk AI systems
      - 4.2.2 Obligations regarding other AI systems
    - 4.3 Obligations of distributors
      - 4.3.1 Obligations regarding high-risk AI systems
      - 4.3.2 Obligations regarding other AI systems
    - 4.4 Obligations of the deployers
      - 4.4.1 Obligations regarding high-risk AI systems
      - 4.4.2 Obligations regarding GenAI and certain biometric AI systems pursuant to Article 50
      - 4.4.3 Obligations regarding other AI systems
    - 4.5 Obligations for authorised representatives
      - 4.5.1 Obligations regarding high-risk AI systems
      - 4.5.2 Obligations regarding general-purpose AI models
  - 5. Measures to promote innovation
    - 5.1 AI regulatory sandboxes
    - 5.2 Testing in real-world conditions
  - 6. Enforcement by the authorities
    - 6.1 Market surveillance of AI systems
      - 6.1.1 Regulatory responsibility for market surveillance
      - 6.1.2 Powers of the market surveillance authorities
      - 6.1.3 The market surveillance procedure
    - 6.2 The AI Office as a supervisory authority for providers of general-purpose AI models
    - 6.3 Fines
  - 7. Liability law and enforcement by private individuals
- Text of the EU AI Act and commentary
  - Chapter I – General provisions
    - Article 1 Subject matter
    - Article 2 Scope
    - Article 3 Definitions
    - Article 4 AI literacy
  - Chapter II – Prohibited AI practices
    - Article 5 Prohibited AI practices
  - Chapter III – High-risk AI systems
    - Section 1 – Classification of AI systems as high-risk
      - Article 6 Classification rules for high-risk AI systems
      - Article 7 Amendments to Annex III
    - Section 2 – Requirements for high-risk AI systems
      - Article 8 Compliance with the requirements
      - Article 9 Risk management system
      - Article 10 Data and data governance
      - Article 11 Technical documentation
      - Article 12 Record-keeping
      - Article 13 Transparency and provision of information to deployers
      - Article 14 Human oversight
      - Article 15 Accuracy, robustness and cybersecurity
    - Section 3 – Obligations of providers and deployers of high-risk AI systems and other parties
      - Article 16 Obligations of providers of high-risk AI systems
      - Article 17 Quality management system
      - Article 18 Documentation keeping
      - Article 19 Automatically generated logs
      - Article 20 Corrective actions and duty of information
      - Article 21 Cooperation with competent authorities
      - Article 22 Authorised representatives of providers of high-risk AI systems
      - Article 23 Obligations of importers
      - Article 24 Obligations of distributors
      - Article 25 Responsibilities along the AI value chain
      - Article 26 Obligations of deployers of high-risk AI systems
      - Article 27 Fundamental rights impact assessment for high-risk AI systems
    - Section 4 – Notifying authorities and notified bodies
      - Article 28 Notifying authorities
      - Article 29 Application of a conformity assessment body for notification
      - Article 30 Notification procedure
      - Article 31 Requirements relating to notified bodies
      - Article 32 Presumption of conformity with requirements relating to notified bodies
      - Article 33 Subsidiaries of notified bodies and subcontracting
      - Article 34 Operational obligations of notified bodies
      - Article 35 Identification numbers and lists of notified bodies
      - Article 36 Changes to notifications
      - Article 37 Challenge to the competence of notified bodies
      - Article 38 Coordination of notified bodies
      - Article 39 Conformity assessment bodies of third countries
    - Section 5 – Standards, conformity assessment, certificates, registration
      - Article 40 Harmonised standards and standardisation deliverables
      - Article 41 Common specifications
      - Article 42 Presumption of conformity with certain requirements
      - Article 43 Conformity assessment
      - Article 44 Certificates
      - Article 45 Information obligations of notified bodies
      - Article 46 Derogation from conformity assessment procedure
      - Article 47 EU declaration of conformity
      - Article 48 CE marking
      - Article 49 Registration
  - Chapter IV – Transparency obligations for providers and deployers of certain AI systems
    - Article 50 Transparency obligations for providers and deployers of certain AI systems
  - Chapter V – General-purpose AI models
    - Section 1 – Classification rules
      - Article 51 Classification of general-purpose AI models as general-purpose AI models with systemic risk
      - Article 52 Procedure
    - Section 2 – Obligations for providers of general-purpose AI models
      - Article 53 Obligations for providers of general-purpose AI models
      - Article 54 Authorised representatives of providers of general-purpose AI models
    - Section 3 – Obligations of providers of general-purpose AI models with systemic risk
      - Article 55 Obligations of providers of general-purpose AI models with systemic risk
      - Article 56 Codes of practice
  - Chapter VI – Measures in support of innovation
    - Article 57 AI regulatory sandboxes
    - Article 58 Detailed arrangements for, and functioning of, AI regulatory sandboxes
    - Article 59 Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox
    - Article 60 Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes
    - Article 61 Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes
    - Article 62 Measures for providers and deployers, in particular SMEs, including start-ups
    - Article 63 Derogations for specific operators
  - Chapter VII – Governance
    - Section 1 – Governance at Union level
      - Article 64 AI Office
      - Article 65 Establishment and structure of the European Artificial Intelligence Board
      - Article 66 Tasks of the Board
      - Article 67 Advisory forum
      - Article 68 Scientific panel of independent experts
      - Article 69 Access to the pool of experts by the Member States
    - Section 2 – National competent authorities
      - Article 70 Designation of national competent authorities and single points of contact
  - Chapter VIII – EU database for high-risk AI systems
    - Article 71 EU database for high-risk AI systems listed in Annex III
  - Chapter IX – Post-market monitoring, information sharing and market surveillance
    - Section 1 – Post-market monitoring
      - Article 72 Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems
    - Section 2 – Sharing of information on serious incidents
      - Article 73 Reporting of serious incidents
    - Section 3 – Enforcement
      - Article 74 Market surveillance and control of AI systems in the Union market
      - Article 75 Mutual assistance, market surveillance and control of general-purpose AI systems
      - Article 76 Supervision of testing in real world conditions by market surveillance authorities
      - Article 77 Powers of authorities protecting fundamental rights
      - Article 78 Confidentiality
      - Article 79 Procedure at national level for dealing with AI systems presenting a risk
      - Article 80 Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III
      - Article 81 Union safeguard procedure
      - Article 82 Compliant AI systems which present a risk
      - Article 83 Formal non-compliance
      - Article 84 Union AI testing support structures
    - Section 4 – Remedies
      - Article 85 Right to lodge a complaint with a market surveillance authority
      - Article 86 Right to explanation of individual decision-making
      - Article 87 Reporting of infringements and protection of reporting persons
    - Section 5 – Supervision, investigation, enforcement and monitoring in respect of providers of general-purpose AI models
      - Article 88 Enforcement of the obligations of providers of general-purpose AI models
      - Article 89 Monitoring actions
      - Article 90 Alerts of systemic risks by the scientific panel
      - Article 91 Power to request documentation and information
      - Article 92 Power to conduct evaluations
      - Article 93 Power to request measures
      - Article 94 Procedural rights of economic operators of the general-purpose AI model
  - Chapter X – Codes of conduct and guidelines
    - Article 95 Codes of conduct for voluntary application of specific requirements
    - Article 96 Guidelines from the Commission on the implementation of this Regulation
  - Chapter XI – Delegation of power and committee procedure
    - Article 97 Exercise of the delegation
    - Article 98 Committee procedure
  - Chapter XII – Penalties
    - Article 99 Penalties
    - Article 100 Administrative fines on Union institutions, bodies, offices and agencies
    - Article 101 Fines for providers of general-purpose AI models
  - Chapter XIII – Final provisions
    - Article 102 Amendment to Regulation (EC) No. 300/2008
    - Article 103 Amendment to Regulation (EU) No. 167/2013
    - Article 104 Amendment to Regulation (EU) No. 168/2013
    - Article 105 Amendment to Directive 2014/90/EU
    - Article 106 Amendment to Directive (EU) 2016/797
    - Article 107 Amendment to Regulation (EU) 2018/858
    - Article 108 Amendments to Regulation (EU) 2018/1139
    - Article 109 Amendment to Regulation (EU) 2019/2144
    - Article 110 Amendment to Directive (EU) 2020/1828
    - Article 111 AI systems already placed on the market or put into service and general-purpose AI models already placed on the market
    - Article 112 Evaluation and review
    - Article 113 Entry into force and application
- Annex I – List of Union harmonisation legislation
- Annex II – List of criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii)
- Annex III – High-risk AI systems referred to in Article 6(2)
- Annex IV – Technical documentation referred to in Article 11(1)
- Annex V – EU declaration of conformity
- Annex VI – Conformity assessment procedure based on internal control
- Annex VII – Conformity based on an assessment of the quality management system and an assessment of the technical documentation
- Annex VIII – Information to be submitted upon the registration of high-risk AI systems in accordance with Article 49
  - Section A – Information to be submitted by providers of high-risk AI systems in accordance with Article 49(1)
  - Section B – Information to be submitted by providers of high-risk AI systems in accordance with Article 49(2)
  - Section C – Information to be submitted by deployers of high-risk AI systems in accordance with Article 49(3)
- Annex IX – Information to be submitted upon the registration of high-risk AI systems listed in Annex III in relation to testing in real world conditions in accordance with Article 60
- Annex X – Union legislative acts on large-scale IT systems in the area of Freedom, Security and Justice
- Annex XI – Technical documentation referred to in Article 53(1), point (a) – technical documentation for providers of general-purpose AI models
  - Section 1 – Information to be provided by all providers of general-purpose AI models
  - Section 2 – Additional information to be provided by providers of general-purpose AI models with systemic risk
- Annex XII – Transparency information referred to in Article 53(1), point (b) – technical documentation for providers of general-purpose AI models to downstream providers that integrate the model into their AI system
- Annex XIII – Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51
- Index
- About the authors
- About Globe Law and Business