EU AI Act
- New
A Commentary
Hardcover, English, 2025
By Michaela Nebel, Lukas Feiler, Nikolaus Forgó
2 979 kr
Forthcoming
This invaluable commentary on the EU Artificial Intelligence Act (EU AI Act) offers a thorough analysis of this groundbreaking legislation. As AI technologies become more integrated into society, it is imperative to address the potential risks and ethical concerns they bring.

Readers will quickly gain a sound foundational understanding of the EU AI Act in the introductory chapter, which provides a comprehensive overview of the entire Act. The following chapters offer an insightful examination of each of the Act’s articles by renowned experts in the field. Lukas Feiler, Nikolaus Forgó and Michaela Nebel bring diverse perspectives and deep knowledge to the discussion, making this a valuable reference for anyone involved in AI regulation and compliance.

Businesses seeking initial guidance and pragmatic solutions on how to navigate the EU AI Act will find this book particularly useful. It is also an indispensable tool for lawyers, judges, and other legal professionals who need to navigate the complexities of AI-related regulations.
Product information
- Publication date: 2025-10-31
- Dimensions: 160 x 240 mm
- Language: English
- Number of pages: 500
- Publisher: Globe Law and Business Ltd
- EAN: 9781837231065
Table of contents
- List of abbreviations
- List of sources cited in abbreviated form
- List of recitals of the AI Act

An introduction to the AI Act
1. The scope of the AI Act
1.1 Material scope – What types of AI are covered?
1.2 Personal scope – Who does the AI Act apply to?
1.3 Territorial scope – Where does the AI Act apply?
1.4 Temporal scope – From when does the AI Act apply?
2. The AI Act as an instrument of product regulation
2.1 An overview of European product regulation
2.2 The role of harmonized standards and common specifications
2.3 External conformity assessment bodies and their accreditation and notification
2.4 Relation to other harmonization legislation
3. Risk-based regulation of AI systems and AI models
3.1 Prohibited AI systems
3.2 High-risk AI systems
3.3 GenAI and certain biometric AI systems that are subject to special transparency rules
3.4 Other AI systems
3.5 General-purpose AI models
4. An overview of the obligations under the AI Act
4.1 Obligations of providers
4.2 Obligations of importers
4.3 Obligations of distributors
4.4 Obligations of deployers
4.5 Obligations of authorised representatives
5. Innovation-promoting measures
5.1 AI regulatory sandboxes
5.2 Testing in real-world conditions
6. Enforcement by public authorities
6.1 Market surveillance regarding AI systems
6.2 The AI Office as supervisory authority for providers of general-purpose AI models
6.3 Administrative fines
7. Liability and private enforcement

AI Act text and commentary
Chapter I – General provisions
Article 1 Subject matter
Article 2 Scope
Article 3 Definitions
Article 4 AI literacy
Chapter II – Prohibited AI practices
Article 5 Prohibited AI practices
Chapter III – High-risk AI systems
Section 1 – Classification of AI systems as high-risk
Article 6 Classification rules for high-risk AI systems
Article 7 Amendments to Annex III
Section 2 – Requirements for high-risk AI systems
Article 8 Compliance with the requirements
Article 9 Risk management system
Article 10 Data and data governance
Article 11 Technical documentation
Article 12 Record-keeping
Article 13 Transparency and provision of information to deployers
Article 14 Human oversight
Article 15 Accuracy, robustness and cybersecurity
Section 3 – Obligations of providers and deployers of high-risk AI systems and other parties
Article 16 Obligations of providers of high-risk AI systems
Article 17 Quality management system
Article 18 Documentation keeping
Article 19 Automatically generated logs
Article 20 Corrective actions and duty of information
Article 21 Cooperation with competent authorities
Article 22 Authorised representatives of providers of high-risk AI systems
Article 23 Obligations of importers
Article 24 Obligations of distributors
Article 25 Responsibilities along the AI value chain
Article 26 Obligations of deployers of high-risk AI systems
Article 27 Fundamental rights impact assessment for high-risk AI systems
Section 4 – Notifying authorities and notified bodies
Article 28 Notifying authorities
Article 29 Application of a conformity assessment body for notification
Article 30 Notification procedure
Article 31 Requirements relating to notified bodies
Article 32 Presumption of conformity with requirements relating to notified bodies
Article 33 Subsidiaries of notified bodies and subcontracting
Article 34 Operational obligations of notified bodies
Article 35 Identification numbers and lists of notified bodies
Article 36 Changes to notifications
Article 37 Challenge to the competence of notified bodies
Article 38 Coordination of notified bodies
Article 39 Conformity assessment bodies of third countries
Section 5 – Standards, conformity assessment, certificates, registration
Article 40 Harmonised standards and standardisation deliverables
Article 41 Common specifications
Article 42 Presumption of conformity with certain requirements
Article 43 Conformity assessment
Article 44 Certificates
Article 45 Information obligations of notified bodies
Article 46 Derogation from conformity assessment procedure
Article 47 EU declaration of conformity
Article 48 CE marking
Article 49 Registration
Chapter IV – Transparency obligations for providers and deployers of certain AI systems
Article 50 Transparency obligations for providers and deployers of certain AI systems
Chapter V – General-purpose AI models
Section 1 – Classification rules
Article 51 Classification of general-purpose AI models as general-purpose AI models with systemic risk
Article 52 Procedure
Section 2 – Obligations for providers of general-purpose AI models
Article 53 Obligations for providers of general-purpose AI models
Article 54 Authorised representatives of providers of general-purpose AI models
Section 3 – Obligations of providers of general-purpose AI models with systemic risk
Article 55 Obligations of providers of general-purpose AI models with systemic risk
Article 56 Codes of practice
Chapter VI – Measures in support of innovation
Article 57 AI regulatory sandboxes
Article 58 Detailed arrangements for, and functioning of, AI regulatory sandboxes
Article 59 Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox
Article 60 Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes
Article 61 Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes
Article 62 Measures for providers and deployers, in particular SMEs, including start-ups
Article 63 Derogations for specific operators
Chapter VII – Governance
Section 1 – Governance at Union level
Article 64 AI Office
Article 65 Establishment and structure of the European Artificial Intelligence Board
Article 66 Tasks of the Board
Article 67 Advisory forum
Article 68 Scientific panel of independent experts
Article 69 Access to the pool of experts by the Member States
Section 2 – National competent authorities
Article 70 Designation of national competent authorities and single points of contact
Chapter VIII – EU database for high-risk AI systems
Article 71 EU database for high-risk AI systems listed in Annex III
Chapter IX – Post-market monitoring, information sharing and market surveillance
Section 1 – Post-market monitoring
Article 72 Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems
Section 2 – Sharing of information on serious incidents
Article 73 Reporting of serious incidents
Section 3 – Enforcement
Article 74 Market surveillance and control of AI systems in the Union market
Article 75 Mutual assistance, market surveillance and control of general-purpose AI systems
Article 76 Supervision of testing in real world conditions by market surveillance authorities
Article 77 Powers of authorities protecting fundamental rights
Article 78 Confidentiality
Article 79 Procedure at national level for dealing with AI systems presenting a risk
Article 80 Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III
Article 81 Union safeguard procedure
Article 82 Compliant AI systems which present a risk
Article 83 Formal non-compliance
Article 84 Union AI testing support structures
Section 4 – Remedies
Article 85 Right to lodge a complaint with a market surveillance authority
Article 86 Right to explanation of individual decision-making
Article 87 Reporting of infringements and protection of reporting persons
Section 5 – Supervision, investigation, enforcement and monitoring in respect of providers of general-purpose AI models
Article 88 Enforcement of the obligations of providers of general-purpose AI models
Article 89 Monitoring actions
Article 90 Alerts of systemic risks by the scientific panel
Article 91 Power to request documentation and information
Article 92 Power to conduct evaluations
Article 93 Power to request measures
Article 94 Procedural rights of economic operators of the general-purpose AI model
Chapter X – Codes of conduct and guidelines
Article 95 Codes of conduct for voluntary application of specific requirements
Article 96 Guidelines from the Commission on the implementation of this Regulation
Chapter XI – Delegation of power and committee procedure
Article 97 Exercise of the delegation
Article 98 Committee procedure
Chapter XII – Penalties
Article 99 Penalties
Article 100 Administrative fines on Union institutions, bodies, offices and agencies
Article 101 Fines for providers of general-purpose AI models
Chapter XIII – Final provisions
Article 102 Amendment to Regulation (EC) No 300/2008
Article 103 Amendment to Regulation (EU) No 167/2013
Article 104 Amendment to Regulation (EU) No 168/2013
Article 105 Amendment to Directive 2014/90/EU
Article 106 Amendment to Directive (EU) 2016/797
Article 107 Amendment to Regulation (EU) 2018/858
Article 108 Amendments to Regulation (EU) 2018/1139
Article 109 Amendment to Regulation (EU) 2019/2144
Article 110 Amendment to Directive (EU) 2020/1828
Article 111 AI systems already placed on the market or put into service and general-purpose AI models already placed on the market
Article 112 Evaluation and review
Article 113 Entry into force and application
ANNEX I – List of Union harmonisation legislation
ANNEX II – List of criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii)
ANNEX III – High-risk AI systems referred to in Article 6(2)
ANNEX IV – Technical documentation referred to in Article 11(1)
ANNEX V – EU declaration of conformity
ANNEX VI – Conformity assessment procedure based on internal control
ANNEX VII – Conformity based on an assessment of the quality management system and an assessment of the technical documentation
ANNEX VIII – Information to be submitted upon the registration of high-risk AI systems in accordance with Article 49
Section A Information to be submitted by providers of high-risk AI systems in accordance with Article 49(1)
Section B Information to be submitted by providers of high-risk AI systems in accordance with Article 49(2)
Section C Information to be submitted by deployers of high-risk AI systems in accordance with Article 49(3)
ANNEX IX – Information to be submitted upon the registration of high-risk AI systems listed in Annex III in relation to testing in real world conditions in accordance with Article 60
ANNEX X – Union legislative acts on large-scale IT systems in the area of Freedom, Security and Justice
ANNEX XI – Technical documentation referred to in Article 53(1), point (a) – technical documentation for providers of general-purpose AI models
Section 1 Information to be provided by all providers of general-purpose AI models
Section 2 Additional information to be provided by providers of general-purpose AI models with systemic risk
ANNEX XII – Transparency information referred to in Article 53(1), point (b) – technical documentation for providers of general-purpose AI models to downstream providers that integrate the model into their AI system
ANNEX XIII – Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51
Index