AIGP

Table of Contents

Welcome

I. The Foundations of Artificial Intelligence Governance

  Introduction to the Foundations of AI Governance

  Section A: The Basic Elements of Artificial Intelligence and Machine Learning
    1. What is Artificial Intelligence?
       a. The Multiple Definitions of AI
       b. The Turing Test
       c. Common Elements
       d. AI as a Socio-Technical System
    2. The Subfields or Categories of AI
       a. A Roadmap to Understand AI Concepts
       b. Weak AI vs. General AI
    3. What is Machine Learning?
       a. Machine Learning Defined
       b. ML Models and Training
    4. Types of Machine Learning Techniques
       a. Supervised Learning
       b. Unsupervised Learning
       c. Semi-Supervised Learning
       d. Reinforcement Learning
    5. The Role of Data in Machine Learning
       a. Training, Validation, and Testing Data
       b. Quantity and Quality of Data
       c. The Types of Data
       d. Processing Data
    6. Specific Algorithms and Training Models and Other Techniques
       a. Supervised Learning Techniques
       b. Unsupervised Learning Techniques
       c. Federated Learning
       d. Other Techniques and Concepts
    7. Deep Learning and Neural Networks
    8. Additional Categorizations of AI/ML Models
       a. Discriminative vs. Generative Models
       b. Proprietary vs. Open-Source Models
       c. Foundation Models
          i. Transfer Learning
          ii. Fine-Tuning Foundation Models
          iii. Types of Foundation Models
       d. Multimodal Models
       e. Transformer Models
       f. Diffusion Models
    9. Other Models of AI
       a. Linear and Statistical Models
       b. Computer Vision and Speech Recognition
       c. Natural Language Processing
       d. Expert Systems
       e. Fuzzy Logic
       f. Robotics and Robotic Process Automation
    10. The OECD Framework for the Classification of AI Systems
       a. The Five Dimensions of AI Classification
    11. The AI Boom and Technology Infrastructure
       a. Technology Megatrends
       b. The AI Tech Stack Infrastructure
          i. Compute
          ii. Storage
          iii. Network
          iv. Software
          v. Observability and AI Monitoring
    Section I.A Review

  Section B: The Need for AI Governance
    1. Introduction to AI Governance
       a. What is AI Governance?
       b. Alignment of AI
    2. Risks and Harms Posed by AI
       a. “Safe” AI
       b. The Sources of AI Harms
       c. Risks, Threats, and Controls Defined
       d. Taxonomies of AI Harms
          i. CSET AI Harm Taxonomy of AIID
          ii. Sociotechnical Harms of Algorithmic Systems
          iii. MIT’s AI Risk Repository
       e. Calculating AI Risk
    3. The Impact of AI Risks
       a. Individual Harm
          i. How Individual Harms Occur
          ii. Where Individual Harms Occur
          iii. Privacy and Other Civil Liberties
          iv. Economic Opportunities
       b. Group Harm
       c. Organizational Harm
       d. Societal Harm
       e. Ecosystem Harm
    4. Unique Characteristics of AI that Require a Comprehensive Approach to Governance
       a. Complexity
       b. Opacity
       c. Autonomy
       d. Speed and Scale
       e. Potential for Harm or Misuse
       f. Data Dependency
       g. Probabilistic vs. Deterministic Output
    5. Common Principles of Responsible AI
       a. Fairness
       b. Safety, Reliability, and Robustness
       c. Privacy and Security
       d. Transparency and Explainability
       e. Accountability
       f. Human-Centricity
    Section I.B Review
    Knowledge Review #1

  Section C: Establishing and Communicating Organizational Expectations for AI Governance
    1. Developing an AI Governance Strategy
    2. Roles and Responsibilities of AI Governance Stakeholders
       a. Stakeholders
       b. Creating a Culture of Ethical and Responsible AI
    3. Cross-Functional Collaboration
       a. The Role of AI Researchers, Data Scientists, and Engineers
       b. The Role of Humanities and Social Sciences
       c. The Role of User Interface and User Experience Design
       d. Other Roles to Define
       e. Additional AI Actors
       f. Standing Up an AI Governance Body
    4. Establishing an AI Training and Awareness Program
       a. Communicating About AI Governance
       b. Training and Awareness
          i. Role-Based Training
          ii. Training vs. Awareness
          iii. Training and Awareness as a Communication Tool
          iv. Building a Training and Awareness Program
    5. Differentiating Approaches to AI Governance
       a. Governance Models
          i. Centralized Model
          ii. Distributed Model (a/k/a Local Model or Decentralized Model)
          iii. Hybrid Model (a/k/a Federated Model)
          iv. Advantages and Disadvantages
       b. Location of AI Governance
       c. AI Governance Program Maturity
    6. Differences Among AI Developers, Deployers, and Users from a Governance Perspective
       a. AI Developers
       b. AI Deployers
       c. AI Users
    Section I.C Review

  Section D: Establish Policies and Procedures to Apply Throughout the AI Life Cycle
    1. What is the AI Life Cycle?
    2. Oversight and Accountability Across All Stages of the AI Life Cycle
       a. Developing an AI Governance Framework
       b. Creating an Inventory of AI Applications and Algorithms
       c. Establishing AI Policies and Procedures
    3. Evaluate and Update Existing Data Privacy and Security Policies for AI
       a. Unique Privacy Risks of AI
       b. Unique Security Risks of AI
    4. Managing Third-Party Risk
       a. Assessing Third-Party Vendor Risk
       b. Choosing a Third-Party Vendor
       c. Vendor Contracts
       d. Managing Third-Party Risk from Downstream Use
    Section I.D Review
    Knowledge Review #2

II. The Laws, Standards, and Frameworks Applicable to AI

  Introduction to the Laws, Standards, and Frameworks Applicable to AI

  Section A: The Application of Existing Data Privacy Laws
    1. Introduction to Data Privacy Laws
       a. Types of Data Privacy Laws
       b. What do Data Privacy Laws Regulate?
          i. “Personal Data”
          ii. “Processing” Personal Data
          iii. The Roles in Data Processing
       c. The Role of Pseudonymization and Anonymization
          i. Pseudonymization of Data
          ii. Anonymization of Data
    2. Fair Information Practices
       a. What are Fair Information Practices?
          i. Individual Data Subject Rights
          ii. Organizational Management
       b. The OECD Privacy Guidelines
    3. Lawfulness, Notice, Choice, Consent, and Purpose Limitation Requirements
       a. Data Processing Principles and Lawfulness
       b. Privacy Notices
       c. Consent
          i. Types of Consent
          ii. When Consent is Not Needed (“No Option”)
          iii. The Importance of Consent
       d. Purpose Limitation
       e. Application to AI
    4. Data Minimization
    5. Privacy by Design
    6. The Obligations of Data Controllers
       a. Privacy Impact Assessments (PIAs)
       b. Third-Party Processors
       c. Cross-Border Data Transfers
       d. Data Subject Rights
          i. Data Subject Rights Under the GDPR
          ii. Automated Decision-Making (GDPR Art. 22)
       e. Security and Safeguards
       f. Incident Management
       g. Breach Notification
       h. Record Keeping
    7. Sensitive Personal Data and Special Categories of Personal Data
    Section II.A Review

  Section B: The Application of Other Existing Laws
    1. Intellectual Property Laws
       a. Ownership and Licensing of IP
       b. Copyright Laws
       c. The Status of Machines and Humans in IP Law
          i. Copyright
          ii. Patents
       d. Other Considerations Related to IP and the Use of AI
    2. Non-Discrimination Laws
       a. Employment Discrimination
          i. New York City Regulation
          ii. EEOC Guidance
       b. Credit and Lending Discrimination
          i. The Fair Credit Reporting Act
          ii. The Equal Credit Opportunity Act
          iii. SR 11-7
       c. Housing Discrimination
       d. Insurance Discrimination
       e. Healthcare Discrimination
    3. Consumer Protection Laws
       a. The United States FTC Act
          i. Algorithmic Disgorgement
          ii. AI-Specific Enforcement Actions
       b. Consumer Medical Tech in the U.S.
       c. OSHA Guidelines for Robotics
       d. E.U. Digital Services Act
    4. Product Safety and Liability Laws
       a. Liability Regimes: Fault-Based vs. Strict Liability
       b. The Difficulty of Proving Liability
       c. Potential Reforms
          i. European Reforms
          ii. U.S. Reforms
    Section II.B Review

  Section C: The European Union’s AI Act
    1. Introduction and Scope of the E.U. AI Act
       a. What the E.U. AI Act Applies to (“AI Systems”)
       b. Who the E.U. AI Act Applies to (“Providers” and “Deployers”)
       c. Territorial Scope
       d. Exemptions
    2. E.U. AI Act’s Risk Classification Framework
       a. Prohibited AI Practices: Unacceptable Risk
       b. High-Risk AI Systems
       c. Limited-Risk AI Systems
       d. Minimal-Risk AI Systems
       e. Ongoing Evaluations of Risk
    3. High-Risk AI System Requirements
       a. Risk Management Systems
       b. Data Governance Practices
       c. Technical Documentation
       d. Record-Keeping Requirements
       e. Transparency and Notification of Information to Downstream Deployers
       f. Human Oversight
       g. Accuracy, Robustness, and Cybersecurity
    4. Requirements Based on Organizational Context
       a. AI Literacy Requirements (Art. 4)
       b. Providers of High-Risk AI Systems
          i. Compliance with Section 2 and High-Risk AI Systems
          ii. Quality Management Systems
          iii. Documentation Keeping
          iv. Automated Logging
          v. Conformity Testing
          vi. Registration Obligations
          vii. Corrective Actions and Information Provision
       c. Authorized Representatives of Providers of High-Risk AI Systems
       d. Deployers of High-Risk AI Systems
       e. Importers of High-Risk AI Systems
       f. Distributors of High-Risk AI Systems
       g. Avoiding the Label of Provider
    5. Distinct Requirements for General Purpose AI Models
    6. Enforcement and Penalties
    Section II.C Review

  Section D: Industry Standards and Frameworks
    1. Introduction to Industry Standards and Frameworks
    2. Additional Global AI Regulation
       a. The Various Approaches to Regulation
       b. Other Global Laws and Guidelines
          i. Canada’s Artificial Intelligence and Data Act (AIDA)
          ii. Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems
          iii. Singapore’s Model AI Governance Framework
          iv. China’s Cyberspace Administration Guidelines
       c. U.S. Law and Guidance
    3. OECD Principles of Trustworthy AI
    4. Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (14110)
    5. NIST AI Risk Management Framework and Playbook
       a. Trustworthy AI Principles
       b. Core Functions
       c. Profiles
       d. NIST AI RMF Playbook
       e. Generative AI Profile
    6. NIST ARIA Program
       a. Three “Testbeds”
       b. Assessment and Measurement Layers
    7. ISO Standards
       a. Standard 22989
       b. Standard 42001
       c. Standard 31000
    Section II.D Review
    Knowledge Review #3

III. How to Govern AI Development

  Introduction to Governing AI Development

  Section A: Governing the Designing and Building of the AI Model
    1. Define the Business Context and Use Case of the AI Model
       a. Identifying the Business Problem, Objectives, and Requirements
       b. Use Case Analysis
       c. Scoping the Proposed Solution
       d. Stakeholder Input
    2. Perform or Review an Impact Assessment of the AI Model
       a. What is an AI Impact Assessment?
       b. How to Perform an AI Impact Assessment?
       c. Examples of AI Impact Assessment Methodologies
          i. Microsoft’s Responsible AI Impact Assessment Guide and Template
          ii. Canada’s Algorithm Impact Assessment Tool
          iii. The Council of Europe’s HUDERIA Methodology
    3. Identify Laws that Apply to the AI Model
    4. Design and Build the AI Model
       a. System Architecture and Model Selection
       b. Ensemble Methods
       c. Feature Engineering
    5. Managing Risks Related to Designing and Building the AI Model
    6. Document the Designing and Building Process
    Section III.A Review

  Section B: Governing the Collection and Use of Data in Training and Testing the AI Model
    1. Establishing a Data Strategy
       a. Data Collection Methods
       b. Data Structures
       c. Data Preparation
    2. Follow Requirements for Data Governance
       a. What is Data Governance?
       b. The Role of Notice and Consent
       c. Other Lawfulness Considerations
    3. Establish and Document Data Lineage and Provenance
    4. Plan and Perform Training and Testing of the AI Model
       a. Training the AI Model
       b. Testing the AI Model
          i. Types of Testing
    5. Risks During Training and Testing of the AI Model
       a. Differential Privacy
       b. Homomorphic Encryption
       c. Secure Multi-Party Computation
    6. Document the Training and Testing Process
    Section III.B Review

  Section C: Governing the Release, Monitoring, and Maintenance of the AI Model
    1. Assess Readiness and Prepare for Release into Production
       a. Model Cards
          i. What is Included?
          ii. Benchmarking
          iii. Tools for Producing Model Cards
       b. Conformity Requirements
    2. Continuous Monitoring From a Developer Perspective
    3. Conduct Periodic Activities to Assess the AI Model’s Performance, Reliability, and Safety
    4. Manage and Document Incidents, Issues, and Risks
       a. What is an AI Incident?
       b. AI Incident Response
    5. Understand Why Incidents Arise from AI Models
    6. Make Public Disclosures to Meet Transparency Obligations
    Section III.C Review
    Knowledge Review #4

IV. How to Govern AI Deployment and Use

  Introduction to Governing AI Deployment and Use

  Section A: Evaluate Key Factors and Risks Relevant to the Decision to Deploy the AI Model
    1. Understand the Context of AI Use Cases
       a. Understanding the AI System
       b. Understanding How the AI System Will Be Used
       c. Operational and Business Risks of Deploying an AI System
    2. Performing a Readiness Assessment
    3. AI Deployment Options
       a. The Process of Deployment
       b. Types of Deployment Environments
    4. Improving Performance and Fit
       a. Retrieval Augmented Generation
       b. Prompt Engineering
    Section IV.A Review

  Section B: Perform Key Activities to Assess the AI Model
    1. Perform or Review an AI Impact Assessment on the Selected AI Model
    2. Identify Laws that Apply to the AI System
    3. Identify and Evaluate Key Terms and Risks in the Vendor or Open-Source Agreement
    4. Deploying a Proprietary Model
    Section IV.B Review

  Section C: Govern the Deployment and Use of the AI Model
    1. Apply the Policies, Procedures, Best Practices, and Ethical Considerations to the Deployment of an AI Model
       a. Optionality and Contestability
       b. Human Oversight
       c. AI Governance Automation
    2. Continuous Monitoring and Establishing a Regular Schedule for Maintenance, Updates, and Retraining
       a. Monitoring the AI Model Itself
       b. Monitoring for Security Risks
       c. Ongoing Maintenance
    3. Conduct Periodic Activities to Assess the AI Model’s Performance, Reliability, and Safety
       a. Challenger Modeling
       b. Audits
          i. Types of Audits
          ii. The Audit Life Cycle
       c. Red Teaming
       d. Threat Modeling
       e. Bug Bounties
    4. Document Incidents, Issues, Risks, and Post-Market Monitoring Plans
       a. Incident Response Planning
       b. Incident Response
       c. Documenting the Incident
       d. Incident Follow-Up
    5. Forecast and Reduce Risks of Secondary or Unintended Uses and Downstream Harms
    6. Establish External Communications Plans
       a. System Cards
       b. What Must be Disclosed?
       c. User Interface and User Experience (UI/UX) Considerations
       d. Acceptable Use Policies
       e. Communication with Developers
    7. Create and Implement a Policy and Controls to Deactivate or Localize an AI Model as Necessary
    Section IV.C Review
    Knowledge Review #5

Conclusion

Full Exam #1

Full Exam #2

What is Artificial Intelligence?

When we think of the term “artificial” intelligence, it raises the question: an artificial version of what? The answer, of course, is an artificial form of human or “natural” intelligence. There are many different definitions of what constitutes human intelligence. Some include the ability to problem-solve, learn, think, or engage in abstraction. Others include components of emotional intelligence, creativity, wisdom, or even morality. Because it is difficult to arrive at one comprehensive definition of natural intelligence, it can be equally challenging to define “artificial” intelligence.

With that said, the notion of artificial intelligence is based on a relatively simple premise. As stated by John McCarthy, a pioneer in the field, and his co-authors: “Every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it.”1

a. The Multiple Definitions of AI

There is no single definition of Artificial Intelligence (AI) that applies in all contexts. It is worth starting, however, with how the IAPP itself defines AI in its document titled Key Terms for AI Governance:


Artificial intelligence is a broad term used to describe an engineered system that uses various computational techniques to perform or automate tasks. This may include techniques, such as machine learning, in which machines learn from experience, adjusting to new input data and potentially performing tasks previously done by humans. More specifically, it is a field of computer science dedicated to simulating intelligent behavior in computers. It may include automated decision-making.2

As this definition itself recognizes, AI is a broad term that is subject to many different meanings. It can be defined in terms of the tasks it seeks to perform, or it could be defined in terms of the academic discipline from which it originates.

The European Union’s Artificial Intelligence Regulation—commonly called the E.U. AI Act, which we will cover in much further detail in Section II.C—defines AI from a systems perspective. An AI system is defined under the E.U. AI Act as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”3

The E.U. AI Act’s definition is rather complex. It may be easier to think of the term AI system, as so defined, as having two primary components: (1) the system must operate with varying levels of autonomy; and (2) it must infer from its input how to generate outputs that can influence physical or virtual environments. As set forth in the Recitals to the E.U. AI Act, this definition is “based on key characteristics . . . that distinguish [A.I.] from simpler traditional software systems or programming approaches.”4 Therefore, the intent is that this definition does not apply to “systems that are based on the rules defined solely by natural persons to automatically execute operations,” such as typical software algorithms.5 Accordingly, “[a] key characteristic of AI systems is their capability to infer.”6
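The Recitals’ distinction can be made concrete with a short sketch (an illustrative contrast of our own, not an official test; the function names are hypothetical). The first function below executes pricing rules written entirely by a person, while the second infers its own parameters from example data and only then generates predictions:

```python
def rule_based_price(quantity: int) -> float:
    """Rules defined solely by a natural person -- not an 'AI system'."""
    if quantity >= 100:
        return quantity * 0.80   # bulk discount, a hand-written rule
    return quantity * 1.00


def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Infers a slope and intercept from examples (ordinary least squares).
    The behavior comes from the data, not from rules an author wrote down."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x


# The fitted model generates outputs (predictions) inferred from its input data.
slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])


def predict(x: float) -> float:
    return slope * x + intercept
```

The contrast is deliberately simple: even this tiny least-squares fit exhibits the “capability to infer” that the Recitals treat as the key characteristic, whereas the rule-based function never will, no matter how many rules it contains.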

b. The Turing Test

From a conceptual or theoretical perspective, one should be able to distinguish AI from more general software applications by subjecting the system to some form of testing. The Turing Test, named after famed computer scientist Alan Turing, is one means of testing a “machine’s ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human.”7

As originally formulated, the Turing Test asked the question whether a human would be able to differentiate between a computer-generated response and a human response.8 If a human cannot differentiate between the two responses, the computer-generated response would be considered AI. Put differently, AI will be able to trick a human into thinking that it is human.

The Turing Test, which Turing proposed in 1950, was originally called the “imitation game.” In the game, a human evaluator judges natural-language conversations with two unseen partners, one human and one machine. Initially, the test was limited to written text, but it has since been adopted as a more general test of machine intelligence.

Turing Test

The Turing Test can be thought of itself as another way to define the term artificial intelligence. Is the computer-generated answer distinguishable from a human? If not, then it is AI under this more theoretical definition.
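The setup of the imitation game can be sketched as a toy simulation (purely illustrative and of our own devising; real evaluations involve free-form conversation, not canned answers):

```python
import random


def human_responder(question: str) -> str:
    return "I'd say it depends on the weather, honestly."


def machine_responder(question: str) -> str:
    # A perfectly imitative machine: its answer matches the human's.
    return "I'd say it depends on the weather, honestly."


def imitation_game(evaluator, rounds: int = 1000) -> float:
    """Fraction of rounds in which the evaluator correctly picked the machine.
    A score near 0.5 means the machine is indistinguishable from the human,
    i.e., it 'passes' this toy version of the test."""
    correct = 0
    for _ in range(rounds):
        responders = [("human", human_responder), ("machine", machine_responder)]
        random.shuffle(responders)  # evaluator doesn't know which is which
        answers = [fn("What do you like about summer?") for _, fn in responders]
        guess = evaluator(answers)  # evaluator picks index 0 or 1
        if responders[guess][0] == "machine":
            correct += 1
    return correct / rounds


# With identical answers, the evaluator can do no better than chance.
score = imitation_game(lambda answers: random.randrange(2))
```

Because the two responders' answers are indistinguishable, the evaluator's accuracy hovers around 50%, which is exactly the condition under which, in this theoretical definition, the machine counts as AI.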


EXPLANATORY NOTE: Many consider the beginning of AI to be Alan Turing’s imitation game. The term “artificial intelligence” was not officially coined, however, until 1956 as part of the Dartmouth Summer Research Project on AI (a/k/a the “Dartmouth Conference”). This project brought together researchers from multiple distinct fields of study to explore the possibilities of intelligent machines.

The development of AI has seen peaks and valleys since the Dartmouth Conference gave birth to the unified field of AI research. These are commonly referred to as AI summers (fast developments) and winters (intense skepticism). Below is a brief timeline:

  • First AI Summer (mid-1950s to mid-1970s)

  • First AI Winter (mid-1970s to mid-1980s)

  • Second AI Summer (mid-1980s to late-1980s)

  • Second AI Winter (late-1980s to late-1990s)

  • Renaissance and the Era of Big Data (late-1990s to 2011)

  • The AI Boom (2011 to Present)

The most recent version of the AIGP Body of Knowledge indicates that the history of AI development is no longer tested on the AIGP exam. Having some basic background in the history of AI, however, is helpful for understanding its development and what it is from a conceptual perspective. Therefore, we have continued to include this information here.

c. Common Elements

There are several key elements that form the basis for nearly all definitions of AI. These elements help differentiate AI from more simplistic, traditional software systems. These elements include: (1) technology; (2) automation; (3) human involvement; and (4) the expected output.

The exact contours of each of these four elements are what vary from one definition to the next. Under most definitions, the technology component is defined in terms of an engineered or machine-based system, or as a logic, knowledge, or learning algorithm. Definitions also include varying levels of automation, but any definition will make clear that the system can act on its own or respond dynamically to inputs or the environment. The role of humans is typically defined in terms of setting the objectives of the system or providing the input or data to the system, though human involvement can also include training the system. The output of an AI system is commonly defined to include content, predictions, recommendations, or decisions.

Consider the E.U. AI Act’s definition of “AI systems” set forth above. It can be broken down according to these four elements as follows:

  • Technology: A “machine-based system”

  • Automation: “[D]esigned to operate with varying levels of autonomy”

  • Human Involvement: “[E]xhibit[s] adaptiveness after deployment and that, for explicit or implicit objectives”

  • Expected Output: “[I]nfers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”
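For practitioners cataloging definitions, the four common elements lend themselves to a simple structured record. The sketch below is our own note-taking structure (not part of the AIGP materials), populated with the E.U. AI Act mapping above:

```python
from dataclasses import dataclass


@dataclass
class AIDefinitionElements:
    """The four elements common to nearly all definitions of AI."""
    technology: str
    automation: str
    human_involvement: str
    expected_output: str


# The E.U. AI Act's "AI system" definition, broken out by element.
eu_ai_act = AIDefinitionElements(
    technology="a machine-based system",
    automation="designed to operate with varying levels of autonomy",
    human_involvement="explicit or implicit objectives; adaptiveness after deployment",
    expected_output="predictions, content, recommendations, or decisions",
)
```

The same record could be filled in for the IAPP definition or any other, making it easy to compare where definitions place their emphasis.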

d. AI as a Socio-Technical System

As a final point, it is worth noting that AI is inherently a socio-technical system, which is to say that its use involves interactions between people and technology, each of which helps shape the other.9 Put differently, we as humans shape technology, and, at the same time, technology shapes us as a society. The term socio-technical system refers to this cycle.

Socio-Technical Systems

Because of this cycle, it is important to consider all relevant stakeholders when developing, designing, deploying, or imposing governance on AI systems. Bringing in stakeholders who can understand the impacts an AI system has on society is key. By way of example, AI systems should be developed by cross-functional teams that draw on the social sciences, user experience (UX) design, and other disciplines. Development of AI should not be left solely to the computer scientists and software engineers who do the actual building; rather, a holistic approach that can anticipate downstream impacts should be used to shape AI development. We will return to this point in Module I.C.3.

Key Points

  • Artificial intelligence is often defined in terms of its counterpart – human or “natural” intelligence
  • Artificial Intelligence (IAPP Definition): A broad term used to describe an engineered system that uses various computational techniques to perform or automate tasks. This may include techniques, such as machine learning, in which machines learn from experience, adjusting to new input data and potentially performing tasks previously done by humans. More specifically, it is a field of computer science dedicated to simulating intelligent behavior in computers. It may include automated decision-making.
  • The E.U. AI Act defines AI from a systems perspective – (1) a system that operates with varying levels of autonomy; and (2) that infers from inputs how to generate outputs that influence the environment
  • Turing Test: A method of testing a “machine’s ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human”
      • Originally called the “imitation game”
      • Often considered the beginning of AI
  • The term “artificial intelligence” was originally coined as part of the Dartmouth Conference (1956)
  • Common elements of various AI definitions include: (1) technology; (2) automation; (3) human involvement; and (4) expected output
  • Socio-Technical System: A system whose use involves interactions between people and technology where each influences the other
      • AI is an example of a socio-technical system
      • The consequence is that social scientists, UI/UX designers, and other stakeholders should be brought into the AI design, development, and deployment process to anticipate downstream effects
Sources


1. John McCarthy, et al., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (Aug. 31, 1955), available at https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.

2. Artificial Intelligence, IAPP, Key Terms for AI Governance, https://iapp.org/resources/article/key-terms-for-ai-governance/.

3. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), 2024 OJ (L 144) 1, Art. 3(1).

4. Artificial Intelligence Act, Recital 12.

5. Id.

6. Id.

7. Turing Test, IAPP, Key Terms for AI Governance, https://iapp.org/resources/article/key-terms-for-ai-governance/.

8. Id.

9. Nat’l Institute of Standards & Tech., U.S. Dep’t of Commerce, Artificial Intelligence Risk Management Framework (AI RMF 1.0) at 1, NIST AI 100-1 (Jan. 2023), available at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
