© 2023 National Association of Insurance Commissioners 1
Draft: 12/2/2023
Adopted by Executive (EX) Committee and Plenary, December 4, 2023
Adopted by the Innovation, Cybersecurity, and Technology (H) Committee, December 1, 2023
NAIC MODEL BULLETIN:
USE OF ARTIFICIAL INTELLIGENCE SYSTEMS BY INSURERS
TO: All Insurers Licensed to Do Business In (Insert Name of Jurisdiction) (“Insurers”)
FROM: [Department/Commissioner]
DATE: [Insert]
RE: The Use of Artificial Intelligence Systems in Insurance
This bulletin is issued by the [] (Department) to remind all Insurers that hold certificates of authority to do
business in the state that decisions or actions impacting consumers that are made or supported by advanced
analytical and computational technologies, including Artificial Intelligence (AI) Systems (as defined below), must
comply with all applicable insurance laws and regulations. This includes those laws that address unfair trade
practices and unfair discrimination. This bulletin sets forth the Department’s expectations as to how Insurers will
govern the development/acquisition and use of certain AI technologies, including the AI Systems described herein.
This bulletin also advises Insurers of the type of information and documentation that the Department may request
during an investigation or examination of any Insurer regarding its use of such technologies and AI Systems.
SECTION 1:
INTRODUCTION, BACKGROUND, AND LEGISLATIVE AUTHORITY
Background
AI is transforming the insurance industry. AI techniques are deployed across all stages of the insurance
life cycle, including product development, marketing, sales and distribution, underwriting and pricing, policy
servicing, claim management, and fraud detection.
AI may facilitate the development of innovative products, improve consumer interface and service,
simplify and automate processes, and promote efficiency and accuracy. However, AI, including AI Systems, can
present unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability,
and lack of transparency and explainability. Insurers should take actions to minimize these risks.
The Department encourages the development and use of innovation and AI Systems that contribute to
safe and stable insurance markets. However, the Department expects that decisions made and actions taken by
Insurers using AI Systems will comply with all applicable federal and state laws and regulations.
The Department recognizes the Principles of Artificial Intelligence that the NAIC adopted in 2020 as an
appropriate source of guidance for Insurers as they develop and use AI Systems. Those principles emphasize the
importance of the fairness and ethical use of AI; accountability; compliance with state laws and regulations;
transparency; and a safe, secure, fair, and robust system. These fundamental principles should guide Insurers in
their development and use of AI Systems and underlie the expectations set forth in this bulletin.
Legislative Authority
The regulatory expectations and oversight considerations set forth in Section 3 and Section 4 of this
bulletin rely on the following laws and regulations:
Unfair Trade Practices Model Act (#880): The Unfair Trade Practices Act [insert citation to state statute
or regulation corresponding to Model #880] (UTPA) regulates trade practices in insurance by: 1) defining
practices that constitute unfair methods of competition or unfair or deceptive acts and practices; and 2)
prohibiting the trade practices so defined or determined.
Unfair Claims Settlement Practices Model Act (#900): The Unfair Claims Settlement Practices Act, [insert
citation to state statute or regulation corresponding to Model #900] (UCSPA), sets forth standards for the
investigation and disposition of claims arising under policies or certificates of insurance issued to residents
of [insert state].
Actions taken by Insurers in the state must not violate the UTPA or the UCSPA, regardless of the methods
the Insurer used to determine or support its actions. As discussed below, Insurers are expected to adopt practices,
including governance frameworks and risk management protocols, that are designed to ensure that the use of AI
Systems does not result in: 1) unfair trade practices, as defined in []; or 2) unfair claims settlement practices, as
defined in [].
Corporate Governance Annual Disclosure Model Act (#305): The Corporate Governance Annual Disclosure
Act [insert citation to state statute or regulation corresponding to Model #305] (CGAD) requires Insurers
to report on governance practices and to provide a summary of the Insurer’s corporate governance
structure, policies, and practices. The content, form, and filing requirements for CGAD information are set
forth in the Corporate Governance Annual Disclosure Model Regulation (#306) [insert citation to state
statute or regulation corresponding to Model #306] (CGAD-R).
The requirements of CGAD and CGAD-R apply to elements of the Insurer’s corporate governance
framework that address the Insurer’s use of AI Systems to support actions and decisions that impact consumers.
Property and Casualty Model Rating Law (#1780): The Property and Casualty Model Rating Law, [insert
citation to state statute or regulation corresponding to the Model #1780], requires that property/casualty
(P/C) insurance rates not be excessive, inadequate, or unfairly discriminatory.
The requirements of [] apply regardless of the methodology that the Insurer used to develop rates, rating
rules, and rating plans subject to those provisions. That means that an Insurer is responsible for assuring that
rates, rating rules, and rating plans that are developed using AI techniques and Predictive Models that rely on data
and Machine Learning do not result in excessive, inadequate, or unfairly discriminatory insurance rates with
respect to all forms of casualty insurance, including fidelity, surety, and guaranty bond, and to all forms of
property insurance, including fire, marine, and inland marine insurance, and any combination of any of the
foregoing.
Market Conduct Surveillance Model Law (#693): The Market Conduct Surveillance Model Law [insert
citation to state statute or regulation corresponding to Model #693] establishes the framework pursuant
to which the Department conducts market conduct actions. These comprise the full range of
activities that the Department may initiate to assess and address the market practices of Insurers,
beginning with market analysis and extending to targeted examinations. Market conduct actions are
separate from, but may result from, individual complaints made by consumers asserting illegal practices
by Insurers.
An Insurer’s conduct in the state, including its use of AI Systems to make or support actions and decisions
that impact consumers, is subject to investigation, including market conduct actions. Section 4 of this bulletin
provides guidance on the kinds of information and documents that the Department may request in the context of
an AI-focused investigation, including a market conduct action.
SECTION 2: DEFINITIONS
For the purposes of this bulletin, the following terms are defined:1
“Adverse Consumer Outcome” refers to a decision by an Insurer that is subject to insurance regulatory standards
enforced by the Department that adversely impacts the consumer in a manner that violates those standards.
“Algorithm” means a clearly specified mathematical process for computation; a set of rules that, if followed, will
give a prescribed result.
“AI System” is a machine-based system that can, for a given set of objectives, generate outputs such as
predictions, recommendations, content (such as text, images, videos, or sounds), or other output influencing
decisions made in real or virtual environments. AI Systems are designed to operate with varying levels of
autonomy.
“Artificial Intelligence (AI)” refers to a branch of computer science that uses data processing systems that perform
functions normally associated with human intelligence, such as reasoning, learning, and self-improvement, or the
capability of a device to perform functions that are normally associated with human intelligence such as reasoning,
learning, and self-improvement. This definition considers machine learning to be a subset of artificial intelligence.
“Degree of Potential Harm to Consumers” refers to the severity of adverse economic impact that a consumer
might experience as a result of an Adverse Consumer Outcome.
“Generative Artificial Intelligence (Generative AI)” refers to a class of AI Systems that generate content in the
form of data, text, images, sounds, or video, that is similar to, but not a direct copy of, pre-existing data or content.
“Machine Learning (ML)” refers to a field within artificial intelligence that focuses on the ability of computers to
learn from provided data without being explicitly programmed.
“Model Drift” refers to the decay of a model’s performance over time arising from underlying changes such as
the definitions, distributions, and/or statistical properties between the data used to train the model and the data
on which it is deployed.
“Predictive Model” refers to the mining of historic data using algorithms and/or machine learning to identify
patterns and predict outcomes that can be used to make or support the making of decisions.
“Third Party” for purposes of this bulletin means an organization other than the Insurer that provides services,
data, or other resources related to AI.
1 Drafting note: Individual states may have adopted definitions for terms that are included in the model bulletin that may be
different from the definitions set forth herein.
SECTION 3: REGULATORY GUIDANCE AND EXPECTATIONS
Decisions subject to regulatory oversight that are made by Insurers using AI Systems must comply with
the legal and regulatory standards that apply to those decisions, including unfair trade practice laws. These
standards require, at a minimum, that decisions made by Insurers are not inaccurate, arbitrary, capricious, or
unfairly discriminatory. Compliance with these standards is required regardless of the tools and methods Insurers
use to make such decisions. However, because, in the absence of proper controls, AI has the potential to increase
the risk of inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes for consumers, it is important that
Insurers adopt and implement controls specifically related to their use of AI that are designed to mitigate the risk
of Adverse Consumer Outcomes.
Consistent therewith, all Insurers authorized to do business in this state are expected to develop,
implement, and maintain a written program (an “AIS Program”) for the responsible use of AI Systems that make
or support decisions related to regulated insurance practices. The AIS Program should be designed to mitigate the
risk of Adverse Consumer Outcomes, including, at a minimum, outcomes that would violate the statutory provisions
set forth in Section 1 of this bulletin.
The Department recognizes that robust governance, risk management controls, and internal audit
functions play a core role in mitigating the risk that decisions driven by AI Systems will violate unfair trade practice
laws and other applicable existing legal standards. The Department also encourages the development and use of
verification and testing methods to identify errors and bias in Predictive Models and AI Systems, as well as the
potential for unfair discrimination in the decisions and outcomes resulting from the use of Predictive Models and
AI Systems.
The controls and processes that an Insurer adopts and implements as part of its AIS Program should be
reflective of, and commensurate with, the Insurer’s own assessment of the degree and nature of risk posed to
consumers by the AI Systems that it uses, considering: (i) the nature of the decisions being made, informed, or
supported using the AI System; (ii) the type and Degree of Potential Harm to Consumers resulting from the use of
AI Systems; (iii) the extent to which humans are involved in the final decision-making process; (iv) the transparency
and explainability of outcomes to the impacted consumer; and (v) the extent and scope of the insurer’s use or
reliance on data, Predictive Models, and AI Systems from third parties. Similarly, controls and processes should
be commensurate with both the risk of Adverse Consumer Outcomes and the Degree of Potential Harm to
Consumers.
As discussed in Section 4, the decisions made as a result of an Insurer’s use of AI Systems are subject to
the Department’s examination to determine that the reliance on AI Systems is compliant with all applicable
existing legal standards governing the conduct of the Insurer.
AIS Program Guidelines
1.0 General Guidelines
1.1 The AIS Program should be designed to mitigate the risk that the Insurer’s use of an AI System will
result in Adverse Consumer Outcomes.
1.2 The AIS Program should address governance, risk management controls, and internal audit
functions.
1.3 The AIS Program should vest responsibility for the development, implementation, monitoring, and
oversight of the AIS Program and for setting the Insurer’s strategy for AI Systems with senior management
accountable to the board or an appropriate committee of the board.
1.4 The AIS Program should be tailored to and proportionate with the Insurer’s use and reliance on
AI and AI Systems. Controls and procedures should be focused on the mitigation of Adverse Consumer Outcomes,
and the scope of the controls and procedures applicable to a given AI System use case should reflect and align
with the Degree of Potential Harm to Consumers with respect to that use case.
1.5 The AIS Program may be independent of or part of the Insurer’s existing Enterprise Risk
Management (ERM) program. The AIS Program may adopt, incorporate, or rely upon, in whole or in part, a
framework or standards developed by an official third-party standards organization, such as the National Institute
of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, Version 1.0.
1.6 The AIS Program should address the use of AI Systems across the insurance life cycle, including
areas such as product development and design, marketing, use, underwriting, rating and pricing, case
management, claim administration and payment, and fraud detection.
1.7 The AIS Program should address all phases of an AI System’s life cycle, including design,
development, validation, implementation (both systems and business), use, ongoing monitoring, updating, and
retirement.
1.8 The AIS Program should address the AI Systems used with respect to regulated insurance practices
whether developed by the Insurer or a third-party vendor.
1.9 The AIS Program should include processes and procedures providing notice to impacted
consumers that AI Systems are in use and that provide access to appropriate levels of information based on the phase
of the insurance life cycle in which the AI Systems are being used.
2.0 Governance
The AIS Program should include a governance framework for the oversight of AI Systems used by the
Insurer. Governance should prioritize transparency, fairness, and accountability in the design and implementation
of the AI Systems, recognizing that proprietary and trade secret information must be protected. An Insurer may
consider adopting new internal governance structures or rely on the Insurer’s existing governance structures;
however, in developing its governance framework, the Insurer should consider addressing the following items:
2.1 The policies, processes, and procedures, including risk management and internal controls, to be
followed at each stage of an AI System life cycle, from proposed development to retirement.
2.2 The requirements adopted by the Insurer to document compliance with the AIS Program policies,
processes, procedures, and standards. Documentation requirements should be developed with Section 4 in mind.
2.3 The Insurer’s internal AI System governance accountability structure, such as:
a) The formation of centralized, federated, or otherwise constituted committees comprised of
representatives from appropriate disciplines and units within the Insurer, such as business
units, product specialists, actuarial, data science and analytics, underwriting, claims,
compliance, and legal.
b) Scope of responsibility and authority, chains of command, and decisional hierarchies.
c) The independence of decision-makers and lines of defense at successive stages of the AI
System life cycle.
d) Monitoring, auditing, escalation, and reporting protocols and requirements.
e) Development and implementation of ongoing training and supervision of personnel.
2.4 Specifically with respect to Predictive Models: the Insurer’s processes and procedures for
designing, developing, verifying, deploying, using, updating, and monitoring Predictive Models, including a
description of methods used to detect and address errors, performance issues, outliers, or unfair discrimination
in the insurance practices resulting from the use of the Predictive Model.
3.0 Risk Management and Internal Controls
The AIS Program should document the Insurer’s risk identification, mitigation, and management
framework and internal controls for AI Systems generally and at each stage of the AI System life cycle. Risk
management and internal controls should address the following items:
3.1 The oversight and approval process for the development, adoption, or acquisition of AI Systems,
as well as the identification of constraints and controls on automation and design to align and balance function
with risk.
3.2 Data practices and accountability procedures, including data currency, lineage, quality, integrity,
bias analysis and minimization, and suitability.
3.3 Management and oversight of Predictive Models (including algorithms used therein), including:
a) Inventories and descriptions of the Predictive Models.
b) Detailed documentation of the development and use of the Predictive Models.
c) Assessments such as interpretability, repeatability, robustness, regular tuning,
reproducibility, traceability, Model Drift, and the auditability of these measurements where
appropriate.
3.4 Validating, testing, and retesting as necessary to assess the generalization of AI System outputs
upon implementation, including the suitability of the data used to develop, train, validate, and audit the model.
Validation can take the form of comparing model performance on unseen data available at the time of model
development to the performance observed on data post-implementation, measuring performance against expert
review, or other methods.
3.5 The protection of non-public information, particularly consumer information, including
unauthorized access to the Predictive Models themselves.
3.6 Data and record retention.
3.7 Specifically with respect to Predictive Models: a narrative description of the model’s intended
goals and objectives and how the model is developed and validated to ensure that the AI Systems that rely on
such models correctly and efficiently predict or implement those goals and objectives.
4.0 Third-Party AI Systems and Data
Each AIS Program should address the Insurer’s process for acquiring, using, or relying on (i) third-party
data to develop AI Systems; and (ii) AI Systems developed by a third party, which may include, as appropriate, the
establishment of standards, policies, procedures, and protocols relating to the following considerations:
4.1 Due diligence and the methods employed by the Insurer to assess the third party and its data or
AI Systems acquired from the third party to ensure that decisions made or supported by such AI Systems that
could lead to Adverse Consumer Outcomes will meet the legal standards imposed on the Insurer itself.
4.2 Where appropriate and available, the inclusion of terms in contracts with third parties that:
a) Provide audit rights and/or entitle the Insurer to receive audit reports by qualified auditing
entities.
b) Require the third party to cooperate with the Insurer with regard to regulatory inquiries and
investigations related to the Insurer’s use of the third party’s products or services.
4.3 The performance of contractual rights regarding audits and/or other activities to confirm the
third party’s compliance with contractual and, where applicable, regulatory requirements.
SECTION 4: REGULATORY OVERSIGHT AND EXAMINATION CONSIDERATIONS
The Department’s regulatory oversight of Insurers includes oversight of an Insurer’s conduct in the state,
including its use of AI Systems to make or support decisions that impact consumers. Regardless of the existence
or scope of a written AIS Program, in the context of an investigation or market conduct action, an Insurer can
expect to be asked about its development, deployment, and use of AI Systems, or any specific Predictive Model,
AI System or application and its outcomes (including Adverse Consumer Outcomes) from the use of those AI
Systems, as well as any other information or documentation deemed relevant by the Department.
Insurers should expect those inquiries to include (but not be limited to) the Insurer’s governance
framework, risk management, and internal controls (including the considerations identified in Section 3). In
addition to conducting a review of any of the items listed in this Bulletin, a regulator may also ask questions
regarding any specific model, AI System, or its application, including requests for the following types of
information and/or documentation:
1. Information and Documentation Relating to AI System Governance, Risk Management, and Use
Protocols
1.1. Information and documentation related to or evidencing the Insurer’s AIS Program, including:
a) The written AIS Program.
b) Information and documentation relating to or evidencing the adoption of the AIS Program.
c) The scope of the Insurer’s AIS Program, including any AI Systems and technologies not
included in or addressed by the AIS Program.
d) How the AIS Program is tailored to and proportionate with the Insurer’s use and reliance on
AI Systems, the risk of Adverse Consumer Outcomes, and the Degree of Potential Harm to
Consumers.
e) The policies, procedures, guidance, training materials, and other information relating to the
adoption, implementation, maintenance, monitoring, and oversight of the Insurer’s AIS
Program, including:
i. Processes and procedures for the development, adoption, or acquisition of AI Systems,
such as:
(1) Identification of constraints and controls on automation and design.
(2) Data governance and controls, any practices related to data lineage, quality, integrity,
bias analysis and minimization, suitability, and Data Currency.
ii. Processes and procedures related to the management and oversight of Predictive Models,
including measurements, standards, or thresholds adopted or used by the Insurer in the
development, validation, and oversight of models and AI Systems.
iii. Protection of non-public information, particularly consumer information, including
unauthorized access to Predictive Models themselves.
1.2. Information and documentation relating to the Insurer’s pre-acquisition/pre-use diligence,
monitoring, oversight, and auditing of data or AI Systems developed by a third party.
1.3. Information and documentation relating to or evidencing the Insurer’s implementation and
compliance with its AIS Program, including documents relating to the Insurer’s monitoring and audit activities
respecting compliance, such as:
a) Documentation relating to or evidencing the formation and ongoing operation of the Insurer’s
coordinating bodies for the development, use, and oversight of AI Systems.
b) Documentation related to data practices and accountability procedures, including data
lineage, quality, integrity, bias analysis and minimization, suitability, and Data Currency.
c) Management and oversight of Predictive Models and AI Systems, including:
i. The Insurer’s inventories and descriptions of Predictive Models and AI Systems used by
the Insurer to make or support decisions that can result in Adverse Consumer Outcomes.
ii. As to any specific Predictive Model or AI System that is the subject of investigation or
examination:
(1) Documentation of compliance with all applicable AIS Program policies, protocols, and
procedures in the development, use, and oversight of Predictive Models and AI
Systems deployed by the Insurer.
(2) Information about data used in the development and oversight of the specific model
or AI System, including the data source, provenance, data lineage, quality, integrity,
bias analysis and minimization, suitability, and Data Currency.
(3) Information related to the techniques, measurements, thresholds, and similar
controls used by the Insurer.
d) Documentation related to validation, testing, and auditing, including evaluation of Model
Drift to assess the reliability of outputs that influence the decisions made based on Predictive
Models. Note that the nature of validation, testing, and auditing should be reflective of the
underlying components of the AI System, whether based on Predictive Models or Generative
AI.
2. Third-Party AI Systems and Data
In addition, if the investigation or examination concerns data, Predictive Models, or AI Systems collected
or developed in whole or in part by third parties, the Insurer should also expect the Department to request the
following additional types of information and documentation:
2.1 Due diligence conducted on third parties and their data, models, or AI Systems.
2.2 Contracts with third-party AI System, model, or data vendors, including terms relating to
representations, warranties, data security and privacy, data sourcing, intellectual property rights,
confidentiality and disclosures, and/or cooperation with regulators.
2.3 Audits and/or confirmation processes performed regarding third-party compliance with
contractual and, where applicable, regulatory obligations.
2.4 Documentation pertaining to validation, testing, and auditing, including evaluation of Model Drift.
The Department recognizes that Insurers may demonstrate their compliance with the laws that regulate
their conduct in the state in their use of AI Systems through alternative means, including through practices that
differ from those described in this bulletin. The goal of the bulletin is not to prescribe specific practices or to
prescribe specific documentation requirements. Rather, the goal is to ensure that Insurers in the state are aware
of the Department’s expectations as to how AI Systems will be governed and managed and of the kinds of
information and documents about an Insurer’s AI Systems that the Department expects an Insurer to produce
when requested.
As in all cases, investigations and market conduct actions may be performed using procedures that vary
in nature, extent, and timing in accordance with regulatory judgment. Work performed may include inquiry,
examination of company documentation, or any of the continuum of market actions described in the NAIC’s
Market Regulation Handbook. These activities may involve the use of contracted specialists with relevant subject
matter expertise. Nothing in this bulletin limits the authority of the Department to conduct any regulatory
investigation, examination, or enforcement action relative to any act or omission of any Insurer that the
Department is authorized to perform.