With the lack of clear federal policy framework around artificial intelligence, AI regulations that have implications for financial services can be complex and opaque. Firms practicing strong AI governance can form policies based on state level regulation, federal policy guidance, and precedent from SEC enforcement cases. This policy tracker (last updated February 2026) is designed to help you navigate this complex regulatory environment and apply strategies for mitigating AI compliance risk to finanical services uses of AI.

AI Regulation Tracker
Key federal, state & international regulations & upcoming policies.

Federal (US)

Executive Order 14179 — Removing Barriers to American Leadership in Artificial Intelligence

FederalActiveHigh Priority
Signed: January 23, 2025 | Effect: Revoked Biden Executive Order 14110- Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

President Trump's first AI executive order revoked the Biden-era Executive Order 14110 and directed agencies to remove barriers to US AI leadership. It rescinded related agency implementations and directed OMB to revise or rescind M-24-10 guidance inconsistent with the new pro-innovation posture.[1]

⚠ Compliance Watch

Executive Order 14179, Executive Order on Removing Barriers to American Leadership in Artificial Intelligence, commissioned an action plan to review all policies pursuant to EO 14110 that may not be consistent with the policy directives of EO 14179. The Trump administration continues to emphasize a deregulatory environment to foster AI innovation.

Source: White House, Jan. 23, 2025

Executive Order 14365 — Ensuring a National Policy Framework for Artificial Intelligence

FederalActiveHigh Priority
Signed: December 11, 2025

Declares a minimally burdensome national AI policy and establishes a DOJ AI Litigation Task Force to challenge "onerous" state AI laws on commerce or preemption grounds. Directs Commerce to evaluate state laws and condition BEAD broadband funding on compliance. Directs FTC to issue a policy statement applying Section 4 to AI model outputs, and FCC to explore a federal disclosure standard that preempts conflicting state laws. Does not independently preempt state law but signals federal-state tensions ahead.[2]

⚠ Key Tension

Colorado AI Act and California TFAIA are implicated. State laws remain enforceable until Congress acts or courts rule. Companies should continue compliance with enacted state laws while monitoring federal litigation and legislative recommendations.

SEC Regulation S-P (Privacy of Consumer Financial Information)

FederalActiveHigh Priority
Enacted: 2000 | Amended: May 2024 (effective May 2025 for large entities)

Requires broker-dealers, investment companies, and advisers to safeguard customer information. 2024 amendments significantly strengthen incident response requirements (72-hour notification to affected customers in some cases), third-party service provider oversight, and annual program reviews. AI systems that process customer financial data are within scope.[3]

🔴 Liability Risk

SEC has brought numerous Reg S-P enforcement actions (see Morgan Stanley Smith Barney, $35M). AI tools that ingest customer data without adequate vendor contracts or disposal protocols create direct exposure.

SEC Cybersecurity Disclosure Rules

FederalActiveHigh Priority
Adopted: July 26, 2023 | Effective: December 2023

Requires registrants to report material cybersecurity incidents on Form 8-K within 4 business days and provide annual disclosures on cybersecurity risk management, strategy, and governance (Form 10-K Item 1C). Particularly relevant where AI increases the attack surface (e.g., prompt injection, model exfiltration) or where AI-generated narratives are used in filings.[4]

🔴 Enforcement Pattern

SEC has charged Avaya, Blackbaud, Check Point, Flagstar, Mimecast, and Unisys for misleading cybersecurity disclosures.

NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)

FederalActive
Released: January 26, 2023

Voluntary framework that was widely adopted as a baseline for reasonable AI risk management. Referenced under Texas TRAIGA, Colorado AI Act compliance guidance, and federal procurement standards. Increasingly a litigation touchstone for whether a company exercised reasonable care regarding AI risk management.[5]

NIST Cybersecurity Framework (CSF) 2.0

FederalActive
Released: February 26, 2024

Updated baseline adding a new "Govern" function; now designed for organizations of all sizes and sectors, not just critical infrastructure. Frequently used alongside NIST AI RMF for securing AI workloads, vendor risk management, and incident response mapping.[6]

Bank Secrecy Act / USA PATRIOT Act (AML/KYC)

FederalActiveHigh Priority
Enacted: 1970 / 2001

Requires AML programs, suspicious activity reporting, and customer due diligence. Many firms deploy AI for transaction monitoring and alert triage. Regulators (FinCEN, OCC, Fed) expect adequate governance of AI-driven monitoring, including model validation, explainability documentation, and bias testing. Supervisory guidance continues to evolve.[7]

DOJ Final Rule — Bulk Sensitive Data Transfers to Countries of Concern

FederalActiveHigh Priority
Final rule published: January 8, 2025 | Effective: April 8, 2025

Implements Executive Order 14117. Prohibits and restricts certain transactions involving bulk US sensitive personal data (genomics, biometric, health, financial, geolocation, covered personal identifiers) with designated countries of concern (China, Russia, Iran, N. Korea, Cuba, Venezuela). Impacts AI training pipelines, vendor relationships, and cross-border data sharing with foreign subprocessors.[8]

FTC Section 5 — Unfair/Deceptive Acts & AI Marketing Guidance

FederalActive
Guidance released: February 27, 2023 |

FTC treats false or unsubstantiated claims of AI product functionality as a core enforcement priority, including "AI-washing" — overstating AI capabilities in marketing, disclosures, or regulatory filings. Executive Order 14365 has directed the FTC to issue a policy statement on the application of the FTC Act to AI models and identifying which state laws requiring alterations to AI model outputs per Section 5.[9]

🔴 Enforcement Pattern

In September of 2024, the FTC announced an enforcement sweep called Operation AI Comply, which included 5 enforcement actions against companies making false claims to promote AI tools and services. In December of 2025, the FTC issued an order to reopen and set aside a 2024 final consent order for one of the five companies, Rytr LLC. The FTC determined that the complaint was in violation of the Trump Administration's Executive Order 14179 and "America's AI Action Plan." [10]

America's AI Action Plan

FederalActive — Policy
Released: July 23, 2025

White House policy roadmap encouraging federal agencies to seek AI "dominance" through minimal regulation. Directed agencies to review and remove barriers to AI adoption. NIST's AI Safety Institute was renamed the Center for AI Standards and Innovation (CAISI) in June 2025, with a refocused mission on US innovation and security rather than safety/ethics evaluation.[11]

State (US)

Colorado Artificial Intelligence Act (SB 24-205) — as extended by SB25B-004

State — ColoradoUpcomingHigh Priority
Passed: 2024 | Operative date extended to: June 30, 2026

One of the first comprehensive US state AI laws. Imposes duties on developers and deployers of "high-risk" AI systems to use reasonable care to avoid algorithmic discrimination. Requires impact assessments, consumer notices, AG reporting, and transparency about AI-informed decisions. EO 14365 explicitly calls out the Colorado AI Act as potentially "forcing AI models to embed DEI" — it is the primary target of the DOJ Litigation Task Force.[12]

⚠ Federal Preemption Risk

Despite federal pressure, the Colorado AI Act remains law until courts rule otherwise. Companies with Colorado nexus (or portfolio companies deploying high-risk AI to Colorado residents) should continue compliance planning for the June 30, 2026 effective date.

Texas Responsible AI Governance Act (TRAIGA / HB 149)

State — TexasActiveHigh Priority
Signed: June 22, 2025 | Effective: January 1, 2026

TRAIGA prohibits AI systems developed or deployed with intent to: manipulate behavior through deceptive means, assign social scores, discriminate unlawfully, infringe constitutional rights, capture biometric data without consent, or facilitate deepfakes. AG enforcement with civil penalties ($10K–$200K per violation). Includes an innovation sandbox (36 months) and NIST AI RMF compliance as an affirmative defense.[13]

💡 Key Distinction

Unlike Colorado, Texas uses intent-based rather than impact-based liability. This makes litigation harder to sustain but also creates documentation obligations around developer/deployer intent.

California TFAIA — Transparency in Frontier AI Act (SB 53)

State — CaliforniaActiveHigh Priority
Signed: October 2025 | Effective: January 1, 2026

Governor Newsom signed SB 53 after vetoing the more expansive SB 1047 in September 2024. Requires developers of frontier AI models (>10^26 FLOPS) to publish safety protocols publicly and report serious safety incidents to state officials. Focuses on transparency rather than SB 1047's prescriptive safety requirements. Referenced in EO 14365 drafts as a model for acceptable state AI regulation.[14]

California AB 2013 — AI Training Data Transparency Act

State — CaliforniaActive
Signed: September 29, 2024 | Effective: January 1, 2026

Requires developers of generative AI systems made available to Californians to post high-level summaries of training datasets on their websites — including whether personal information or copyrighted material is included. Applies to any person who designs, codes, produces, or substantially modifies a GenAI system released on or after January 1, 2022. Similar to GPAI transparency requirements in the EU AI Act.[15]

California CPPA Regulations — ADMT, Risk Assessments & Cybersecurity Audits

State — CaliforniaActiveHigh Priority
Approved: September 22, 2025 | Effective: January 1, 2026

California Privacy Protection Agency regulations under CCPA/CPRA introduce mandate for disclosure about privacy policies and automated decision-making technology processes, risk assessments and opt-out options for automated decision-making technology (ADMT) for "significant decisions" (housing, employment, finance, healthcare). Requires mandatory cybersecurity audits for qualifying businesses and data protection risk assessments for high-risk processing. Businesses that process the personal information of California consumers are significantly affected.[16]

California SB 1047 — Safe and Secure Innovation for Frontier AI (VETOED)

State — CaliforniaVetoedHistorical
Passed legislature: August 2024 | Vetoed by Gov. Newsom: September 29, 2024

Would have required developers of models costing over $100M to train to create safety and security protocols, implement kill switches, conduct annual third-party audits, and allow AG enforcement for imminent critical harms. Vetoed because Newsom disagreed with using compute thresholds as the regulatory trigger rather than actual risk of harm. Influenced the design of subsequent SB 53 and the national debate about where to regulate (model level vs. use/deployment level).[17]

Illinois Biometric Information Privacy Act (BIPA)

State — IllinoisActiveHigh Priority
Enacted: 2008 | Amended: 2024

Regulates collection, use, and storage of biometric identifiers including fingerprints, retinal scans, voiceprints, and face geometry. Requires written informed consent before collection. Creates private right of action with statutory damages ($1,000 per negligent violation; $5,000 per intentional/reckless violation per biometric). AI voice transcription and facial recognition tools are at high risk of violating BIPA. 2024 amendment limits per-exposure damages in class actions.[18]

New York City Local Law 144 — Automated Employment Decision Tools (AEDTs)

Local — NYCActive
Law: 2021 | Enforced by: NYC DCWP

Regulates AI-driven hiring and promotion tools. Requires annual bias audits by independent auditors, public posting of audit summaries, and advance notice to candidates and employees. Applies when AEDTs are used to "substantially assist or replace" discretionary employment decisions. Relevant for portfolio companies' HR tech stacks and vendor due diligence.[19]

Washington My Health My Data Act

State — WashingtonActiveHigh Priority
Effective: July 23, 2023

Broad privacy obligations focused on "consumer health data" — broadly defined to include data that identifies a consumer's past, present, or future physical or mental health condition. Heightened consent requirements, geofencing restrictions, and private right of action. AI models trained on or inferring sensitive health data can trigger violations — particularly in adtech, wellness, and consumer apps.[20]

Utah Artificial Intelligence Policy Act (UAIPA)

State — UtahActive
Effective: May 1, 2024 | Amended by HB 452: 2025

Requires disclosures when generative AI interacts with consumers in regulated industries. Clarifies that businesses remain liable under consumer protection law for AI outputs. 2025 amendment (HB 452) adds an affirmative defense for providers maintaining documented AI governance measures — one of the first state laws to create a compliance-incentive safe harbor structure.[21]

Tennessee Ensuring Likeness Voice and Image Security Act (ELVIS Act)

State — TennesseeActive
Enacted: March 21, 2024 | Effective: July 1, 2024

Expands right-of-publicity protections to address AI-based imitation of individuals' voices and likenesses without authorization. Creates civil cause of action. Relevant for portfolio media companies, marketing tools, voice AI products, and synthetic media platforms — increases IP/rights clearance importance for any training data or output involving identifiable voices.[22]

International

EU Artificial Intelligence Act (Regulation 2024/1689)

EUActive — PhasedHigh Priority
Entered into force: August 1, 2024

Comprehensive AI law across the European Union, which promotes a risk-based framework to address prohibited practices, general purpose artificial intelligence model transparency, and strict compliance requirements for high-risk AI systems. Extra-territorial effect: applies to any AI services or products provided to the EU market. Penalties up to €35M or 7% of global annual turnover for the most serious violations.[23]

📅 Key Milestones

Already in effect: AI literacy obligations, prohibited system bans. Aug 2025: GPAI transparency. Aug 2026: High-risk system compliance. Note: EU Digital Omnibus (Nov 2025) proposes to streamline certain obligations — monitor for amendments.

EU General Data Protection Regulation (GDPR)

EUActiveHigh Priority
Effective: May 25, 2018 | Cumulative fines: €5.88B+ as of Jan 2025

Core data privacy framework governing AI systems that process personal data. GDPR applies alongside the EU AI Act — wherever an AI system involves personal data, both regulations apply. Key AI-specific requirements: DPIAs for high-risk automated processing, transparency obligations, Article 22 restrictions on solely automated decisions with significant effects, lawful basis for training data (consent or legitimate interests), and data minimisation/storage limitation. EDPB Opinion 28/2024 provides guidance on AI model training compliance.[24]

🔴 Enforcement Trajectory

€1.2B in fines in 2024 alone. AI-specific enforcement accelerating: Clearview AI fined €30.5M (Dutch DPA, 2024) for facial recognition GDPR violations; LinkedIn fined €310M (Irish DPC, Oct 2024) for behavioral profiling without consent. Regulators increasingly using GDPR as interim guardrail for AI regulation.

⚠ Digital Omnibus (Proposed — Nov 2025)

EU Commission's Digital Omnibus proposal (Nov 19, 2025) would clarify that legitimate interests can serve as a lawful basis for AI training data processing, consolidate breach reporting, and reduce SME compliance burdens. Still subject to Council/Parliament approval — expected 2027-2028 implementation.

EU Digital Operational Resilience Act (DORA)

EUActiveHigh Priority
Effective: January 17, 2025

Operational resilience requirements for EU financial entities and their critical ICT third-party service providers. Covers AI systems as ICT systems: risk management, incident classification and reporting, resilience testing (including penetration testing for significant firms), and ICT third-party risk management including mandatory contractual requirements. Overlap with Reg S-P safeguards and NIS2 for firms with EU operations.[25]

UK Pro-Innovation AI Regulatory Framework

UKActive — Principles-Based
White Paper: March 2023 | No dedicated AI Act as of Feb 2026

UK has no standalone AI Act; regulation flows through sector-specific regulators (ICO, FCA, CMA, Ofcom, MHRA) applying five cross-cutting principles: safety/security/robustness; transparency/explainability; fairness; accountability/governance; contestability/redress. The AI Safety Institute was renamed the AI Security Institute in February 2025, signalling a shift toward security and national-security risks. A comprehensive AI Bill was delayed in June 2025, with a more targeted approach now expected in 2026 parliamentary session.[26]

💡 Practical Note

UK GDPR applies to AI systems processing UK residents' data. EU AI Act applies where UK-developed AI outputs are used in the EU. ICO issued AI-in-recruitment audit outcomes and GDPR-meets-GenAI guidance in 2024-2025.

China — Interim Measures for Management of Generative AI Services

ChinaActiveHigh Priority
Issued: July 10, 2023 | Effective: August 15, 2023

Applies to providers of public-facing generative AI services in China. Obligations include: lawful training data use, personal information protection compliance, content management (prohibited categories), algorithmic transparency labeling, and filings with Cyberspace Administration of China. Applies to foreign providers whose services reach Chinese users. Relevant for portfolio companies offering GenAI products into China or using China-based distribution partners.[27]

Council of Europe — Framework Convention on AI

Council of EuropeActive — Treaty
Opened for signature: September 5, 2024

First legally binding international AI treaty. Requires signatory nations to take measures ensuring AI lifecycle activities align with human rights, democracy, and rule of law. Signatories include EU member states, US, UK, and others. Influences national legislative frameworks and public procurement standards. Not self-executing but signals global convergence direction.[28]

EU Digital Omnibus — Proposed AI Act + GDPR Amendments

EUProposedHigh Priority — Monitor
Published: November 19, 2025 | Legislative timeline: 2027–2028 (est.)

The European Commission's Digital Omnibus proposes simultaneous amendments to the GDPR, EU AI Act, Data Act, and ePrivacy Directive. AI Act changes would include postponement of some high-risk AI obligations and streamlined conformity assessment procedures. GDPR changes would clarify legitimate interests as a lawful basis for AI training data processing and consolidate breach reporting. Subject to Council and Parliament legislative process — expect material changes before enactment.[29]

SEC Enforcement Cases
15 enforcement actions (2018–2025). · Click row to expand details.

SEC Enforcement: AI & Cybersecurity Cases

Actions involving cyber disclosure failures, AI-washing, privacy violations & information barriers

All enforcement data from NYU SEED Law Database[30]

15 cases · 2018–2025
Click any row to expand details  ·  Esc to collapse
Entity Year Violation / Theme Penalty
Morgan Stanley Smith Barney LLC
Reg S-P
2022 Failure to protect ~15M customers' PII; deficient data disposal practices for decommissioned hardware.[32] $35M
Theme
Reg S-P safeguards; secure data disposal obligations
AI Relevance
AI tools that touch or retain customer PII (including training data, embeddings, logs) require equally rigorous retention/deletion protocols and vendor controls
Key Controls
Data minimization; encryption at rest/transit; vendor diligence; verified deletion procedures with audit trail
Concurrent Action
OCC simultaneously fined Morgan Stanley $60M — total regulatory exposure $95M+
AI Use Cases & Liabilities
PE and financial services AI use cases with enhanced descriptions, risk profiles, and applicable regulatory frameworks.

Private Equity Use Cases

1. AI-Powered Deal Sourcing & Origination

PE Core

Description: AI systems for identifying potential acquisition targets through pattern recognition across company financials, market trends, news articles, SEC filings, job postings, web traffic signals, pricing data, and relationship intelligence platforms. AI models can identify high-growth companies matching investment criteria — often before formal marketing processes begin — using hiring velocity, product traction, competitive position, and leadership network mapping.

💡 Key Benefits

Expands sourcing capacity continuously across public and third-party sectors, and has the potential to surface opportunities earlier in the cycle. Identifies introduction paths via relationship mapping. Can generate prospects against proprietary investment criteria at scale.

🔴 Liability Risks

MNPI exposure: Sourcing tools that access data streams containing material non-public information create Rule 10b-5 exposure. Data provenance: Third-party datasets may contain scraped personal data triggering data privacy laws. Model bias: Scoring models may systematically de-prioritize certain geographies/demographics. Competitive intelligence: Scraping/accessing data in violation of ToS or CFAA creates separate liability.

Applicable Regulations: SEC Rule 10b-5, GDPR/CCPA, Investment Advisers Act §206, CFAA

2. Automated Due Diligence & Document Review

PE Core

Description: AI systems conducting financial, legal, operational, and regulatory due diligence on potential acquisitions. Includes automated financial statement analysis, contract clause extraction and obligation mapping, compliance gap scanning, and risk scoring. LLM-based Q&A over virtual data rooms is increasingly common, as is AI-assisted projection modeling and scenario analysis.

💡 Key Benefits

Accelerated review of large data rooms. Abilities for clause extraction and variance detection across hundreds of contracts. Rapid scenario modeling for financial projections. Earlier identification of material issues before closing.

🔴 Liability Risks

Hallucinations: AI summaries may state inaccurate contract terms — missed material issues can create post-closing liability. Prompt injection: Malicious language embedded in uploaded documents can manipulate AI analysis. Data leakage: Confidential target data sent to third-party AI vendors may violate NDA terms or trigger data protection obligations. Over-reliance: Fiduciary exposure if investment committee relies on AI summaries without underlying document review.

Applicable Regulations: Investment Advisers Act §206, NIST AI RMF, NDA/contractual obligations, GDPR (if EU targets)

3. Portfolio Company Performance Monitoring

PE Core

Description: Potential for AI-powered aggregation of portfolio KPIs and operational metrics into standardized dashboards, which can use anomaly detection models to flag performance deviations. Natural language processing tools monitor news, reviews, and regulatory filings for updates. Cross-portfolio analytics use consistent KPI definitions and provide benchmarking against comparable companies.

💡 Key Benefits

Earlier warning signals for underperformance. Standardized cross-portfolio analytics that removes company-by-company inconsistencies in valuation methodology. Automated variance alerts can surface issues before quarterly board meetings. Operational efficiency benchmarking.

🔴 Liability Risks

MNPI exposure: Centralized monitoring data constitutes MNPI — access controls and information barriers are essential. Data breach: Aggregated portfolio data is a high-value target. Bias in assessment: AI models may unfairly score management performance. Wall breaks: If monitoring data flows to deal teams, information barriers may be breached.

Applicable Regulations: SEC Rule 10b-5, GDPR/CCPA, Investment Advisers Act §206, information barrier policies

4. LP Communications & Reporting Automation

PE Core

Description: Generative AI to draft quarterly investor letters, portfolio company updates, Q&A responses for LP requests, performance explanations, ESG/impact reporting, and first drafts of regulatory filing narratives (e.g., Form ADV, PFRD filings). AI systems can also draft responses to LP due diligence questionnaires (DDQs) by synthesizing information from internal databases.

💡 Key Benefits

Significant time savings on repetitive quarterly reporting. Consistent tone and format across LP communications. Faster response times to LP information requests. Cross-portfolio consistency in ESG metric reporting.

🔴 Liability Risks

Hallucinated misstatements: AI-generated financial or performance data that differs from the books creates securities fraud exposure. Selective disclosure / MNPI: AI may inadvertently include restricted information in LP communications. Filing accuracy: Inadequate human review of AI-drafted regulatory filings. Books and records: AI-assisted communications may not be captured in required recordkeeping systems.

Applicable Regulations: Investment Advisers Act §206 (fiduciary duties, marketing rules), Books-and-Records Rules 204-2, SEC cybersecurity disclosure rules for material incidents

5. Value Creation at Portfolio Companies

PE Operational

Description: AI identifying and quantifying operational improvement opportunities across portfolio companies: cost reduction via vendor consolidation, dynamic pricing optimization, supply chain analytics, workforce productivity modeling, and customer churn prediction. AI benchmarking tools compare portfolio company efficiency against peers and identify operational gaps. GP operating partners increasingly use proprietary AI tools deployed across the portfolio.

🔴 Liability Risks

Disparate impact: Workforce optimization AI may have discriminatory impact on protected classes — EEOC exposure, state AI employment laws (NYC LL144, etc.). Employee privacy: Analytics on workforce productivity may trigger state privacy laws. Short-term optimization: AI over-optimizes for margin at expense of longer-term sustainability or employee welfare. Union/contractual obligations: AI-driven workforce changes may violate existing agreements.

Applicable Regulations: Employment discrimination laws (Title VII, ADEA), state AI employment laws, privacy laws, NLRA considerations

Financial Services Use Cases

6. AI Transcription & Voice Processing

FS Operational

Description: AI transcription for client meetings, supervision, compliance monitoring, and call center operations. Voice AI systems may generate voiceprints as a byproduct — potentially creating biometric identifiers regulated under BIPA (Illinois) and similar laws. Use cases include: automated meeting notes, compliance surveillance of advisor calls, sentiment analysis of client interactions, and voice-activated trading systems.

🔴 Liability Risks

Biometric / BIPA: Voiceprints created without written informed consent trigger BIPA exposure with private right of action and liquidated damages. Cross-border transfer: Recordings and transcripts sent to foreign AI vendors may violate DOJ EO 14117 bulk data rules or GDPR. Retention: Retention periods for voice data must align with BIPA's requirements and SEC recordkeeping rules. Privilege: AI generated documents or transcripts may not be considered under attorney-client privilege.

Applicable Regulations: BIPA, GDPR/UK GDPR, CCPA/CPRA, SEC Rule 17a-4, DOJ bulk data rules

7. AI Document Analysis & Q&A

FS Operational

Description: LLM-based Q&A and summarization over financial, legal, and regulatory documents — contracts, SEC filings, fund documents, compliance policies, research reports. RAG (Retrieval-Augmented Generation) systems allow natural language queries over large internal document repositories. Use cases include regulatory research, contract review, policy gap analysis, and generating first drafts of disclosure documents.

🔴 Liability Risks

MNPI wall risk: Shared document repositories/AI tools may inadvertently provide deal teams access to MNPI held by other business units. Prompt injection: Malicious content in uploaded documents can manipulate AI to exfiltrate data or provide misleading outputs. Logging gaps: AI Q&A sessions may not be captured in required books-and-records systems. Unverified summaries: Reliance on AI-generated contract summaries without underlying review creates fiduciary risk.

Applicable Regulations: SEC Rule 10b-5, Information barrier policies, Books-and-Records Rules, Reg S-P safeguards

8. AI-Generated Content & Marketing

FS Marketing

Description: Generative AI for marketing materials, investor updates, research publications, social media content, website copy, regulatory filing narratives, and client communications. AI may also be used to generate synthetic data for illustrations, or to create visuals and infographics for investor presentations. This includes both internal drafting tools and externally facing automated content systems.

🔴 Liability Risks

AI-washing: Overstating AI capabilities in marketing materials — direct SEC/FTC enforcement risk (Delphia, Global Predictions, Presto). Hallucinated facts: Performance figures, fund returns, or market statistics that are fabricated by AI in communications. Inconsistent disclosures: AI-generated content may contradict prior disclosures or financial records. IP/copyright: AI-generated content incorporating copyrighted training data may create infringement exposure. Tennessee ELVIS Act/deepfakes: Synthetic voice/likeness in marketing without consent creates liability.

Applicable Regulations: Investment Advisers Act Marketing Rule (Rule 206(4)-1), FTC Section 5, SEC cybersecurity disclosure rules, IP/copyright laws, ELVIS Act (TN)

9. Trading Algorithms & Robo-Advisory

FS Trading

Description: AI-driven trading strategies, portfolio optimization algorithms, execution management, and automated investment advice platforms. Includes: quantitative factor models, reinforcement learning-based execution, robo-advisory platforms providing automated recommendations, and AI-augmented research used in discretionary investment processes. The line between "AI-assisted" and "AI-directed" investment decisions has significant regulatory implications.

🔴 Liability Risks

Suitability / fiduciary: Automated recommendations must meet fiduciary standards — model drift over time may cause systematic suitability failures. Conflicts embedded in optimization: Objective functions may inadvertently optimize for GP economics. Supervision failures: Advisers remain responsible for AI-generated recommendations. Model decay: Trading models trained in different market regimes may behave unexpectedly.

Applicable Regulations: Investment Advisers Act §206, FINRA Rule 3110 (supervision), Regulation Best Interest, applicable market conduct rules

10. Fraud Detection & AML/KYC

FS Compliance

Description: AI for transaction monitoring, sanctions screening, suspicious activity detection, network relationship analysis, and alert triage. NLP tools scan unstructured data (news, filings) for adverse media. ML models score transactions, counterparties, and relationships for risk. Alert management systems use AI to prioritize human review queues. Customer risk rating models use ML features for dynamic scoring.

💡 Regulatory Expectation

OCC, FinCEN, and Fed guidance increasingly expects robust governance of AI-driven AML models, including validation, explainability, bias testing, and change management procedures analogous to model risk management expectations.

🔴 Liability Risks

False negatives: AI model misses material suspicious activity — direct BSA enforcement risk. Explainability: Suspicious Activity Reporting narratives generated by AI must be defensible in regulatory examination. Feedback-loop bias: Training data reflecting historical enforcement patterns may perpetuate demographic bias. Model validation: Insufficient independent validation creates supervisory findings.

Applicable Regulations: BSA/Patriot Act, FinCEN guidance, OCC model risk management guidance (SR 11-7), OFAC sanctions

11. AI Security Vulnerabilities

FS Cyber / Risk

Description: Financial firms must defend against AI-specific attacks and disclose appropriately. Attack classes include: prompt injection (instructions embedded in user inputs or documents to override system instructions), data poisoning (manipulation of training data to alter model behavior), model inversion/extraction (recovering training data or model weights), adversarial examples (inputs designed to fool classifiers), agent/tool misuse (autonomous agents accessing unauthorized systems), and retrieval attacks (manipulating retrieval-augmented generation systems to return sensitive data).

🔴 Liability Risks

AI security incidents can trigger SEC cybersecurity disclosure obligations. Reg S-P safeguards apply to systems that access customer data. DORA (EU) operational resilience requirements apply. Input validation, output filtering, adversarial testing, access logging, and monitoring are control expectations — their absence will be examined post-incident.

Applicable Regulations: SEC cybersecurity disclosure rules (Form 8-K), Reg S-P safeguards, DORA (EU), NIST CSF 2.0, NIST AI RMF

12. AI to Support or Automate Compliance Functions

FS Compliance

Description: AI used to support or automate compliance activities, including: surveillance alert triage and prioritization, marketing review (checking for misleading statements), KYC/AML alert investigation, evidence collection and case management, regulatory change tracking and policy mapping, drafting regulatory filings and responses, and conducting compliance risk assessments.

💡 Common AI-Enabled Compliance Workflows

Alert clustering and prioritization, automated electronic communication surveillance, policy-to-control mapping, KYC/AML monitoring, horizon scanning for regulatory changes.

🔴 Liability Risks

(1) False negatives / missed escalations: AI-driven triage that misclassifies a material issue creates direct supervisory liability. (2) Hallucinated compliance violations: AI can produce false positives or hallucinate compliance violations. (3) Automation bias: Compliance staff over-relying on AI recommendations without independent judgment. (4) Vendor risk: AI vendors with access to examination materials, SAR data, or investigation files create privilege and confidentiality concerns. (5) Auditability: AI-assisted compliance decisions may not be adequately documented for examination purposes.

Applicable Regulations: FINRA Rule 3110, Investment Advisers Act §206, applicable AML/BSA supervision rules, books-and-records requirements

13. AI for Research and Data Collection

FS Compliance

Description: AI used to conduct web based research and collect data has a high risk of breaking alternative data, data privacy protection, or webscraping protocols. Due to risks of potential receipt of MNPI and misuse of customer PII, many financial services firms have strict restrictions on data collection procedures.

💡 Common AI-Enabled Workflows

Analyze large datasets of text, extract data from websites, run automated schedules to collect new data or update existing data sets, ensure accuracy of data.

🔴 Liability Risks

(1) Regulatory risk: Data privacy, webscraping violations (2) Bias: Biased AI algorithms can lead to collection of biased data. (3) Errors: AI data collection tools have risks of erronous data interpretation or AI hallucination.

Applicable Regulations: FINRA Rule 3110, Investment Advisers Act §206, applicable AML/BSA supervision rules, books-and-records requirements

14. Agentic & Multi-Agent AI Systems

Emerging Risk

Description: AI agents that autonomously take actions across systems — retrieving data, calling APIs, executing code, sending communications, and initiating workflows. Multi-agent architectures involve multiple AI models collaborating: one orchestrator delegates tasks to specialized sub-agents. These are emerging across deal origination (autonomous market scanning agents), compliance (autonomous investigation agents), and portfolio management (autonomous reporting agents).

💡 Why Multi-Agent Risk Differs

Agent chaining dramatically increases attack surface: a compromised or misconfigured agent can instruct other agents to access restricted data, ignore safety constraints, or take unauthorized actions. Inter-agent messages may not be logged or monitored by existing compliance systems. Each "hop" in an agent chain is an opportunity for privilege escalation or instruction override. AI agents that leverage tools to collect information are at risk of tools leaking information or providing inaccurate information.

🔴 Liability Risks

(1) MNPI wall erosion: Agents with cross-system access can inadvertently traverse information barriers. (2) Privilege escalation: Agents may access systems or data beyond their intended scope. (3) Jailbreak propagation: Successful prompt injection of one agent can cascade to others. (4) Monitoring gaps: Inter-agent communications and tool calls often fall outside existing supervision architectures. (5) Accountability gaps: Unclear human accountability for autonomous agent decisions. (6) Regulatory novelty: Regulators have not yet issued comprehensive guidance on agentic AI accountability.

Applicable Regulations: Investment Advisers Act §206, information barrier requirements, FINRA supervision rules, Reg S-P safeguards, Virtu Financial precedent
Sources & References
Primary sources, regulatory bodies, legal databases, and numbered footnotes for all claims in this tracker.

Federal Resources

SEC — U.S. Securities and Exchange Commission

Regulatory guidance, enforcement releases, rules, and amendments. Primary source for all SEC enforcement cases and Rule S-P, Rule 10b-5, and disclosure-related content.

Visit sec.gov →

NIST — National Institute of Standards and Technology

AI RMF 1.0 (Jan 2023), CSF 2.0 (Feb 2024), AI Safety Institute resources (now CAISI).

Visit nist.gov/artificial-intelligence →

White House / OMB

Executive orders on AI (EO 14179, EO 14365), OMB Memoranda (M-24-10), America's AI Action Plan, and related policy artifacts.

Visit whitehouse.gov →

Federal Register

Official publication for US federal rules. Primary source for DOJ EO 14117 final rule, Reg S-P amendments, and SEC cybersecurity disclosure rules.

Visit federalregister.gov →

FTC — Federal Trade Commission

Consumer protection enforcement, AI marketing guidance, and Section 5 policy statements.

Visit ftc.gov →

State Resources

CNTR AISLE — Brown University

AI legislation tracking portal from Brown University's Center for Technological Responsibility, Reimagination and Redesign (CNTR). Database of 5,000+ AI-related bills across all 50 states and federal level, analyzed across 6 governance dimensions. Primary source for all State AI Regulation Map data.

Visit cntr-aisle.org →

International Resources

European Commission

Official EU AI Act text, implementation timelines, Digital Omnibus proposals (Nov 2025), and AI Office resources.

Visit commission.europa.eu →

EDPB — European Data Protection Board

GDPR enforcement decisions, Opinion 28/2024 on AI model training, and Coordinated Enforcement Framework (CEF) actions.

Visit edpb.europa.eu →

Council of Europe

Framework Convention on AI and Human Rights (Sept 2024) and related materials.

Visit coe.int →

UK DSIT — Department for Science, Innovation and Technology

UK AI regulation White Paper, AI Opportunities Action Plan (Jan 2025), and AI Security Institute resources.

Visit DSIT →

Legal & Research Databases

NYU SEED Law Database

Searchable securities enforcement action database. Used for comprehensive SEC enforcement case research.

Visit seed.law.nyu.edu →

DLA Piper GDPR Fines Survey

Annual GDPR enforcement data, breach notification statistics, and AI enforcement trends.

Visit dlapiper.com →

Footnotes

[1] EO 14179 — "Removing Barriers to American Leadership in Artificial Intelligence," Jan. 23, 2025. Source →
[2] EO 14365 — "Ensuring a National Policy Framework for Artificial Intelligence," Dec. 11, 2025. Source →
[3] Reg S-P 2024 Amendments — SEC Final Rule, "Regulation S-P: Privacy of Consumer Financial Information and Safeguarding Customer Information," effective May 2024/2025. Source →
[4] SEC Cybersecurity Rules — "SEC Adopts Rules on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure by Public Companies," July 26, 2023. Source →
[5] NIST AI RMF 1.0 — National Institute of Standards and Technology, "AI Risk Management Framework," January 26, 2023. Source →
[6] NIST CSF 2.0 — "NIST Releases Version 2.0 of Landmark Cybersecurity Framework," February 26, 2024. Source →
[7] BSA / AML Regulations — OCC overview of Bank Secrecy Act and related AML regulations governing financial institution compliance programs. Source →
[8] DOJ Bulk Data Rule — Federal Register, DOJ final rule implementing EO 14117 on preventing access to US sensitive personal data by countries of concern, effective April 8, 2025. Source →
[9] FTC AI Claims Enforcement — Holland & Knight analysis of FTC's evaluation of deceptive AI claims and application of the FTC Act to AI model outputs, June 2025. Source →
[10] FTC — Rytr Order Rescission — FTC press release: FTC reopens and sets aside the Rytr LLC final order in response to Trump Administration's AI Action Plan, December 2025. Source →
[11] America's AI Action Plan — White House, July 23, 2025; CAISI renaming: Commerce Dept. announcement, June 2025. Source →
[12] Colorado AI Act (SB 24-205) — Colorado General Assembly, effective June 30, 2026 (extended by SB25B-004). Source →  |  Extension: SB25B-004 →
[13] Texas TRAIGA (HB 149) — Signed June 22, 2025, effective January 1, 2026. Baker Botts analysis. Source →
[14] California TFAIA (SB 53) — Signed October 2025, effective January 1, 2026. Georgetown CSET analysis. Source →
[15] California AB 2013 — AI Training Data Transparency Act, signed September 29, 2024, effective January 1, 2026. Cooley client alert. Source →
[16] California CPPA ADMT Regulations — OAL approval September 22, 2025; effective January 1, 2026. Automated decision-making technology rules under CCPA/CPRA. Source →
[17] California SB 1047 (Vetoed) — Vetoed September 29, 2024. Morgan Lewis client update. Source →
[18] Illinois BIPA — Illinois Biometric Information Privacy Act, 740 ILCS 14. Regulates collection and use of biometric identifiers; creates private right of action. Source →
[19] NYC Local Law 144 (AEDT) — Enforced by NYC DCWP. Requires annual bias audits for automated employment decision tools. Source →
[20] Washington My Health My Data Act — Signed April 27, 2023. WA Attorney General overview of consumer health data privacy requirements. Source →
[21] Utah UAIPA + HB 452 — Effective May 1, 2024; amended 2025 to add compliance-incentive safe harbor. Davis Polk analysis. Source →
[22] Tennessee ELVIS Act — Enacted March 21, 2024, effective July 1, 2024. Extends right-of-publicity protections to AI-based voice and likeness imitation. Latham & Watkins analysis. Source →
[23] EU AI Act (Regulation 2024/1689) — Entered into force August 1, 2024. Penalties and enforcement provisions at Article 99. Source →  |  Full timeline: AI Act timeline →
[24] GDPR + AI Enforcement — DLA Piper GDPR Fines Survey 2025 (€5.88B cumulative; €1.2B in 2024); EDPB Opinion 28/2024 on AI training. Source →
[25] EU DORA — Digital Operational Resilience Act, effective January 17, 2025. EIOPA overview. Source →
[26] UK AI Regulation — DSIT White Paper, "AI Regulation: A Pro-Innovation Approach" (March 2023); AI Security Institute renaming (Feb 2025); AI Opportunities Action Plan (Jan 2025). Source →
[27] China GenAI Interim Measures — Issued July 10, 2023, effective August 15, 2023. Library of Congress Global Legal Monitor summary. Source →
[28] Council of Europe Framework Convention on AI — First legally binding international AI treaty, opened for signature September 5, 2024. Source →
[29] EU Digital Omnibus (Proposed) — European Commission, November 19, 2025. Proposed simultaneous amendments to GDPR, EU AI Act, Data Act, and ePrivacy Directive. White & Case analysis. Source →
[30] NYU SEED Law Enforcement Database — Searchable database of AI-related enforcement actions across federal and state regulators. Source →
[31] SEC v. Altaba, Inc. (Yahoo!) — SEC press release: Altaba, Formerly Known as Yahoo!, Charged With Failing to Disclose Massive Cybersecurity Breach; Agrees To Pay $35 Million (2018). Source →
[32] SEC v. Morgan Stanley Smith Barney LLC — SEC press release: Morgan Stanley Smith Barney to Pay $35 Million for Extensive Failures to Safeguard Personal Information of Millions of Customers (2022). Source →
[33] SEC v. Intercontinental Exchange (ICE) — SEC press release: SEC Charges Intercontinental Exchange and Nine Affiliates Including the New York Stock Exchange With Failing to Inform the Commission of a Cyber Intrusion (2024). Source →
[34] SEC v. Unisys Corp. — SEC press release: SEC Charges Four Companies With Misleading Cyber Disclosures (2024). Source →
[35] SEC v. Flagstar Bancorp — SEC administrative proceeding: SEC Charges Flagstar for Misleading Investors About Cyber Breach. Source →
[36] SEC v. Blackbaud, Inc. — SEC complaint: SEC charges Blackbaud for misleading disclosures about 2020 ransomware attack (2023). Source →
[37] SEC v. Virtu Financial / Virtu Americas — SEC litigation release: Virtu Financial, Inc.; Virtu Americas LLC — information barrier failures (2025). Source →
[38] SEC v. Avaya Holdings Corp. — SEC press release: SEC Charges Four Companies With Misleading Cyber Disclosures (2024). Source →
[39] SEC v. Pearson plc — SEC administrative proceeding: SEC Charges Pearson Plc for Misleading Investors About Cyber Breach. Source →
[40] SEC v. Check Point Software Technologies Ltd. — SEC press release: SEC Charges Four Companies With Misleading Cyber Disclosures (2024). Source →
[41] SEC v. Mimecast — SEC press release: SEC Charges Four Companies With Misleading Cyber Disclosures (2024). Source →
[42] SEC v. First American Financial Corporation — SEC administrative order: disclosure controls failures related to vulnerability exposing sensitive financial records (2021). Source →
[43] SEC v. Delphia (USA) Inc. — SEC press release: SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence (2024). Source →
[44] SEC v. Global Predictions Inc. — SEC press release: SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence (2024). Source →
[45] SEC v. Presto Automation Inc. — SEC administrative proceeding: SEC Charges Restaurant-Technology Company Presto Automation for Misleading Statements About AI Product (2025). Source →
State AI Regulation Map
AI-related legislation by state. Darker shading indicates more comprehensive AI legislative activity. Hover over any state for details.
Source: CNTR AISLE — Center for Technological Responsibility, Reimagination and Redesign, Brown University (cntr-aisle.org). Data covers AI-related bills introduced from January 2023 through early 2026. Tier classifications are based on the comprehensiveness and scope of each state's AI legislation, with bill count as a secondary indicator of activity.
ALAKAZCOFLGAINKSMEMNNCNDOKPASDTXWYMOWVILNMARCAHIIAKYMIMSMTNYOHORTNUTVAWAWINESCIDNVLA
Regulatory Intensity →
Comprehensive — Multiple enacted laws, broad scope (CA, CO, TX, IL)
Significant — Major enacted legislation (NY, WA, TN, UT)
Moderate — Active bills or sector laws (VA, CT, OR, MN, OH, IN, MA, NJ, MD, DE)
Minor / Disclosure — Deepfake laws, study committees, early activity
None enacted — No AI-specific legislation as of Feb 2026

State AI Legislation Profiles

Comprehensive Framework

New York

478 bills. A06453 Frontier AI Models regulation; S07263 chatbot impersonation liability. Comprehensive bills spanning hiring, consumer protection, transparency, and automated decision-making. Highest AI bill volume in the US.

Illinois

346 bills. Biometric Information Privacy Act (BIPA) with broad AI implications. Therapy resources oversight. AI governance bills across multiple sectors including employment, healthcare, and education.

California

311 bills. SB53 Frontier AI Models; AB 2930 Automated Decision Systems; SB 1047 Safe Frontier AI Innovation Act. Broad legislation spanning AI transparency, automated systems, worker protections, and health AI.

Texas

112 bills. HB149 comprehensive regulation of AI systems with civil penalties. Broad scope spanning multiple sectors with an AI regulatory framework and enforcement mechanisms.

Colorado

40 bills. Colorado AI Act SB 24-205 — first comprehensive US AI law. Consumer Protections for AI; Conversational AI Service Operator Requirements; Intimate Digital Depictions protections.

Significant Legislation

Massachusetts

221 bills. AI accountability and transparency legislation. Consumer protection in AI interactions. Algorithmic rent setting prohibition. Multiple AI study orders across governance areas.

New Jersey

218 bills. AI use notification requirements. School AI instruction mandates. Expedited AI approval processes. Automated decision-making regulation across multiple sectors.

Virginia

152 bills. HB2094 high-risk AI development, deployment, and use with civil penalties. AI Developer Act. Broad AI governance provisions spanning multiple policy areas.

Rhode Island

139 bills. Ethical AI development and deployment regulations. Automated Decision Tools Act. AI Accountability Act. Therapy AI oversight provisions.

Maryland

119 bills. AI Policies and Procedures Act. Consumer reporting algorithmic systems. Health insurance AI evaluation requirements. Child exploitation AI protections.

Minnesota

110 bills. RAISE Act (AI safety and disclosure requirements). GenAI in official records prohibition. AI worker displacement protections. Employee notice requirements for AI impacts.

Oklahoma

110 bills. Responsible Deployment of AI Systems Act. AI Council. AI Regulatory Sandbox Program. AI Workforce Development Program. Chatbot minor protections.

Washington

109 bills. Algorithmic discrimination protection. Government AI procurement guidelines. Automated decision-making systems regulation across hiring and public services.

Utah

84 bills. AI Amendments with broad scope. Surveillance and Investigatory Technology Amendments. Data sharing provisions. AI governance across multiple areas.

Connecticut

40 bills. SB00002 Comprehensive AI Act. Automated Decision-Making and Personal Data Privacy. Automated Decision Systems Protections for Employees. AI workforce study.

Vermont

58 bills. Regulating developers and deployers of automated decision systems. AI defenses in civil actions. AI and elections regulation. Child exploitation materials AI protections.

Moderate Activity

Hawaii

167 bills. AI protection of minors. Civil rights AI applicability. Multiple AI study resolutions across government agencies.

Florida

140 bills. AI Bill of Rights. Law enforcement AI provisions. CyberBay initiative. General appropriations with AI provisions.

South Carolina

122 bills. AI consumer protection. AI in education. Health insurance AI provisions. Medicaid fraud AI provisions.

Missouri

111 bills. Software accountability for education. Data center construction moratorium. AI licensing board restrictions. Constitutional AI amendment petition.

Tennessee

93 bills. Multiple AI-related code amendments spanning education, consumer protection, and technology regulation across various sectors.

Mississippi

92 bills. AI definition legislation. AI sexual assault criminal offense. AI in mental and behavioral health prohibition. Higher education AI provisions.

Pennsylvania

89 bills. AI workforce report. AI training disclosure. Federal AI law opposition resolution. Task Force on Conversational AI.

Iowa

86 bills. AI systems in state agencies. Chatbot deployer requirements. Utilization review AI. AI output ownership. Computer science education AI.

Georgia

77 bills. Election deepfake criminal offense. Insurance AI decision regulation. LEGACY Act (Likeness/Expression/GenAI). Surveillance pricing act.

West Virginia

73 bills. Deep fake media distribution act. Student AI accountability requirements. Advanced baseload energy AI provisions.

Arizona

67 bills. Election deepfake prohibition. Biometric identifiers commercial use prohibition. AI nursing tasks pilot. AI content verification.

Alabama

60 bills. AI-assisted review of state agency rules. Deepfake media criminal and civil penalties. Various AI-adjacent legislative activity.

Montana

60 bills. Limit government AI use. Revise AI laws. Name/voice/likeness AI protections. Health insurance AI regulation.

North Carolina

59 bills. Sexual exploitation prevention. Algorithmic rent fixing prohibition. Healthcare cost AI provisions. Rate payer protection.

New Mexico

54 bills. AI Synthetic Content Accountability Act. AI transparency in government. AI ethics as school elective.

Michigan

50 bills. Falsely depicting individuals civil action. Employment automated decision-making. Companion chatbot minors prohibition. AI energy authority.

Ohio

47 bills. Pricing algorithm regulation. AI Study Commission. Frontier Technologies Commission. Data center study.

Kentucky

44 bills. Protection of information act. Mental health chatbots regulation. Office of Public Defense AI. Health command establishment.

Louisiana

42 bills. AI insurance fairness act. Surveillance pricing discrimination. AI in campaign materials disclosure. Industrial AI study.

Kansas

37 bills. Age-appropriate design code act. AI in medical decisions transparency act. AI sexual exploitation of children protections.

Minor / Disclosure Only

Wisconsin

40 bills. AI in health claims denial regulation. Insurer claims AI auditing. Social media AI for minors.

Idaho

35 bills. Disclosing explicit synthetic media. AI review of administrative rules. Campaign finance AI transparency.

Nevada

34 bills. AI in health care requirements. Education AI provisions. Department of Corrections AI use.

Oregon

33 bills. AI companions regulation. General AI regulation. Criminal offenses AI in work. Elections AI provisions.

Indiana

32 bills. Foreign adversary protections. Health claims AI. Virtual currency kiosks. Mostly tangential AI references.

Nebraska

26 bills. AI Consumer Protection Act. Conversational AI Safety Act. AI Risk Management Transparency Act.

New Hampshire

23 bills. AI oversight regulation. AI technologies regulation. Personal data privacy from websites. State AI use exceptions.

Maine

22 bills. Synthetic media in campaign advertising. AI chatbot access for children. AI in mental health. Technology in classrooms study.

North Dakota

21 bills. Transportation AI study. Licensure retention study. Research technology park grants. Law enforcement robot use.

Alaska

20 bills. SB2 AI/Deepfakes/Cybersecurity/Data Transfers. SB33 Synthetic Media Elections. AI Legislative Task Force. AI disclosure in campaigns.

South Dakota

17 bills. Chatbot regulation for minors. AI in health insurance. Consumer chatbot notice requirements.

Arkansas

16 bills. SB258 Digital Responsibility Safety and Trust Act. AI-generated content ownership. Deepfake criminal offenses.

Wyoming

15 bills. Deepfake protection for kids. Ban on government social scoring with AI. K-12 public school AI provisions.

Minimal Activity

Delaware

9 bills. AI Commission amendments. Large energy use facilities. Minimal AI-specific legislative activity.