Cookies
We use cookies and related technologies to personalize and enhance your experience. By using this site you agree to the use of cookies and related tracking technologies.
With the lack of clear federal policy framework around artificial intelligence, AI regulations that have implications for financial services can be complex and opaque. Firms practicing strong AI governance can form policies based on state level regulation, federal policy guidance, and precedent from SEC enforcement cases. This policy tracker (last updated February 2026) is designed to help you navigate this complex regulatory environment and apply strategies for mitigating AI compliance risk to finanical services uses of AI.
President Trump's first AI executive order revoked the Biden-era Executive Order 14110 and directed agencies to remove barriers to US AI leadership. It rescinded related agency implementations and directed OMB to revise or rescind M-24-10 guidance inconsistent with the new pro-innovation posture.[1]
Executive Order 14179, Executive Order on Removing Barriers to American Leadership in Artificial Intelligence, commissioned an action plan to review all policies pursuant to EO 14110 that may not be consistent with the policy directives of EO 14179. The Trump administration continues to emphasize a deregulatory environment to foster AI innovation.
Declares a minimally burdensome national AI policy and establishes a DOJ AI Litigation Task Force to challenge "onerous" state AI laws on commerce or preemption grounds. Directs Commerce to evaluate state laws and condition BEAD broadband funding on compliance. Directs FTC to issue a policy statement applying Section 4 to AI model outputs, and FCC to explore a federal disclosure standard that preempts conflicting state laws. Does not independently preempt state law but signals federal-state tensions ahead.[2]
Colorado AI Act and California TFAIA are implicated. State laws remain enforceable until Congress acts or courts rule. Companies should continue compliance with enacted state laws while monitoring federal litigation and legislative recommendations.
Requires broker-dealers, investment companies, and advisers to safeguard customer information. 2024 amendments significantly strengthen incident response requirements (72-hour notification to affected customers in some cases), third-party service provider oversight, and annual program reviews. AI systems that process customer financial data are within scope.[3]
SEC has brought numerous Reg S-P enforcement actions (see Morgan Stanley Smith Barney, $35M). AI tools that ingest customer data without adequate vendor contracts or disposal protocols create direct exposure.
Requires registrants to report material cybersecurity incidents on Form 8-K within 4 business days and provide annual disclosures on cybersecurity risk management, strategy, and governance (Form 10-K Item 1C). Particularly relevant where AI increases the attack surface (e.g., prompt injection, model exfiltration) or where AI-generated narratives are used in filings.[4]
SEC has charged Avaya, Blackbaud, Check Point, Flagstar, Mimecast, and Unisys for misleading cybersecurity disclosures.
Voluntary framework that was widely adopted as a baseline for reasonable AI risk management. Referenced under Texas TRAIGA, Colorado AI Act compliance guidance, and federal procurement standards. Increasingly a litigation touchstone for whether a company exercised reasonable care regarding AI risk management.[5]
Updated baseline adding a new "Govern" function; now designed for organizations of all sizes and sectors, not just critical infrastructure. Frequently used alongside NIST AI RMF for securing AI workloads, vendor risk management, and incident response mapping.[6]
Requires AML programs, suspicious activity reporting, and customer due diligence. Many firms deploy AI for transaction monitoring and alert triage. Regulators (FinCEN, OCC, Fed) expect adequate governance of AI-driven monitoring, including model validation, explainability documentation, and bias testing. Supervisory guidance continues to evolve.[7]
Implements Executive Order 14117. Prohibits and restricts certain transactions involving bulk US sensitive personal data (genomics, biometric, health, financial, geolocation, covered personal identifiers) with designated countries of concern (China, Russia, Iran, N. Korea, Cuba, Venezuela). Impacts AI training pipelines, vendor relationships, and cross-border data sharing with foreign subprocessors.[8]
FTC treats false or unsubstantiated claims of AI product functionality as a core enforcement priority, including "AI-washing" — overstating AI capabilities in marketing, disclosures, or regulatory filings. Executive Order 14365 has directed the FTC to issue a policy statement on the application of the FTC Act to AI models and identifying which state laws requiring alterations to AI model outputs per Section 5.[9]
In September of 2024, the FTC announced an enforcement sweep called Operation AI Comply, which included 5 enforcement actions against companies making false claims to promote AI tools and services. In December of 2025, the FTC issued an order to reopen and set aside a 2024 final consent order for one of the five companies, Rytr LLC. The FTC determined that the complaint was in violation of the Trump Administration's Executive Order 14179 and "America's AI Action Plan." [10]
White House policy roadmap encouraging federal agencies to seek AI "dominance" through minimal regulation. Directed agencies to review and remove barriers to AI adoption. NIST's AI Safety Institute was renamed the Center for AI Standards and Innovation (CAISI) in June 2025, with a refocused mission on US innovation and security rather than safety/ethics evaluation.[11]
One of the first comprehensive US state AI laws. Imposes duties on developers and deployers of "high-risk" AI systems to use reasonable care to avoid algorithmic discrimination. Requires impact assessments, consumer notices, AG reporting, and transparency about AI-informed decisions. EO 14365 explicitly calls out the Colorado AI Act as potentially "forcing AI models to embed DEI" — it is the primary target of the DOJ Litigation Task Force.[12]
Despite federal pressure, the Colorado AI Act remains law until courts rule otherwise. Companies with Colorado nexus (or portfolio companies deploying high-risk AI to Colorado residents) should continue compliance planning for the June 30, 2026 effective date.
TRAIGA prohibits AI systems developed or deployed with intent to: manipulate behavior through deceptive means, assign social scores, discriminate unlawfully, infringe constitutional rights, capture biometric data without consent, or facilitate deepfakes. AG enforcement with civil penalties ($10K–$200K per violation). Includes an innovation sandbox (36 months) and NIST AI RMF compliance as an affirmative defense.[13]
Unlike Colorado, Texas uses intent-based rather than impact-based liability. This makes litigation harder to sustain but also creates documentation obligations around developer/deployer intent.
Governor Newsom signed SB 53 after vetoing the more expansive SB 1047 in September 2024. Requires developers of frontier AI models (>10^26 FLOPS) to publish safety protocols publicly and report serious safety incidents to state officials. Focuses on transparency rather than SB 1047's prescriptive safety requirements. Referenced in EO 14365 drafts as a model for acceptable state AI regulation.[14]
Requires developers of generative AI systems made available to Californians to post high-level summaries of training datasets on their websites — including whether personal information or copyrighted material is included. Applies to any person who designs, codes, produces, or substantially modifies a GenAI system released on or after January 1, 2022. Similar to GPAI transparency requirements in the EU AI Act.[15]
California Privacy Protection Agency regulations under CCPA/CPRA introduce mandate for disclosure about privacy policies and automated decision-making technology processes, risk assessments and opt-out options for automated decision-making technology (ADMT) for "significant decisions" (housing, employment, finance, healthcare). Requires mandatory cybersecurity audits for qualifying businesses and data protection risk assessments for high-risk processing. Businesses that process the personal information of California consumers are significantly affected.[16]
Would have required developers of models costing over $100M to train to create safety and security protocols, implement kill switches, conduct annual third-party audits, and allow AG enforcement for imminent critical harms. Vetoed because Newsom disagreed with using compute thresholds as the regulatory trigger rather than actual risk of harm. Influenced the design of subsequent SB 53 and the national debate about where to regulate (model level vs. use/deployment level).[17]
Regulates collection, use, and storage of biometric identifiers including fingerprints, retinal scans, voiceprints, and face geometry. Requires written informed consent before collection. Creates private right of action with statutory damages ($1,000 per negligent violation; $5,000 per intentional/reckless violation per biometric). AI voice transcription and facial recognition tools are at high risk of violating BIPA. 2024 amendment limits per-exposure damages in class actions.[18]
Regulates AI-driven hiring and promotion tools. Requires annual bias audits by independent auditors, public posting of audit summaries, and advance notice to candidates and employees. Applies when AEDTs are used to "substantially assist or replace" discretionary employment decisions. Relevant for portfolio companies' HR tech stacks and vendor due diligence.[19]
Broad privacy obligations focused on "consumer health data" — broadly defined to include data that identifies a consumer's past, present, or future physical or mental health condition. Heightened consent requirements, geofencing restrictions, and private right of action. AI models trained on or inferring sensitive health data can trigger violations — particularly in adtech, wellness, and consumer apps.[20]
Requires disclosures when generative AI interacts with consumers in regulated industries. Clarifies that businesses remain liable under consumer protection law for AI outputs. 2025 amendment (HB 452) adds an affirmative defense for providers maintaining documented AI governance measures — one of the first state laws to create a compliance-incentive safe harbor structure.[21]
Expands right-of-publicity protections to address AI-based imitation of individuals' voices and likenesses without authorization. Creates civil cause of action. Relevant for portfolio media companies, marketing tools, voice AI products, and synthetic media platforms — increases IP/rights clearance importance for any training data or output involving identifiable voices.[22]
Comprehensive AI law across the European Union, which promotes a risk-based framework to address prohibited practices, general purpose artificial intelligence model transparency, and strict compliance requirements for high-risk AI systems. Extra-territorial effect: applies to any AI services or products provided to the EU market. Penalties up to €35M or 7% of global annual turnover for the most serious violations.[23]
Already in effect: AI literacy obligations, prohibited system bans. Aug 2025: GPAI transparency. Aug 2026: High-risk system compliance. Note: EU Digital Omnibus (Nov 2025) proposes to streamline certain obligations — monitor for amendments.
Core data privacy framework governing AI systems that process personal data. GDPR applies alongside the EU AI Act — wherever an AI system involves personal data, both regulations apply. Key AI-specific requirements: DPIAs for high-risk automated processing, transparency obligations, Article 22 restrictions on solely automated decisions with significant effects, lawful basis for training data (consent or legitimate interests), and data minimisation/storage limitation. EDPB Opinion 28/2024 provides guidance on AI model training compliance.[24]
€1.2B in fines in 2024 alone. AI-specific enforcement accelerating: Clearview AI fined €30.5M (Dutch DPA, 2024) for facial recognition GDPR violations; LinkedIn fined €310M (Irish DPC, Oct 2024) for behavioral profiling without consent. Regulators increasingly using GDPR as interim guardrail for AI regulation.
EU Commission's Digital Omnibus proposal (Nov 19, 2025) would clarify that legitimate interests can serve as a lawful basis for AI training data processing, consolidate breach reporting, and reduce SME compliance burdens. Still subject to Council/Parliament approval — expected 2027-2028 implementation.
Operational resilience requirements for EU financial entities and their critical ICT third-party service providers. Covers AI systems as ICT systems: risk management, incident classification and reporting, resilience testing (including penetration testing for significant firms), and ICT third-party risk management including mandatory contractual requirements. Overlap with Reg S-P safeguards and NIS2 for firms with EU operations.[25]
UK has no standalone AI Act; regulation flows through sector-specific regulators (ICO, FCA, CMA, Ofcom, MHRA) applying five cross-cutting principles: safety/security/robustness; transparency/explainability; fairness; accountability/governance; contestability/redress. The AI Safety Institute was renamed the AI Security Institute in February 2025, signalling a shift toward security and national-security risks. A comprehensive AI Bill was delayed in June 2025, with a more targeted approach now expected in 2026 parliamentary session.[26]
UK GDPR applies to AI systems processing UK residents' data. EU AI Act applies where UK-developed AI outputs are used in the EU. ICO issued AI-in-recruitment audit outcomes and GDPR-meets-GenAI guidance in 2024-2025.
Applies to providers of public-facing generative AI services in China. Obligations include: lawful training data use, personal information protection compliance, content management (prohibited categories), algorithmic transparency labeling, and filings with Cyberspace Administration of China. Applies to foreign providers whose services reach Chinese users. Relevant for portfolio companies offering GenAI products into China or using China-based distribution partners.[27]
First legally binding international AI treaty. Requires signatory nations to take measures ensuring AI lifecycle activities align with human rights, democracy, and rule of law. Signatories include EU member states, US, UK, and others. Influences national legislative frameworks and public procurement standards. Not self-executing but signals global convergence direction.[28]
The European Commission's Digital Omnibus proposes simultaneous amendments to the GDPR, EU AI Act, Data Act, and ePrivacy Directive. AI Act changes would include postponement of some high-risk AI obligations and streamlined conformity assessment procedures. GDPR changes would clarify legitimate interests as a lawful basis for AI training data processing and consolidate breach reporting. Subject to Council and Parliament legislative process — expect material changes before enactment.[29]
Actions involving cyber disclosure failures, AI-washing, privacy violations & information barriers
All enforcement data from NYU SEED Law Database[30]
| Entity⇅⇅ | Year⇅⇅ | Violation / Theme⇅⇅ | Penalty▲⇅ |
|---|---|---|---|
|
Presto Automation Inc.
AI-Washing
|
2025 | Misleading investors about automation capabilities of "Presto Voice" AI product — required human agents for a significant majority of orders.[45] | Cease & desist |
|
Global Predictions Inc.
AI-Washing
|
2024 | False and misleading statements claiming to be the "first AI-powered financial advisor" and use of "expert AI".[44] | $175K |
|
Delphia (USA) Inc.
AI-Washing
|
2024 | Falsely claimed to use proprietary AI to incorporate client personal data into investment decisions.[43] | $225K |
|
First American Financial Corp.
Cyber disclosure
|
2021 | Disclosure controls failures tied to a vulnerability exposing ~800M pages of sensitive financial records.[42] | $487,616 |
|
Mimecast
Cyber disclosure
|
2024 | Misleading statements about the nature and impact of a cybersecurity incident involving source code access.[41] | $990K |
|
Check Point Software Technologies
Cyber disclosure
|
2024 | Materially misleading risk disclosures that did not accurately describe cybersecurity incidents experienced.[40] | $995K |
|
Pearson plc
Cyber disclosure
|
2021 | Misleading statements about a cyber intrusion involving theft of millions of student records and login credentials.[39] | $1M |
|
Avaya Holdings
Cyber disclosure
|
2024 | Misleading statements about scope and impact of a significant cybersecurity incident.[38] | $1M |
|
Virtu Financial / Virtu Americas
MNPI / Info Barriers
|
2025 | Misleading statements about information barriers between market-making and data analytics businesses; failure to establish adequate MNPI controls.[37] | $2.5M |
|
Blackbaud, Inc.
Cyber disclosure
|
2023 | Misleading disclosures regarding a ransomware attack impacting donors and customers' personal data.[36] | $3M |
|
Flagstar Bancorp
Privacy / PII
|
2024 | Misleading statements about scope of cyberattack and exfiltration of customers' PII.[35] | $3.55M |
|
Unisys Corp.
Cyber disclosure
|
2024 | Materially misleading risk disclosures that downplayed actual cybersecurity incidents experienced by the company.[34] | $4M |
|
Intercontinental Exchange (ICE)
Reg SCI
|
2024 | Failure to timely notify SEC of a systems intrusion affecting an ICE subsidiary as required by Regulation SCI.[33] | $10M |
|
Altaba Inc. (formerly Yahoo!)
Cyber disclosure
|
2018 | Failure to disclose a massive data breach impacting ~500M user accounts in a timely manner.[31] | $35M |
|
Morgan Stanley Smith Barney LLC
Reg S-P
|
2022 | Failure to protect ~15M customers' PII; deficient data disposal practices for decommissioned hardware.[32] | $35M |
|
Theme Reg S-P safeguards; secure data disposal obligations AI Relevance AI tools that touch or retain customer PII (including training data, embeddings, logs) require equally rigorous retention/deletion protocols and vendor controls Key Controls Data minimization; encryption at rest/transit; vendor diligence; verified deletion procedures with audit trail Concurrent Action OCC simultaneously fined Morgan Stanley $60M — total regulatory exposure $95M+ Source |
|||
Description: AI systems for identifying potential acquisition targets through pattern recognition across company financials, market trends, news articles, SEC filings, job postings, web traffic signals, pricing data, and relationship intelligence platforms. AI models can identify high-growth companies matching investment criteria — often before formal marketing processes begin — using hiring velocity, product traction, competitive position, and leadership network mapping.
Expands sourcing capacity continuously across public and third-party sectors, and has the potential to surface opportunities earlier in the cycle. Identifies introduction paths via relationship mapping. Can generate prospects against proprietary investment criteria at scale.
MNPI exposure: Sourcing tools that access data streams containing material non-public information create Rule 10b-5 exposure. Data provenance: Third-party datasets may contain scraped personal data triggering data privacy laws. Model bias: Scoring models may systematically de-prioritize certain geographies/demographics. Competitive intelligence: Scraping/accessing data in violation of ToS or CFAA creates separate liability.
Description: AI systems conducting financial, legal, operational, and regulatory due diligence on potential acquisitions. Includes automated financial statement analysis, contract clause extraction and obligation mapping, compliance gap scanning, and risk scoring. LLM-based Q&A over virtual data rooms is increasingly common, as is AI-assisted projection modeling and scenario analysis.
Accelerated review of large data rooms. Abilities for clause extraction and variance detection across hundreds of contracts. Rapid scenario modeling for financial projections. Earlier identification of material issues before closing.
Hallucinations: AI summaries may state inaccurate contract terms — missed material issues can create post-closing liability. Prompt injection: Malicious language embedded in uploaded documents can manipulate AI analysis. Data leakage: Confidential target data sent to third-party AI vendors may violate NDA terms or trigger data protection obligations. Over-reliance: Fiduciary exposure if investment committee relies on AI summaries without underlying document review.
Description: Potential for AI-powered aggregation of portfolio KPIs and operational metrics into standardized dashboards, which can use anomaly detection models to flag performance deviations. Natural language processing tools monitor news, reviews, and regulatory filings for updates. Cross-portfolio analytics use consistent KPI definitions and provide benchmarking against comparable companies.
Earlier warning signals for underperformance. Standardized cross-portfolio analytics that removes company-by-company inconsistencies in valuation methodology. Automated variance alerts can surface issues before quarterly board meetings. Operational efficiency benchmarking.
MNPI exposure: Centralized monitoring data constitutes MNPI — access controls and information barriers are essential. Data breach: Aggregated portfolio data is a high-value target. Bias in assessment: AI models may unfairly score management performance. Wall breaks: If monitoring data flows to deal teams, information barriers may be breached.
Description: Generative AI to draft quarterly investor letters, portfolio company updates, Q&A responses for LP requests, performance explanations, ESG/impact reporting, and first drafts of regulatory filing narratives (e.g., Form ADV, PFRD filings). AI systems can also draft responses to LP due diligence questionnaires (DDQs) by synthesizing information from internal databases.
Significant time savings on repetitive quarterly reporting. Consistent tone and format across LP communications. Faster response times to LP information requests. Cross-portfolio consistency in ESG metric reporting.
Hallucinated misstatements: AI-generated financial or performance data that differs from the books creates securities fraud exposure. Selective disclosure / MNPI: AI may inadvertently include restricted information in LP communications. Filing accuracy: Inadequate human review of AI-drafted regulatory filings. Books and records: AI-assisted communications may not be captured in required recordkeeping systems.
Description: AI identifying and quantifying operational improvement opportunities across portfolio companies: cost reduction via vendor consolidation, dynamic pricing optimization, supply chain analytics, workforce productivity modeling, and customer churn prediction. AI benchmarking tools compare portfolio company efficiency against peers and identify operational gaps. GP operating partners increasingly use proprietary AI tools deployed across the portfolio.
Disparate impact: Workforce optimization AI may have discriminatory impact on protected classes — EEOC exposure, state AI employment laws (NYC LL144, etc.). Employee privacy: Analytics on workforce productivity may trigger state privacy laws. Short-term optimization: AI over-optimizes for margin at expense of longer-term sustainability or employee welfare. Union/contractual obligations: AI-driven workforce changes may violate existing agreements.
Description: AI transcription for client meetings, supervision, compliance monitoring, and call center operations. Voice AI systems may generate voiceprints as a byproduct — potentially creating biometric identifiers regulated under BIPA (Illinois) and similar laws. Use cases include: automated meeting notes, compliance surveillance of advisor calls, sentiment analysis of client interactions, and voice-activated trading systems.
Biometric / BIPA: Voiceprints created without written informed consent trigger BIPA exposure with private right of action and liquidated damages. Cross-border transfer: Recordings and transcripts sent to foreign AI vendors may violate DOJ EO 14117 bulk data rules or GDPR. Retention: Retention periods for voice data must align with BIPA's requirements and SEC recordkeeping rules. Privilege: AI generated documents or transcripts may not be considered under attorney-client privilege.
Description: LLM-based Q&A and summarization over financial, legal, and regulatory documents — contracts, SEC filings, fund documents, compliance policies, research reports. RAG (Retrieval-Augmented Generation) systems allow natural language queries over large internal document repositories. Use cases include regulatory research, contract review, policy gap analysis, and generating first drafts of disclosure documents.
MNPI wall risk: Shared document repositories/AI tools may inadvertently provide deal teams access to MNPI held by other business units. Prompt injection: Malicious content in uploaded documents can manipulate AI to exfiltrate data or provide misleading outputs. Logging gaps: AI Q&A sessions may not be captured in required books-and-records systems. Unverified summaries: Reliance on AI-generated contract summaries without underlying review creates fiduciary risk.
Description: Generative AI for marketing materials, investor updates, research publications, social media content, website copy, regulatory filing narratives, and client communications. AI may also be used to generate synthetic data for illustrations, or to create visuals and infographics for investor presentations. This includes both internal drafting tools and externally facing automated content systems.
AI-washing: Overstating AI capabilities in marketing materials — direct SEC/FTC enforcement risk (Delphia, Global Predictions, Presto). Hallucinated facts: Performance figures, fund returns, or market statistics that are fabricated by AI in communications. Inconsistent disclosures: AI-generated content may contradict prior disclosures or financial records. IP/copyright: AI-generated content incorporating copyrighted training data may create infringement exposure. Tennessee ELVIS Act/deepfakes: Synthetic voice/likeness in marketing without consent creates liability.
Description: AI-driven trading strategies, portfolio optimization algorithms, execution management, and automated investment advice platforms. Includes: quantitative factor models, reinforcement learning-based execution, robo-advisory platforms providing automated recommendations, and AI-augmented research used in discretionary investment processes. The line between "AI-assisted" and "AI-directed" investment decisions has significant regulatory implications.
Suitability / fiduciary: Automated recommendations must meet fiduciary standards — model drift over time may cause systematic suitability failures. Conflicts embedded in optimization: Objective functions may inadvertently optimize for GP economics. Supervision failures: Advisers remain responsible for AI-generated recommendations. Model decay: Trading models trained in different market regimes may behave unexpectedly.
Description: AI for transaction monitoring, sanctions screening, suspicious activity detection, network relationship analysis, and alert triage. NLP tools scan unstructured data (news, filings) for adverse media. ML models score transactions, counterparties, and relationships for risk. Alert management systems use AI to prioritize human review queues. Customer risk rating models use ML features for dynamic scoring.
OCC, FinCEN, and Fed guidance increasingly expects robust governance of AI-driven AML models, including validation, explainability, bias testing, and change management procedures analogous to model risk management expectations.
False negatives: AI model misses material suspicious activity — direct BSA enforcement risk. Explainability: Suspicious Activity Reporting narratives generated by AI must be defensible in regulatory examination. Feedback-loop bias: Training data reflecting historical enforcement patterns may perpetuate demographic bias. Model validation: Insufficient independent validation creates supervisory findings.
Description: Financial firms must defend against AI-specific attacks and disclose appropriately. Attack classes include: prompt injection (instructions embedded in user inputs or documents to override system instructions), data poisoning (manipulation of training data to alter model behavior), model inversion/extraction (recovering training data or model weights), adversarial examples (inputs designed to fool classifiers), agent/tool misuse (autonomous agents accessing unauthorized systems), and retrieval attacks (manipulating retrieval-augmented generation systems to return sensitive data).
AI security incidents can trigger SEC cybersecurity disclosure obligations. Reg S-P safeguards apply to systems that access customer data. DORA (EU) operational resilience requirements apply. Input validation, output filtering, adversarial testing, access logging, and monitoring are control expectations — their absence will be examined post-incident.
Description: AI used to support or automate compliance activities, including: surveillance alert triage and prioritization, marketing review (checking for misleading statements), KYC/AML alert investigation, evidence collection and case management, regulatory change tracking and policy mapping, drafting regulatory filings and responses, and conducting compliance risk assessments.
Alert clustering and prioritization, automated electronic communication surveillance, policy-to-control mapping, KYC/AML monitoring, horizon scanning for regulatory changes.
(1) False negatives / missed escalations: AI-driven triage that misclassifies a material issue creates direct supervisory liability. (2) Hallucinated compliance violations: AI can produce false positives or hallucinate compliance violations. (3) Automation bias: Compliance staff over-relying on AI recommendations without independent judgment. (4) Vendor risk: AI vendors with access to examination materials, SAR data, or investigation files create privilege and confidentiality concerns. (5) Auditability: AI-assisted compliance decisions may not be adequately documented for examination purposes.
Description: AI used to conduct web based research and collect data has a high risk of breaking alternative data, data privacy protection, or webscraping protocols. Due to risks of potential receipt of MNPI and misuse of customer PII, many financial services firms have strict restrictions on data collection procedures.
Analyze large datasets of text, extract data from websites, run automated schedules to collect new data or update existing data sets, ensure accuracy of data.
(1) Regulatory risk: Data privacy, webscraping violations (2) Bias: Biased AI algorithms can lead to collection of biased data. (3) Errors: AI data collection tools have risks of erronous data interpretation or AI hallucination.
Description: AI agents that autonomously take actions across systems — retrieving data, calling APIs, executing code, sending communications, and initiating workflows. Multi-agent architectures involve multiple AI models collaborating: one orchestrator delegates tasks to specialized sub-agents. These are emerging across deal origination (autonomous market scanning agents), compliance (autonomous investigation agents), and portfolio management (autonomous reporting agents).
Agent chaining dramatically increases attack surface: a compromised or misconfigured agent can instruct other agents to access restricted data, ignore safety constraints, or take unauthorized actions. Inter-agent messages may not be logged or monitored by existing compliance systems. Each "hop" in an agent chain is an opportunity for privilege escalation or instruction override. AI agents that leverage tools to collect information are at risk of tools leaking information or providing inaccurate information.
(1) MNPI wall erosion: Agents with cross-system access can inadvertently traverse information barriers. (2) Privilege escalation: Agents may access systems or data beyond their intended scope. (3) Jailbreak propagation: Successful prompt injection of one agent can cascade to others. (4) Monitoring gaps: Inter-agent communications and tool calls often fall outside existing supervision architectures. (5) Accountability gaps: Unclear human accountability for autonomous agent decisions. (6) Regulatory novelty: Regulators have not yet issued comprehensive guidance on agentic AI accountability.
Regulatory guidance, enforcement releases, rules, and amendments. Primary source for all SEC enforcement cases and Rule S-P, Rule 10b-5, and disclosure-related content.
Visit sec.gov →AI RMF 1.0 (Jan 2023), CSF 2.0 (Feb 2024), AI Safety Institute resources (now CAISI).
Visit nist.gov/artificial-intelligence →Executive orders on AI (EO 14179, EO 14365), OMB Memoranda (M-24-10), America's AI Action Plan, and related policy artifacts.
Visit whitehouse.gov →Official publication for US federal rules. Primary source for DOJ EO 14117 final rule, Reg S-P amendments, and SEC cybersecurity disclosure rules.
Visit federalregister.gov →Consumer protection enforcement, AI marketing guidance, and Section 5 policy statements.
Visit ftc.gov →AI legislation tracking portal from Brown University's Center for Technological Responsibility, Reimagination and Redesign (CNTR). Database of 5,000+ AI-related bills across all 50 states and federal level, analyzed across 6 governance dimensions. Primary source for all State AI Regulation Map data.
Visit cntr-aisle.org →Official EU AI Act text, implementation timelines, Digital Omnibus proposals (Nov 2025), and AI Office resources.
Visit commission.europa.eu →GDPR enforcement decisions, Opinion 28/2024 on AI model training, and Coordinated Enforcement Framework (CEF) actions.
Visit edpb.europa.eu →Framework Convention on AI and Human Rights (Sept 2024) and related materials.
Visit coe.int →UK AI regulation White Paper, AI Opportunities Action Plan (Jan 2025), and AI Security Institute resources.
Visit DSIT →Searchable securities enforcement action database. Used for comprehensive SEC enforcement case research.
Visit seed.law.nyu.edu →Annual GDPR enforcement data, breach notification statistics, and AI enforcement trends.
Visit dlapiper.com →478 bills. A06453 Frontier AI Models regulation; S07263 chatbot impersonation liability. Comprehensive bills spanning hiring, consumer protection, transparency, and automated decision-making. Highest AI bill volume in the US.
346 bills. Biometric Information Privacy Act (BIPA) with broad AI implications. Therapy resources oversight. AI governance bills across multiple sectors including employment, healthcare, and education.
311 bills. SB53 Frontier AI Models; AB 2930 Automated Decision Systems; SB 1047 Safe Frontier AI Innovation Act. Broad legislation spanning AI transparency, automated systems, worker protections, and health AI.
112 bills. HB149 comprehensive regulation of AI systems with civil penalties. Broad scope spanning multiple sectors with an AI regulatory framework and enforcement mechanisms.
40 bills. Colorado AI Act SB 24-205 — first comprehensive US AI law. Consumer Protections for AI; Conversational AI Service Operator Requirements; Intimate Digital Depictions protections.
221 bills. AI accountability and transparency legislation. Consumer protection in AI interactions. Algorithmic rent setting prohibition. Multiple AI study orders across governance areas.
218 bills. AI use notification requirements. School AI instruction mandates. Expedited AI approval processes. Automated decision-making regulation across multiple sectors.
152 bills. HB2094 high-risk AI development, deployment, and use with civil penalties. AI Developer Act. Broad AI governance provisions spanning multiple policy areas.
139 bills. Ethical AI development and deployment regulations. Automated Decision Tools Act. AI Accountability Act. Therapy AI oversight provisions.
119 bills. AI Policies and Procedures Act. Consumer reporting algorithmic systems. Health insurance AI evaluation requirements. Child exploitation AI protections.
110 bills. RAISE Act (AI safety and disclosure requirements). GenAI in official records prohibition. AI worker displacement protections. Employee notice requirements for AI impacts.
110 bills. Responsible Deployment of AI Systems Act. AI Council. AI Regulatory Sandbox Program. AI Workforce Development Program. Chatbot minor protections.
109 bills. Algorithmic discrimination protection. Government AI procurement guidelines. Automated decision-making systems regulation across hiring and public services.
84 bills. AI Amendments with broad scope. Surveillance and Investigatory Technology Amendments. Data sharing provisions. AI governance across multiple areas.
40 bills. SB00002 Comprehensive AI Act. Automated Decision-Making and Personal Data Privacy. Automated Decision Systems Protections for Employees. AI workforce study.
58 bills. Regulating developers and deployers of automated decision systems. AI defenses in civil actions. AI and elections regulation. Child exploitation materials AI protections.
167 bills. AI protection of minors. Civil rights AI applicability. Multiple AI study resolutions across government agencies.
140 bills. AI Bill of Rights. Law enforcement AI provisions. CyberBay initiative. General appropriations with AI provisions.
122 bills. AI consumer protection. AI in education. Health insurance AI provisions. Medicaid fraud AI provisions.
111 bills. Software accountability for education. Data center construction moratorium. AI licensing board restrictions. Constitutional AI amendment petition.
93 bills. Multiple AI-related code amendments spanning education, consumer protection, and technology regulation across various sectors.
92 bills. AI definition legislation. AI sexual assault criminal offense. AI in mental and behavioral health prohibition. Higher education AI provisions.
89 bills. AI workforce report. AI training disclosure. Federal AI law opposition resolution. Task Force on Conversational AI.
86 bills. AI systems in state agencies. Chatbot deployer requirements. Utilization review AI. AI output ownership. Computer science education AI.
77 bills. Election deepfake criminal offense. Insurance AI decision regulation. LEGACY Act (Likeness/Expression/GenAI). Surveillance pricing act.
73 bills. Deep fake media distribution act. Student AI accountability requirements. Advanced baseload energy AI provisions.
67 bills. Election deepfake prohibition. Biometric identifiers commercial use prohibition. AI nursing tasks pilot. AI content verification.
60 bills. AI-assisted review of state agency rules. Deepfake media criminal and civil penalties. Various AI-adjacent legislative activity.
60 bills. Limit government AI use. Revise AI laws. Name/voice/likeness AI protections. Health insurance AI regulation.
59 bills. Sexual exploitation prevention. Algorithmic rent fixing prohibition. Healthcare cost AI provisions. Rate payer protection.
54 bills. AI Synthetic Content Accountability Act. AI transparency in government. AI ethics as school elective.
50 bills. Falsely depicting individuals civil action. Employment automated decision-making. Companion chatbot minors prohibition. AI energy authority.
47 bills. Pricing algorithm regulation. AI Study Commission. Frontier Technologies Commission. Data center study.
44 bills. Protection of information act. Mental health chatbots regulation. Office of Public Defense AI. Health command establishment.
42 bills. AI insurance fairness act. Surveillance pricing discrimination. AI in campaign materials disclosure. Industrial AI study.
37 bills. Age-appropriate design code act. AI in medical decisions transparency act. AI sexual exploitation of children protections.
40 bills. AI in health claims denial regulation. Insurer claims AI auditing. Social media AI for minors.
35 bills. Disclosing explicit synthetic media. AI review of administrative rules. Campaign finance AI transparency.
34 bills. AI in health care requirements. Education AI provisions. Department of Corrections AI use.
33 bills. AI companions regulation. General AI regulation. Criminal offenses AI in work. Elections AI provisions.
32 bills. Foreign adversary protections. Health claims AI. Virtual currency kiosks. Mostly tangential AI references.
26 bills. AI Consumer Protection Act. Conversational AI Safety Act. AI Risk Management Transparency Act.
23 bills. AI oversight regulation. AI technologies regulation. Personal data privacy from websites. State AI use exceptions.
22 bills. Synthetic media in campaign advertising. AI chatbot access for children. AI in mental health. Technology in classrooms study.
21 bills. Transportation AI study. Licensure retention study. Research technology park grants. Law enforcement robot use.
20 bills. SB2 AI/Deepfakes/Cybersecurity/Data Transfers. SB33 Synthetic Media Elections. AI Legislative Task Force. AI disclosure in campaigns.
17 bills. Chatbot regulation for minors. AI in health insurance. Consumer chatbot notice requirements.
16 bills. SB258 Digital Responsibility Safety and Trust Act. AI-generated content ownership. Deepfake criminal offenses.
15 bills. Deepfake protection for kids. Ban on government social scoring with AI. K-12 public school AI provisions.
9 bills. AI Commission amendments. Large energy use facilities. Minimal AI-specific legislative activity.