AI Washing Is the New Greenwashing: How the SEC’s Emerging Technologies Unit Is Rewriting Compliance

Companies are racing to showcase their artificial intelligence capabilities — but not all AI claims are created equal. Regulators are increasingly concerned that some firms are overstating or misrepresenting the use and impact of AI, a practice now dubbed “AI washing.” With the U.S. Securities and Exchange Commission’s Emerging Technologies Unit sharpening its focus on these risks, the compliance landscape is rapidly changing. Understanding AI washing, and how to avoid it, is becoming a critical task for legal, compliance, and marketing teams alike.

From Greenwashing to AI Washing: A New Era of Regulatory Scrutiny

For years, regulators and investors have grappled with “greenwashing” — the practice of exaggerating or falsely claiming environmental or ESG credentials. As artificial intelligence becomes the latest corporate buzzword, a similar pattern is emerging: “AI washing.” Companies highlight AI-driven products, strategies, or tools in ways that may not accurately reflect reality, creating potential for investor confusion and, crucially, regulatory action.

In the securities context, any claim that may influence investment decisions is subject to strict truth-in-advertising and disclosure obligations. This is where the U.S. Securities and Exchange Commission (SEC) comes in. Its Emerging Technologies Unit is training its attention on misleading AI narratives, treating them much like overstated ESG claims. The message is clear: when you talk about AI in securities offerings, disclosures, or marketing, you must be able to back it up.

What Is AI Washing?

AI washing refers to overstating, misrepresenting, or opportunistically branding products, strategies, or processes as “AI-powered” or “driven by artificial intelligence” when those claims are inaccurate, incomplete, or materially misleading. While there is no single statutory definition, the concept is emerging as a parallel to greenwashing — but for technology and data-driven claims.

Common Forms of AI Washing

AI washing shows up in several recurring patterns across the financial and corporate landscape, from opportunistically branding ordinary tools as “AI-powered” to overstating how central AI actually is to a product, strategy, or process.

Where these claims intersect with securities offerings, investment products, or public-company disclosures, they can rise to the level of material misrepresentation, triggering securities-law risks.

The SEC’s Emerging Technologies Unit: Why It Matters

The SEC’s Emerging Technologies Unit (formally the Cyber and Emerging Technologies Unit, housed within the Division of Enforcement) focuses on how new and complex technologies impact markets, investors, and the integrity of disclosures. AI is now central to that mandate.

Focus Areas for Enforcement

While specific enforcement actions will evolve, several priority themes are already evident when regulators look at AI claims, from overstated performance and predictive power to understated model risk and mischaracterized human oversight.

In effect, the Emerging Technologies Unit acts as a bridge between traditional securities-enforcement principles and the rapidly evolving world of AI tools, algorithms, and data-intensive business models.

Why AI Washing Is a Securities-Law Problem

AI washing is not just a marketing concern; in the securities world, it is a disclosure and antifraud issue. Under long‑standing securities laws, issuers and registrants must not make materially false or misleading statements, or omit material facts needed to prevent statements from being misleading.

Materiality and Investor Reliance

Regulators will ask whether a reasonable investor might consider AI claims important in making an investment decision. Given current market enthusiasm for AI — and the valuation premiums often associated with AI narratives — the answer is increasingly yes.

These issues tie directly into antifraud provisions, misstatement liability, and selective disclosure concerns.

Parallels to Greenwashing

AI washing follows a playbook familiar from ESG and sustainability claims: appealing labels, thin substantiation, and messaging that emphasizes benefits while downplaying limitations and risks.

Just as greenwashing enforcement has matured, one can expect a more structured and assertive regulatory stance around AI claims, especially in financial markets.

Where AI Claims Typically Appear in the Securities Context

To manage risk, organizations must first identify where AI narratives surface in their materials. These touchpoints often include:

1. Public Filings and Disclosure Documents

AI features and strategies increasingly appear throughout issuers’ public filings and other disclosure documents.

2. Investment Product Marketing

Asset managers and broker-dealers have embraced AI themes in the marketing of their investment products and strategies.

3. Corporate Communications and Investor Relations

AI narratives may also be woven into broader corporate communications and investor relations materials.

Each of these channels can fall within the SEC’s line of sight, especially when it shapes the market’s perception of value, growth, or risk.

Key Risk Areas in AI-Related Disclosures

Not every mention of AI creates enforcement risk, but several themes are particularly sensitive under the lens of the Emerging Technologies Unit.

Overpromising Performance and Predictive Power

AI is often sold as a way to spot trends, forecast markets, or outperform benchmarks. When firms describe these capabilities, regulators will ask whether the promised performance and predictive power are actually supported by documented testing and real-world results.

Downplaying Model Risk, Bias, and Data Limitations

AI products can fail in unexpected ways. If messaging presents AI as inherently more objective, accurate, or safe than human judgment, disclosures should also explain the model’s limitations, the potential for bias, and the quality and constraints of the underlying data.

Mischaracterizing Human Oversight

Investors often care about whether decisions are automated, human‑in‑the‑loop, or subject to independent review. Misleading impressions around the level of human supervision, especially for trading, credit, or risk decisions, can raise questions about governance and control.

Building a Compliance Framework for AI Narratives

To respond to the emerging enforcement landscape, firms should move beyond ad hoc review and formalize how they assess AI-related statements.

Governance: Who Owns AI Claims?

Effective oversight requires clear roles and accountability: it should be clear who drafts AI-related claims, who verifies them against the underlying technology, and who gives final approval.

Substantiation and Evidence

Just as with financial projections, AI claims should be supported by concrete evidence, such as documented testing, performance analysis, and records of how the technology is actually used.

Firms should be prepared to show regulators how they arrived at their AI characterizations, not just rely on high‑level descriptions.
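
One way to make that evidence trail concrete is a simple claims register that ties each external AI statement to its supporting documentation and an accountable owner. The sketch below is illustrative only; the field names, example entries, and the substantiation rule are assumptions rather than a prescribed format.

```python
# Minimal sketch of an AI-claims substantiation register.
# Field names and example entries are illustrative assumptions, not a required schema.
from dataclasses import dataclass, field


@dataclass
class AIClaim:
    claim_text: str            # the external statement as published
    source_document: str       # filing, deck, or web page where it appears
    technical_owner: str       # person accountable for accuracy
    evidence: list = field(default_factory=list)  # test reports, model docs, vendor contracts

    def is_substantiated(self) -> bool:
        """Treat a claim as substantiated only if it has documented evidence and a named owner."""
        return bool(self.evidence) and bool(self.technical_owner)


def unsubstantiated(claims):
    """Return the claims that lack documented support and need remediation."""
    return [c for c in claims if not c.is_substantiated()]


if __name__ == "__main__":
    register = [
        AIClaim(
            claim_text="Our platform uses machine learning to flag anomalous trades.",
            source_document="annual report (hypothetical)",
            technical_owner="Head of Surveillance Engineering",
            evidence=["model validation report", "backtesting summary"],
        ),
        AIClaim(
            claim_text="AI-driven insights power every client recommendation.",
            source_document="marketing deck (hypothetical)",
            technical_owner="",
            evidence=[],
        ),
    ]
    for claim in unsubstantiated(register):
        print("Needs substantiation:", claim.claim_text)
```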

Practical Tip: AI Claim Vetting Checklist

Before approving any AI-related disclosure or marketing piece, verify:

  1. You can describe what the AI does in plain, technically accurate language.
  2. Performance statements are backed by documented testing or analysis.
  3. Limitations, assumptions, and risks are disclosed proportionately.
  4. References to “AI,” “machine learning,” or “automation” are consistent with internal documentation and vendor contracts.
  5. Legal and compliance have reviewed the final language.
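
For teams that want to enforce this checklist in an approval workflow, a minimal sketch follows. The review structure and function names are hypothetical; only the checklist items themselves come from the tip above.

```python
# Minimal sketch of the five-point vetting checklist as an approval gate.
# The dictionary-based review and the approve() function are illustrative assumptions.
CHECKLIST = [
    "Plain-language description of what the AI actually does",
    "Performance statements backed by documented testing or analysis",
    "Limitations, assumptions, and risks disclosed proportionately",
    "AI/ML/automation references consistent with internal docs and vendor contracts",
    "Legal and compliance have reviewed the final language",
]


def approve(review: dict) -> bool:
    """Approve a disclosure only if every checklist item is affirmatively confirmed."""
    missing = [item for item in CHECKLIST if not review.get(item, False)]
    for item in missing:
        print("Blocked, unresolved item:", item)
    return not missing


# Example: a draft that has not yet cleared legal review is blocked.
draft_review = {item: True for item in CHECKLIST}
draft_review["Legal and compliance have reviewed the final language"] = False
print("Approved:", approve(draft_review))
```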

Practical Steps to Avoid AI Washing in Securities Materials

Organizations can structure their approach in concrete, repeatable steps.

  1. Inventory Existing AI Claims: Collect all references to AI across filings, prospectuses, websites, marketing decks, social media, and investor communications, and map where and how AI is mentioned (a minimal scanning sketch follows this list).
  2. Compare Claims to Reality: For each claim, consult with technical and product teams to determine whether the description matches the actual functionality, scope, and maturity of the AI.
  3. Rate Materiality: Identify claims most likely to influence investor decisions (e.g., those featured prominently in offering materials or IR presentations) and prioritize them for review.
  4. Strengthen Risk Disclosures: Update filings and product literature to address AI‑specific risks, including model error, data quality, operational dependencies, and governance.
  5. Standardize Language: Develop internal guidance on how to describe AI tools accurately (e.g., distinguishing between pilots, limited‑scope automation, and core decision engines).
  6. Enhance Review Processes: Incorporate AI‑specific questions into marketing and disclosure approval workflows, including sign‑off by a technical owner for accuracy.
  7. Train Key Teams: Educate marketing, sales, investor relations, and product leads on the regulatory expectations around AI claims and how to spot risky language.
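
As a starting point for step 1, the following sketch scans a folder of exported plain-text documents for AI-related terms and records each mention. The folder path, term list, and file format are assumptions; a real inventory would also need to cover PDFs, slide decks, and live web pages.

```python
# Minimal sketch of an AI-claims inventory scan over plain-text exports.
# The "disclosures/" folder and the term list are illustrative assumptions.
import re
from pathlib import Path

AI_TERMS = re.compile(
    r"artificial intelligence|machine learning|deep learning|\bAI\b",
    re.IGNORECASE,
)


def inventory_ai_claims(folder: str) -> dict:
    """Map each document to the lines that mention AI-related terms."""
    results = {}
    for path in Path(folder).glob("**/*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        hits = [
            (lineno, line.strip())
            for lineno, line in enumerate(text.splitlines(), start=1)
            if AI_TERMS.search(line)
        ]
        if hits:
            results[str(path)] = hits
    return results


if __name__ == "__main__":
    for doc, mentions in inventory_ai_claims("disclosures/").items():
        print(doc)
        for lineno, line in mentions:
            print(f"  line {lineno}: {line}")
```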

Coordinating Legal, Compliance, and Technical Teams

One challenge with AI washing is the gap between those who build systems and those who describe them externally. Bridging this divide is essential for credible disclosures.

Creating a Shared Vocabulary

Technical teams may use nuanced terminology (e.g., supervised vs. unsupervised learning, feature engineering, model drift) that can be lost in translation. Legal and IR teams need a simplified but accurate shared vocabulary they can apply consistently across disclosures and marketing materials.

Ongoing Information Flow

AI systems evolve over time. Governance should ensure that external descriptions are revisited and updated as models, data sources, and use cases change, so that yesterday’s accurate claim does not quietly become today’s misstatement.

The contrast between an AI washing approach and a compliance-oriented approach can be summarized as follows:

Aspect | AI Washing Approach | Compliance-Oriented Approach
Use of terminology | Broad use of “AI” for any analytics or automation | Careful distinction between AI, rules-based tools, and basic analytics
Performance claims | Highlights best-case scenarios with minimal context | Grounded in testing data; clearly framed with caveats and assumptions
Risk disclosure | Focuses on benefits; AI framed as inherently superior | Balanced view of benefits and risks, including bias and model failure
Governance | Ad hoc approval; limited technical involvement | Cross-functional review with documented controls and accountability
Investor impact | Short-term marketing appeal, long-term enforcement risk | Credible, sustainable narratives aligned with regulatory expectations

Vendor and Third‑Party AI Tools: Hidden Compliance Traps

Many financial institutions and issuers deploy third‑party AI tools — for trading signals, customer analytics, compliance monitoring, or credit assessment. Relying on vendors does not absolve firms of disclosure obligations.

Due Diligence on Vendors

Before touting vendor‑supplied AI capabilities in offering or marketing materials, firms should conduct meaningful due diligence on what those tools actually do, how they perform, and how the vendor maintains and updates them.

Accurate Attribution and Description

Communications should be clear about whether AI tools are proprietary or third‑party, and how central they are to the firm’s value proposition. Overstating “in‑house” innovation or implying deeper integration than actually exists can be particularly sensitive in an enforcement investigation.

Preparing for a Proactive Regulatory Environment

As regulators gain experience with AI-focused investigations, expectations will likely become more specific. Firms that get ahead of the curve by tightening AI narratives and controls can reduce enforcement risk and build investor trust.

Anticipating Future Developments

While details will vary, organizations should anticipate that expectations around AI disclosures will become more specific, more demanding, and more consistently enforced.

Embedding AI‑specific review into core disclosure and product‑development processes is likely to become a baseline expectation, not a differentiator.

Final Thoughts

AI washing is emerging as the technological cousin of greenwashing — persuasive in the short term but increasingly risky in a market where regulators and investors are asking tougher questions. The SEC’s Emerging Technologies Unit is a clear signal that AI narratives in securities offerings and public disclosures will not be treated as harmless marketing puffery when they cross into the realm of material misrepresentation.

Firms that invest now in accurate descriptions, robust evidence, and disciplined governance for AI claims will be better positioned as enforcement and expectations evolve. The path forward is not to stop talking about AI, but to talk about it with precision, balance, and a clear understanding of how securities laws apply to emerging technologies.

Editorial note: This article provides general information on evolving regulatory expectations around AI-related disclosures and should not be taken as legal advice. For more detailed analysis and context, see the original discussion by McMillan LLP at https://mcmillan.ca.