AI Washing Is the New Greenwashing: How the SEC’s Emerging Technologies Unit Is Rewriting Compliance
Companies are racing to showcase their artificial intelligence capabilities — but not all AI claims are created equal. Regulators are increasingly concerned that some firms are overstating or misrepresenting the use and impact of AI, a practice now dubbed “AI washing.” With the U.S. Securities and Exchange Commission’s Emerging Technologies Unit sharpening its focus on these risks, the compliance landscape is rapidly changing. Understanding AI washing, and how to avoid it, is becoming a critical task for legal, compliance, and marketing teams alike.
From Greenwashing to AI Washing: A New Era of Regulatory Scrutiny
For years, regulators and investors have grappled with “greenwashing” — the practice of exaggerating or falsely claiming environmental or ESG credentials. As artificial intelligence becomes the latest corporate buzzword, a similar pattern is emerging: “AI washing.” Companies highlight AI-driven products, strategies, or tools in ways that may not accurately reflect reality, creating potential for investor confusion and, crucially, regulatory action.
In the securities context, any claim that may influence investment decisions is subject to strict truth-in-advertising and disclosure obligations. This is where the U.S. Securities and Exchange Commission (SEC) comes in. Its Emerging Technologies Unit has turned its attention to misleading AI narratives, treating them much like overstated ESG claims. The message is clear: when you talk about AI in securities offerings, disclosures, or marketing, you must be able to back it up.
What Is AI Washing?
AI washing refers to overstating, misrepresenting, or opportunistically branding products, strategies, or processes as “AI-powered” or “driven by artificial intelligence” when those claims are inaccurate, incomplete, or materially misleading. While there is no single statutory definition, the concept is emerging as a parallel to greenwashing — but for technology and data-driven claims.
Common Forms of AI Washing
AI washing shows up in several recurring patterns across the financial and corporate landscape:
- Inflated capabilities: Suggesting a model is fully autonomous, predictive, or self-learning when in reality it follows simple, rules-based logic or limited automation.
- Overstated integration: Marketing an entire investment strategy as AI-driven when AI is used in only a narrow, non-core function or in limited pilots.
- Implied regulatory blessings: Hinting that AI tools reduce regulatory risk or ensure compliance in a way that suggests endorsement or approval by regulators.
- Unsubstantiated performance claims: Claiming that AI consistently outperforms traditional methods without robust, reproducible evidence.
- Loose use of AI buzzwords: Rebranding conventional statistical or data tools as “AI” to ride the hype wave, especially in investor-facing materials.
Where these claims intersect with securities offerings, investment products, or public-company disclosures, they can rise to the level of material misrepresentation, triggering securities-law risks.
The SEC’s Emerging Technologies Unit: Why It Matters
The SEC’s Emerging Technologies Unit, a specialized team within the Commission’s enforcement function, focuses on how new and complex technologies affect markets, investors, and the integrity of disclosures. AI is now central to that mandate.
Focus Areas for Enforcement
While specific enforcement actions will evolve, several priority themes are evident when regulators look at AI claims:
- Truthfulness of AI-related disclosures: Are AI systems described accurately in prospectuses, MD&A, risk factors, and investor decks?
- Alignment between internal reality and external messaging: Do marketing claims match what the technology actually does in practice?
- Risk disclosure around AI use: Are risks such as model error, bias, data quality, operational vulnerabilities, and cybersecurity candidly disclosed?
- Controls and governance: Are there controls ensuring that AI-related messaging is vetted by legal, compliance, and technical stakeholders?
In effect, the Emerging Technologies Unit acts as a bridge between traditional securities-enforcement principles and the rapidly evolving world of AI tools, algorithms, and data-intensive business models.
Why AI Washing Is a Securities-Law Problem
AI washing is not just a marketing concern; in the securities world, it is a disclosure and antifraud issue. Under long‑standing securities laws, issuers and registrants must not make materially false or misleading statements, or omit material facts needed to prevent statements from being misleading.
Materiality and Investor Reliance
Regulators will ask whether a reasonable investor would consider AI claims important in making an investment decision. Given current market enthusiasm for AI — and the valuation premiums often associated with AI narratives — the answer is increasingly yes.
- If a fund is sold as “AI-powered” and gathers assets largely on that basis, inflated claims can be material.
- If a public company highlights proprietary AI as a key growth driver, but that capability is rudimentary or experimental, investors may be misled.
- If risk disclosures understate limitations or vulnerabilities of AI systems, the picture presented to investors may be incomplete.
These issues tie directly into antifraud provisions, misstatement liability, and selective disclosure concerns.
Parallels to Greenwashing
AI washing follows a playbook familiar from ESG and sustainability claims:
- Hype-driven misalignment: Marketing language runs far ahead of actual capabilities.
- Opaque methodologies: Complex scoring, modeling, or analytics are marketed without clear explanation.
- Selective storytelling: Positive features are highlighted; limitations and trade-offs are sidelined.
Just as greenwashing enforcement has matured, one can expect a more structured and assertive regulatory stance around AI claims, especially in financial markets.
Where AI Claims Typically Appear in the Securities Context
To manage risk, organizations must first identify where AI narratives surface in their materials. These touchpoints often include:
1. Public Filings and Disclosure Documents
AI features and strategies may appear in:
- Annual and quarterly reports (e.g., in discussions of strategy, competitive advantage, and technology capabilities)
- Offering documents for funds or structured products
- Management’s discussion and analysis (MD&A) sections describing business models and operational efficiencies
- Risk-factor sections that address technology, cyber, and operational risks
2. Investment Product Marketing
Asset managers and broker-dealers have embraced AI themes in:
- Fund factsheets and pitch decks that highlight AI-driven stock selection or risk management
- Websites and digital campaigns describing “smart” or “intelligent” products
- Third-party distributor and platform materials that repeat or amplify AI claims
3. Corporate Communications and Investor Relations
AI narratives may be woven into:
- Earnings calls and investor presentations
- Press releases about new AI tools or partnerships
- Thought‑leadership pieces and white papers aimed at investors and analysts
Each of these channels can fall within the SEC’s line of sight, especially if they influence the market’s perception of value, growth, or risk.
Key Risk Areas in AI-Related Disclosures
Not every mention of AI creates enforcement risk, but several themes are particularly sensitive under the lens of the Emerging Technologies Unit.
Overpromising Performance and Predictive Power
AI is often sold as a way to spot trends, forecast markets, or outperform benchmarks. When firms describe these capabilities, regulators will ask:
- Are performance claims based on rigorous testing, with appropriate back‑testing and validation?
- Are limitations, error rates, and uncertainty clearly conveyed?
- Is past performance presented responsibly, without implying guaranteed future results?
Downplaying Model Risk, Bias, and Data Limitations
AI products can fail in unexpected ways. If messaging presents AI as inherently more objective, accurate, or safe than human judgment, disclosures should also explain:
- Potential for biased outcomes based on training data
- Vulnerability to data quality issues or data drift
- Operational risks, including reliance on third‑party vendors or cloud infrastructure
Mischaracterizing Human Oversight
Investors often care about whether decisions are automated, human‑in‑the‑loop, or subject to independent review. Misleading impressions around the level of human supervision, especially for trading, credit, or risk decisions, can raise questions about governance and control.
Building a Compliance Framework for AI Narratives
To respond to the emerging enforcement landscape, firms should move beyond ad hoc review and formalize how they assess AI-related statements.
Governance: Who Owns AI Claims?
Effective oversight requires clear roles and accountability. Consider:
- Designated owners: Assign responsibility for AI-related disclosures to cross‑functional stakeholders (legal, compliance, technology, product, and investor relations).
- Documentation: Maintain internal memos or reports explaining what the AI actually does, its limitations, and the basis for any performance statements.
- Approval workflows: Ensure that AI‑themed marketing passes through legal and compliance review, particularly where securities are involved.
Substantiation and Evidence
Just as with financial projections, AI claims should be supported by concrete evidence:
- Technical documentation from internal teams or vendors
- Testing and validation reports, including stress testing in different market conditions
- Impact assessments explaining how AI tools change decision‑making and risk
Firms should be prepared to show regulators how they arrived at their AI characterizations, not just rely on high‑level descriptions.
Practical Tip: AI Claim Vetting Checklist
Before approving any AI-related disclosure or marketing piece, verify that:
1. You can describe, in plain language, what the AI actually does.
2. Performance statements are backed by documented testing or analysis.
3. Limitations, assumptions, and risks are disclosed proportionately.
4. References to “AI,” “machine learning,” or “automation” are consistent with internal documentation and vendor contracts.
5. Legal and compliance have reviewed the final language.
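As a purely illustrative sketch, the checklist can be encoded as a structured gate in a review workflow so that nothing is approved with items outstanding. The class and field names below are hypothetical, invented for this example; they do not reflect any SEC requirement or a specific compliance tool.

```python
from dataclasses import dataclass, fields

@dataclass
class AIClaimReview:
    """One vetting record per AI-related disclosure or marketing piece.

    Hypothetical structure mirroring the five-point checklist above;
    the field names are illustrative, not a regulatory standard.
    """
    plain_language_description: bool = False    # (1) what the AI actually does
    performance_evidence_on_file: bool = False  # (2) documented testing/analysis
    limitations_disclosed: bool = False         # (3) risks, assumptions, caveats
    terminology_matches_docs: bool = False      # (4) wording matches internal docs
    legal_compliance_signoff: bool = False      # (5) final language reviewed

    def outstanding_items(self) -> list[str]:
        """Return the checklist items not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def approved(self) -> bool:
        """The claim clears the gate only when every item is checked."""
        return not self.outstanding_items()

review = AIClaimReview(plain_language_description=True,
                       performance_evidence_on_file=True)
print(review.approved())           # False
print(review.outstanding_items())  # the three items still blocking approval
```

In practice such a gate would sit inside whatever approval workflow the firm already runs; the point is simply that each of the five checks becomes an explicit, auditable record rather than an informal judgment call.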
Practical Steps to Avoid AI Washing in Securities Materials
Organizations can structure their approach in concrete, repeatable steps.
- Inventory Existing AI Claims: Collect all references to AI across filings, prospectuses, websites, marketing decks, social media, and investor communications, and map where and how AI is mentioned (a simple scanning sketch follows this list).
- Compare Claims to Reality: For each claim, consult with technical and product teams to determine whether the description matches the actual functionality, scope, and maturity of the AI.
- Rate Materiality: Identify claims most likely to influence investor decisions (e.g., those featured prominently in offering materials or IR presentations) and prioritize them for review.
- Strengthen Risk Disclosures: Update filings and product literature to address AI‑specific risks, including model error, data quality, operational dependencies, and governance.
- Standardize Language: Develop internal guidance on how to describe AI tools accurately (e.g., distinguishing between pilots, limited‑scope automation, and core decision engines).
- Enhance Review Processes: Incorporate AI‑specific questions into marketing and disclosure approval workflows, including sign‑off by a technical owner for accuracy.
- Train Key Teams: Educate marketing, sales, investor relations, and product leads on the regulatory expectations around AI claims and how to spot risky language.
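For the inventory step, even a rough keyword scan can surface where AI language appears before humans review each claim in context. A minimal sketch, assuming plain-text versions of the documents are available; the term list and document labels are hypothetical and would be tuned to the firm’s own vocabulary and channels:

```python
import re
from collections import Counter

# Illustrative flag terms only; a real inventory would reflect the firm's
# actual marketing vocabulary and the channels mapped above.
AI_TERMS = re.compile(
    r"\b(AI|artificial intelligence|machine learning|deep learning|"
    r"neural network|predictive analytics?|self-learning|autonomous)\b",
    re.IGNORECASE,
)

def inventory_ai_claims(documents: dict[str, str]) -> dict[str, Counter]:
    """Map each document label to the AI-related terms it uses, with counts.

    `documents` maps a source label (e.g. '10-K MD&A', 'fund factsheet')
    to its raw text. The output is a starting point for the claim-by-claim
    comparison against actual functionality in the next step.
    """
    return {
        name: Counter(m.group(0).lower() for m in AI_TERMS.finditer(text))
        for name, text in documents.items()
        if AI_TERMS.search(text)
    }

hits = inventory_ai_claims({
    "pitch deck": "Our AI-driven engine uses machine learning to pick stocks.",
    "risk factors": "Reliance on third-party cloud infrastructure...",
})
print(hits)  # {'pitch deck': Counter({'ai': 1, 'machine learning': 1})}
```

A scan like this only locates candidate claims; deciding whether each one matches reality still requires the technical and product input described above.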
Coordinating Legal, Compliance, and Technical Teams
One challenge with AI washing is the gap between those who build systems and those who describe them externally. Bridging this divide is essential for credible disclosures.
Creating a Shared Vocabulary
Technical teams may use nuanced terminology (e.g., supervised vs. unsupervised learning, feature engineering, model drift) that can be lost in translation. Legal and IR teams need a simplified but accurate vocabulary:
- Define internally what counts as “AI” versus rules‑based or deterministic automation (see the taxonomy sketch after this list).
- Agree on how to describe experimental tools, pilots, and proof‑of‑concepts.
- Standardize explanations of limitations and assumptions in plain language.
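One way to make that shared vocabulary concrete is a small internal taxonomy that maps each tool to pre-approved external language. The categories and wording below are hypothetical examples, not a standard; each firm would set its own definitions with technical and legal input.

```python
from enum import Enum

class TechClassification(Enum):
    """Illustrative internal taxonomy for how a tool may be described externally."""
    RULES_BASED = "deterministic logic; should not be described as AI"
    STATISTICAL = "conventional statistics/analytics; 'AI' label discouraged"
    ML_PILOT = "machine learning in a limited pilot; describe as experimental"
    ML_PRODUCTION = "machine learning in a core, monitored production role"

def approved_description(c: TechClassification) -> str:
    """Return the standard external wording for a given classification."""
    wording = {
        TechClassification.RULES_BASED: "rules-based automation",
        TechClassification.STATISTICAL: "quantitative analytics",
        TechClassification.ML_PILOT: "machine learning tools under evaluation",
        TechClassification.ML_PRODUCTION: "machine learning models used in production",
    }
    return wording[c]

print(approved_description(TechClassification.ML_PILOT))
# -> "machine learning tools under evaluation"
```

Pinning external language to an agreed classification makes it harder for a pilot to drift into being marketed as a core decision engine.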
Ongoing Information Flow
AI systems evolve over time. Governance should ensure that:
- Material changes to AI tools (scope, performance, risk profile) are communicated to legal and compliance (a simple trigger sketch follows this list).
- New external partnerships or vendor changes that affect AI capabilities are captured in disclosure reviews.
- Incident reports and model failures feed into updated risk disclosure where material.
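To illustrate how that information flow might be operationalized, here is a minimal sketch of a trigger that flags model changes for disclosure review. The fields and the 5% threshold are arbitrary placeholders for this example, not regulatory figures:

```python
from dataclasses import dataclass

@dataclass
class ModelChange:
    """Record of a change to an AI tool; all fields are illustrative."""
    model_name: str
    scope_changed: bool            # e.g., new asset classes or decision types
    performance_shift_pct: float   # observed change vs. validated baseline
    new_vendor_dependency: bool
    incident_linked: bool          # tied to a model failure or incident report

def requires_disclosure_review(change: ModelChange,
                               performance_threshold: float = 5.0) -> bool:
    """Flag changes legal/compliance should see before the next filing.

    The 5% threshold is a placeholder; a real policy would set its own
    materiality criteria with input from all stakeholders.
    """
    return (change.scope_changed
            or abs(change.performance_shift_pct) >= performance_threshold
            or change.new_vendor_dependency
            or change.incident_linked)
```

The table below contrasts an AI‑washing posture with a compliance‑oriented one across the themes discussed so far: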
| Aspect | AI Washing Approach | Compliance‑Oriented Approach |
|---|---|---|
| Use of Terminology | Broad use of “AI” for any analytics or automation | Careful distinction between AI, rules‑based tools, and basic analytics |
| Performance Claims | Highlight best‑case scenarios; minimal context | Grounded in testing data; clearly framed with caveats and assumptions |
| Risk Disclosure | Focus on benefits; AI framed as inherently superior | Balanced view of benefits and risks, including bias and model failure |
| Governance | Ad hoc approval; limited technical involvement | Cross‑functional review with documented controls and accountability |
| Investor Impact | Short‑term marketing appeal, long‑term enforcement risk | Credible, sustainable narratives aligned with regulatory expectations |
Vendor and Third‑Party AI Tools: Hidden Compliance Traps
Many financial institutions and issuers deploy third‑party AI tools — for trading signals, customer analytics, compliance monitoring, or credit assessment. Relying on vendors does not absolve firms of disclosure obligations.
Due Diligence on Vendors
Before touting vendor‑supplied AI capabilities in offering or marketing materials, firms should:
- Review contracts to understand the scope and limitations of the vendor’s technology.
- Request documentation regarding model performance, data sources, and risk controls.
- Clarify responsibilities for monitoring, updating, and validating models over time.
Accurate Attribution and Description
Communications should be clear about whether AI tools are proprietary or third‑party, and how central they are to the firm’s value proposition. Overstating “in‑house” innovation or implying deeper integration than actually exists can be particularly sensitive in an enforcement investigation.
Preparing for a Proactive Regulatory Environment
As regulators gain experience with AI-focused investigations, expectations will likely become more specific. Firms that get ahead of the curve by tightening AI narratives and controls can reduce enforcement risk and build investor trust.
Anticipating Future Developments
While details will vary, organizations should anticipate:
- More targeted sweeps and inquiries focused on AI themes in securities offerings.
- Guidance, speeches, or risk alerts elaborating on what regulators view as misleading AI claims.
- Cross‑border coordination among regulators as AI and digital‑asset narratives overlap.
Embedding AI‑specific review into core disclosure and product‑development processes is likely to become a baseline expectation, not a differentiator.
Final Thoughts
AI washing is emerging as the technological cousin of greenwashing — persuasive in the short term but increasingly risky in a market where regulators and investors are asking tougher questions. The SEC’s Emerging Technologies Unit is a clear signal that AI narratives in securities offerings and public disclosures will not be treated as harmless marketing puffery when they cross into the realm of material misrepresentation.
Firms that invest now in accurate descriptions, robust evidence, and disciplined governance for AI claims will be better positioned as enforcement and expectations evolve. The path forward is not to stop talking about AI, but to talk about it with precision, balance, and a clear understanding of how securities laws apply to emerging technologies.
Editorial note: This article provides general information on evolving regulatory expectations around AI-related disclosures and should not be taken as legal advice. For more detailed analysis and context, see the original discussion by McMillan LLP at https://mcmillan.ca.