What Your Auditor Will Ask About AI in 2026
Compliance · 10 min read · By BerTech

SEC AI guidance, Colorado AI Act (June 2026), and EU AI Act ripple effects are accelerating. Here are the exact questions regulators are asking.

For the past three years, the standard advice on AI regulation was: watch the space, get ready, but nothing binding is here yet. That advice is now wrong. The regulatory calendar for AI in professional services has arrived. The SEC is actively examining AI use in its inspection process. Colorado's AI Act takes effect June 1, 2026. The EU AI Act is already in force for high-risk systems. And professional liability frameworks in law, accounting, and real estate are generating their first AI-related enforcement actions.

This article is not about regulatory theory. It is about the specific questions you will be asked — by auditors, examiners, and opposing counsel — and whether you can answer them. Firms that have built governance programs are prepared. Firms that have not are about to find out what that means.

The SEC: AI Is Now an Examination Priority

The SEC's Division of Examinations included AI as a priority in its most recent examination priorities letter. For registered investment advisers and broker-dealers, this means AI is on the examination checklist in the same way cybersecurity and AML controls have been for years. Examiners are looking at three areas:

Marketing and Investment Advice

The SEC's marketing rule and its guidance on the use of AI in investment recommendations make clear that AI-generated content used in client communications or investment advice must meet the same substantiation and disclosure standards as human-generated content. If an AI tool generated or materially assisted in drafting a client proposal, a model output, or an investment recommendation, that fact may need to be disclosed. More importantly, the adviser must be able to demonstrate that the output was reviewed, understood, and affirmed by a qualified human before delivery.

Conflicts of Interest

Predictive analytics and AI tools that optimize recommendations can introduce undisclosed conflicts of interest — for example, if a tool systematically favors certain products because of how it was trained. The SEC has issued specific guidance that advisers using such tools must analyze them for potential conflicts and disclose any that exist. Examiners will ask whether you have conducted that analysis.

Supervision and Recordkeeping

If AI tools are being used by your supervised persons — registered reps, investment adviser representatives — you are expected to have supervisory procedures that cover those tools. Examiners will ask what AI tools your personnel use, what your written supervisory procedures say about them, and what training you have provided. If the answer to any of these is 'we don't know' or 'nothing,' that is an examination finding.

Colorado AI Act: What It Actually Requires (Effective June 1, 2026)

Colorado SB 205 is the most comprehensive state AI law in the United States and it takes effect in less than four months. If your firm operates in Colorado or makes consequential decisions affecting Colorado residents, you need to understand what it requires.

The law applies to 'deployers' of 'high-risk AI systems' — systems that make or materially influence consequential decisions. Consequential decisions include those affecting:

  • Education enrollment or completion
  • Employment or employment opportunities
  • Financial or lending services
  • Essential government services
  • Healthcare services
  • Housing
  • Insurance
  • Legal services

If your firm uses AI to assist in underwriting decisions, credit assessments, employment screening, lease applications, legal work product, or insurance recommendations, you are likely within scope.

What Deployers Must Do

  • Implement a risk management policy for high-risk AI systems, using a recognized framework such as NIST AI RMF or ISO 42001
  • Conduct and document impact assessments before deploying or materially modifying a high-risk AI system
  • Notify consumers when a consequential decision is made using a high-risk AI system
  • Provide consumers with the opportunity to appeal an adverse decision and have a human review it
  • Disclose to consumers the types of data used and the purpose of the AI system
  • Monitor deployed systems for algorithmic discrimination

The Colorado AI Act does not require that you stop using AI. It requires that you know what AI you are using, that you have assessed its risks, and that you can demonstrate both to a regulator. Firms that have done the governance work will find compliance straightforward. Firms that have not will face both implementation costs and exposure for past deployments.

EU AI Act: Ripple Effects for US Professional Services Firms

The EU AI Act is already in force. The compliance timelines are staggered — prohibited AI practices were banned in February 2025, high-risk system requirements apply from August 2026 — but the extraterritorial scope is broad. The Act applies to any AI system placed on the EU market or put into service in the EU, regardless of where the provider or deployer is established.

For US professional services firms, the ripple effects come through four channels:

  • European clients: If you serve European clients and use AI in delivering services to them, the AI Act may apply to your deployments — particularly if the AI system influences consequential decisions affecting those clients.
  • European offices or personnel: If your firm has European offices, those operations are directly subject to EU AI Act requirements.
  • Vendor contracts: Your technology vendors that supply AI systems are updating their terms to include EU AI Act compliance representations. You may be asked to make corresponding representations about your own deployments.
  • Enterprise client requirements: Large enterprise clients with EU operations are increasingly requiring their professional services providers to demonstrate AI Act compliance as a condition of contract.

The Exact Questions You Should Expect

Based on current examination trends, regulatory guidance, and the text of the applicable laws, here are the questions you should be prepared to answer:

From SEC Examiners

  • What AI tools do your supervised persons use in the course of their duties?
  • What are your written supervisory procedures for AI tool use?
  • Has your firm analyzed its AI tools for potential conflicts of interest? What did that analysis find, and what disclosures have you made?
  • How do you ensure that AI-generated content used in client communications meets your marketing rule obligations?
  • What training have you provided to supervised persons on the use of AI tools?
  • How do you supervise and review AI-assisted work product before it is delivered to clients?

From Colorado AI Act Auditors

  • Which AI systems deployed by your firm qualify as high-risk under SB 205?
  • Provide your impact assessments for each high-risk AI system.
  • Describe your risk management policy for high-risk AI systems and identify the framework you used.
  • How do you notify consumers when a consequential decision is made using a high-risk AI system?
  • Describe the process by which a consumer can appeal an adverse AI-assisted decision and request human review.
  • How do you monitor deployed AI systems for evidence of algorithmic discrimination?

From Opposing Counsel in Litigation

  • Identify all AI tools used in the preparation of this work product.
  • Produce all prompts submitted to AI tools in connection with this matter.
  • What data was provided to AI tools in connection with this engagement?
  • Was the AI output reviewed by a qualified human before delivery? By whom, and how?
  • What are your firm's policies governing AI use in client engagements?

How to Prepare Before the First Inquiry Arrives

The firms that answer these questions well share a common characteristic: they did not build their governance program in response to a regulatory inquiry. They built it before one arrived. Here is what that preparation looks like in practice:

  • Maintain an AI inventory. Know what tools are in use, by whom, for what purposes, and on what data. This is the foundation every other governance requirement depends on.
  • Write your supervisory procedures now. For SEC-regulated firms, this is not optional — it is a compliance requirement. Written supervisory procedures for AI tool use should describe what tools are approved, what oversight is required, and how AI-assisted work product is reviewed before client delivery.
  • Conduct and document impact assessments. For any AI system that influences consequential decisions, document the assessment before deployment. A template-based assessment completed now is far better than a retroactive reconstruction during an examination.
  • Build your audit trail. Maintain records of AI tool use at the engagement level. When a matter involves AI-assisted work product, that fact and the review performed should be in the file.
  • Train your people. Every person who uses an AI tool in client work should understand the firm's policies, the data handling rules, and their individual obligations. Training records should be kept.
  • Get your vendor agreements in order. Ensure that the AI tools your firm uses have data processing agreements, enterprise-tier data handling terms, or other contractual protections appropriate to the sensitivity of the data being processed.
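The inventory and audit-trail steps above can be sketched as simple structured records. This is one illustrative shape, not a prescribed schema; every name and field here is an assumption about what a firm might track:

```python
from dataclasses import dataclass

@dataclass
class AIToolEntry:
    """One row of the firm-wide AI inventory: what is in use, by whom, on what data."""
    tool: str                # e.g. "contract-review assistant"
    users: list[str]         # roles or teams approved to use it
    purpose: str             # approved use case
    data_classes: list[str]  # e.g. ["client PII", "deal terms"]
    dpa_in_place: bool       # data processing agreement or equivalent terms signed?

@dataclass
class EngagementAIRecord:
    """Engagement-level audit-trail entry for AI-assisted work product."""
    matter_id: str
    tool: str
    work_product: str        # what the tool assisted with
    reviewed_by: str         # qualified human reviewer
    review_date: str         # ISO date of review

def inventory_gaps(inventory: list[AIToolEntry]) -> list[str]:
    """Tools in use that lack contractual data protections."""
    return [entry.tool for entry in inventory if not entry.dpa_in_place]
```

Even a spreadsheet with these columns answers the first examiner question ("what AI tools do your personnel use?") and the first discovery request ("identify all AI tools used in preparing this work product") without a scramble.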

The regulatory environment for AI in professional services is no longer a future concern. It is present. The firms that have built governance programs are positioned to demonstrate compliance and move forward. The firms that have not are facing both the cost of building the program and the exposure from the period before it existed.

The question is not whether you will be asked these questions. The question is whether you will be able to answer them.


Ready to get governance in place?

Take the free AI Governance Risk Score to understand your firm's current exposure, or talk to BerTech about building a governance program.