
30 April 2026 · 8 min read

AFSL and AI: How Mortgage Brokers Stay on the Right Side of Best Interests Duty

ASIC has signalled increased focus on AI-assisted advice in 2026. For AFSL-licensed mortgage brokers and financial planners, the question isn't whether AI helps — it's whether the audit trail survives a Best Interests Duty review. Here's what compliant looks like.

Roman Silantev, Founder, AI Lab Australia

Why this matters in 2026

ASIC's Regulatory Guide 273 has governed Best Interests Duty for mortgage brokers since the obligation became operational in 2021, and the regulator has steadily sharpened its expectations since. The 2024 enforcement focus was on documentation quality — could a broker actually demonstrate, on file, that they considered the alternatives? The 2026 focus is shifting to AI-assisted advice: when the broker uses an AI tool to help generate a Statement of Credit Assistance or to prepare a fact-find narrative, can the broker explain what the AI contributed, what it didn't, and why the human's recommendation overrides any AI suggestion that disagreed?

The answer most brokers can give right now is unsatisfying. They use ChatGPT or a generic AI assistant through a browser tab; they paste in client information, edit the output, and copy it into the aggregator platform. There is no record of the prompt, the model version, the data that went in, or the AI's full response. If ASIC asks how the recommendation in a specific file was reached, the answer is partial reconstruction at best.

What ASIC actually requires under BID

RG 273 doesn't prohibit AI assistance. It requires that the broker can demonstrate the recommendation was in the client's best interests at the moment it was made, having regard to the alternatives the broker could reasonably have considered given the client's stated objectives and the available product set.

For an AI-assisted file, this translates into four pieces of evidence the broker needs to produce on demand: the inputs the AI received (the fact-find data, the client's stated objectives, the available product set), the AI's response (the suggested products, the rationale, any flagged risks), the broker's review (what the broker accepted, what the broker overrode, the reason), and the final recommendation as it went to the client. The chain has to be unbroken, time-stamped, and tied to the specific matter.

Most generic AI tools provide one or two of those pieces. None provide all four in a form that survives an enforcement review.
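As a sketch of what "all four pieces in one record" could look like, here is a minimal data structure. The field names and shape are illustrative assumptions, not ASIC's requirements or any vendor's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAssistRecord:
    """One AI-assisted action on a matter, holding all four evidence pieces.
    Hypothetical illustration only; field names are assumptions."""
    matter_id: str              # ties the record to the specific matter
    inputs: dict                # fact-find data, objectives, product set
    ai_response: str            # the AI's full response, verbatim
    broker_review: dict         # accepted/overridden items with reasons
    final_recommendation: str   # as delivered to the client
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_complete(self) -> bool:
        """True only if all four evidence pieces are present."""
        return all([self.inputs, self.ai_response,
                    self.broker_review, self.final_recommendation])
```

A file that can produce a populated record like this for every AI-assisted action answers the "on demand" requirement; a file that cannot is back to partial reconstruction.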

What a compliant audit trail looks like

The trail starts at the moment the broker initiates an AI-assisted action — typically inside the CRM or the aggregator-side fact-find. The system captures the inputs verbatim, including the client identifiers (tokenised, but resolvable inside the broker's session), the stated objectives, and the lender pool considered. The prompt template that goes to the AI is logged with its version. The AI's full response is logged, not just the parts the broker accepted. The broker's review is captured as a structured record — what was accepted, what was overridden, the rationale for any override.

When the Statement of Credit Assistance is generated, the AI's role is documented as a working paper attached to the matter. The broker's signature on the Statement is the legal commitment; the AI's contribution is evidence that helped get there. ASIC's review of the file produces a coherent answer: this is what the AI considered, this is what the broker decided, here is why the recommendation went to the client.
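One common way to make a trail of this kind demonstrably unbroken and time-ordered is to hash-chain the entries, so that any later edit to an earlier entry breaks every subsequent hash. This is a sketch of the general technique, not a description of how any particular platform implements it:

```python
import hashlib
import json

def append_entry(trail: list, entry: dict) -> list:
    """Append an entry whose hash covers the previous entry's hash,
    so the chain breaks if any earlier entry is altered."""
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    payload = json.dumps(entry, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    trail.append({"entry": entry, "prev_hash": prev_hash, "hash": entry_hash})
    return trail

def verify_trail(trail: list) -> bool:
    """Recompute every hash; False if any entry was edited after the fact."""
    prev_hash = "genesis"
    for item in trail:
        payload = json.dumps(item["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if item["prev_hash"] != prev_hash or item["hash"] != expected:
            return False
        prev_hash = item["hash"]
    return True
```

The point of the design is that the trail proves its own integrity: a reviewer can verify that the inputs, the AI response, and the broker's review were logged in that order and not retouched.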

The output rail that catches unlicensed advice patterns

AFSL holders have one obligation that AI tools rarely respect: the licensee is responsible for every word that leaves the firm in the firm's name. If an AI tool drafts text that constitutes financial product advice without proper disclosure — or worse, that crosses into personal advice when only general advice was intended — the licensee is on the hook whether or not a human reviewed it carefully.

SydClaw's output rail screens every AI response before delivery against twelve categories of unlicensed-advice language: directional product recommendations without the licensee disclosure, return projections without the basis-of-projection note, comparisons that imply suitability assessment, and so on. Flagged content is blocked or routed for licensed-professional review before it reaches the client. The rail is configurable per AFSL holder — different licensees have different scopes and different disclosure obligations — but the default is calibrated to the conservative end of ASIC's published expectations.
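As a minimal sketch of how a screening rail of this kind could work: a set of named pattern categories, each response checked against all of them, anything flagged routed for review. The two patterns below are invented stand-ins for illustration, not SydClaw's actual categories or ASIC's published wording:

```python
import re

# Illustrative categories only; real rails use far richer detection than regex.
RULES = {
    "directional_recommendation": re.compile(
        r"\byou should (?:choose|switch to|refinance)\b", re.IGNORECASE),
    "return_projection": re.compile(
        r"\b(?:will|guaranteed to) (?:return|save you)\b", re.IGNORECASE),
}

def screen_output(text: str) -> dict:
    """Screen an AI response; a non-empty category list means the content
    is blocked pending licensed-professional review."""
    flagged = [name for name, pattern in RULES.items() if pattern.search(text)]
    return {"blocked": bool(flagged), "categories": flagged}
```

Configurability per licensee, as described above, amounts to swapping in a rule set matched to that licensee's scope and disclosure obligations.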

This is not a substitute for the broker's own review. It is a backstop. A broker who is paying attention will catch the issues the rail would have flagged. The rail catches the issues the broker missed because it was a Friday afternoon and the third loan that day had the same kind of pattern as the other two.

What ASIC reviewers actually look at

When ASIC selects a broker file for review, the review process is structured around the BID requirements: did you understand the client's circumstances; did you consider the alternatives; did you recommend the right product; did you communicate the recommendation clearly; and did you keep the file in compliance with the record-keeping obligations? For an AI-assisted file, two additional questions surface: what did the AI contribute, and did the broker exercise independent judgement on the AI's contribution?

The answers don't need to be lengthy. They need to be evidenced. A file that produces a clear narrative — "the AI was used to draft the BID rationale; the broker reviewed and amended sections X, Y, Z; the final recommendation reflects the broker's independent assessment" — passes. A file that says "ChatGPT helped" without substantiation fails, regardless of whether the underlying recommendation was correct.

This is also the practical impact of the Privacy Act's automated decision-making (ADM) transparency provisions arriving in December 2026. The transparency requirement for individuals affected by automated decisions overlaps with ASIC's expectations for AFSL holders using AI in advice. A firm that is ready for one is largely ready for the other.

What to do this quarter if you're an AFSL holder

Three steps, in order. First: audit your current AI usage. Ask every member of the firm what AI tools they actually use — including the unsanctioned ones — and produce a written register of every AI-assisted workflow that touches a client file. Most firms find the list is longer than they expected.

Second: classify each workflow against BID exposure. AI used to draft an internal memo is low-exposure; AI used to draft client-facing advice or a Statement of Credit Assistance is high-exposure. The high-exposure workflows need an audit trail that can survive ASIC review. The low-exposure ones can run on lighter governance.

Third: for the high-exposure workflows, either replace the tooling with one that produces the trail natively, or accept the manual documentation burden of capturing the prompt, the response, the review, and the rationale for every assisted file. Both paths are legitimate. The firms that go through this exercise once tend to choose the first path; the manual burden is unsustainable at any meaningful scale.
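The classification step above can be sketched as a simple rule applied to the register from step one. The workflow attributes used here are assumptions for illustration, not a regulatory standard:

```python
def classify_exposure(workflow: dict) -> str:
    """Classify a registered AI workflow's BID exposure: high if its output
    is client-facing or feeds a Statement of Credit Assistance, else low.
    Attribute names are illustrative assumptions."""
    if workflow.get("client_facing") or workflow.get("feeds_soca"):
        return "high"
    return "low"

# A register from step one, annotated with step two's classification.
register = [
    {"name": "internal memo drafting", "client_facing": False, "feeds_soca": False},
    {"name": "BID rationale drafting", "client_facing": True, "feeds_soca": True},
]
for wf in register:
    wf["exposure"] = classify_exposure(wf)
```

The value of writing the rule down, even this crudely, is that the classification becomes a documented decision rather than an ad hoc judgement repeated per file.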

Disclosure: I'm the founder of the company that builds SydClaw, the platform referenced in the audit-trail section. We sell into AFSL-licensed firms among others. If you'd like to discuss your specific BID exposure with no obligation to consider our product, reach me at info@ailabaustralia.com.

About the author

Roman Silantev, Founder, AI Lab Australia. Roman is the founder of AI Lab Australia Pty Ltd, the company that builds and operates SydClaw. He has spent the last decade building enterprise software for Australian professional services firms, and writes regularly on AI compliance and Privacy Act obligations.