30 April 2026 · 7 min read

Why Australian Firms Shouldn't Use ChatGPT for Client Work — and What to Use Instead

ChatGPT is the AI tool every team has a tab open to. For Australian professional services firms with regulatory obligations, using it on client data exposes the firm in ways most partners haven't yet absorbed. Here's the specific exposure, and what compliant looks like.

Roman Silantev · Founder, AI Lab Australia

The browser tab that ate your compliance posture

Walk through any Australian professional services firm's office today and you will find ChatGPT open in someone's browser. A junior accountant drafting a client email. A paralegal summarising a contract. A mortgage broker preparing a Statement of Credit Assistance. A property manager triaging tenant complaints. The tool is good; the work it produces is fast; the partners are mostly unaware of how much of it is happening.

Nothing about that scene is operationally wrong. What is wrong is the regulatory exposure the firm is accumulating, quietly, with every browser tab. Each interaction sends client data to an overseas model provider in plaintext, leaves no record the firm controls, and generates outputs the firm cannot explain to a regulator if asked. The exposure compounds: every junior using the tool every day is creating audit-trail liability that will surface the moment a regulator asks for an explanation of an automated decision and the firm cannot produce one. At that point the OAIC, ASIC, the TPB, the relevant Commission, or the firm's aggregator decides the firm is non-compliant.

What ChatGPT does well

Before getting to the criticisms, the credit. ChatGPT is the most capable conversational AI assistant available at consumer pricing. The underlying models are state of the art. The user experience is excellent. For an individual user working on their own data — drafting a personal email, summarising a research paper, brainstorming an internal memo — ChatGPT delivers value that earlier generations of productivity tools could not. Anthropic's Claude offers similar capability with a slightly different stylistic profile, and Microsoft Copilot does the same inside the Microsoft 365 surface. The technology is good; the professionals using it are usually getting better outcomes faster than they would otherwise.

The issue is not the AI's capability. The issue is the regulatory frame the firm operates in, and the gap between what the consumer-grade product captures and what the regulator expects.

Where the exposure actually sits

Three categories. First: data residency and personal information. Most Australian firms operate under the Privacy Act 1988 (as amended) and have data-handling commitments to clients that specify Australian or jurisdictionally controlled storage. ChatGPT's default data-handling sends prompts to OpenAI's processing infrastructure, which is not Australian. ChatGPT Enterprise has stronger commitments, but most firms running ChatGPT through individual paid subscriptions have not contractually agreed to enterprise terms. The data flow is technically out of compliance with most firms' published privacy notices.

Second: audit trail. The Privacy Act's automated decision-making (ADM) transparency provisions, introduced by the 2024 amendments and commencing 10 December 2026, require an explanation of any automated decision that significantly affects an individual. ChatGPT Enterprise's audit logs capture interaction metadata, but the link between a specific interaction and a specific client matter, together with the chain of inputs, prompt template, model version, and reviewer the regulator wants to see, is something the firm has to maintain itself. Most firms don't.

Third: vertical compliance. AFSL Best Interests Duty, NDIS Practice Standards, Aged Care Quality Standards, TASA 2009 — all impose obligations that go beyond the generic privacy frame. A consumer AI tool, however good, does not maintain the vertical-specific evidence those obligations demand.

What compliant looks like

Compliant doesn't mean abandoning AI. It means routing AI through infrastructure the firm controls, with the audit trail the firm needs, and the data-handling the firm's clients are entitled to expect.

The core controls: client data is tokenised before any external model call, so the model provider never receives raw personal information; the inputs, the model version, the prompt template, and the reviewer are captured against the matter; the output passes through a screening layer that catches unlicensed-advice patterns, regulator-defined PII leakage, and other category-specific risks before delivery; the audit log is immutable and exportable in a format the relevant regulator accepts.
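The shape of those controls can be sketched in a few lines. This is a minimal illustration, not SydClaw's implementation: the `tokenise` patterns, the `TKN_` token format, and the `audit_record` fields are all hypothetical, chosen only to show tokenise-before-call and log-against-the-matter.

```python
import hashlib
import json
import re
from datetime import datetime, timezone

# Hypothetical in-memory token vault mapping tokens back to raw values.
# In a real deployment this lives inside infrastructure the firm controls.
_vault = {}

def tokenise(text):
    """Replace email addresses and phone-like digit runs with opaque tokens
    so raw personal information never leaves the firm's boundary.
    Illustrative patterns only, not production-grade PII detection."""
    def _swap(match):
        raw = match.group(0)
        token = "TKN_" + hashlib.sha256(raw.encode()).hexdigest()[:12]
        _vault[token] = raw
        return token
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", _swap, text)   # email addresses
    text = re.sub(r"\b\d(?:[ -]?\d){7,11}\b", _swap, text)   # 8-12 digit runs
    return text

def audit_record(matter_id, prompt_template, model_version,
                 tokenised_prompt, output, reviewer):
    """One exportable log line per model call, keyed to the client matter.
    Hashes stand in for full payloads; an immutable store appends this."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "prompt_template": prompt_template,
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(tokenised_prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "reviewer": reviewer,
    }
    return json.dumps(record, sort_keys=True)
```

In this sketch only the tokenised text is ever sent to the model provider; detokenisation happens on the way back, inside the firm's boundary, and each call leaves one matter-keyed line for the regulator.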

This is what SydClaw provides as a managed service. It is not unique to SydClaw — any platform that combines tokenisation, dedicated audit infrastructure, vertical compliance modules, and Australian residency can satisfy the same controls. The point is that consumer ChatGPT does not, and patching it with manual workflows is operationally unsustainable.

What to do if your firm is using ChatGPT today

First, audit. Produce a written register of every AI tool actually in use, every workflow that touches client data, and every staff member using each tool. The register will be longer than expected. Second, classify the workflows by regulatory exposure: drafting an internal memo is low-risk, drafting a Statement of Credit Assistance is high-risk. Third, for the high-risk workflows, either move them to a tool that produces the audit trail natively, or document them manually. The manual path is legitimate but operationally painful at any meaningful scale.
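The audit-then-classify steps above can be as simple as a structured register. A sketch, with invented workflow names and a toy risk rule rather than any regulator's actual taxonomy:

```python
from dataclasses import dataclass, field

# Hypothetical high-risk workflow names; a real register would use the
# firm's own matter and document taxonomy.
HIGH_RISK_WORKFLOWS = {
    "statement_of_credit_assistance",
    "financial_advice_draft",
    "tenant_decision_notice",
}

@dataclass
class RegisterEntry:
    tool: str                  # e.g. "ChatGPT (personal subscription)"
    workflow: str
    touches_client_data: bool
    users: list = field(default_factory=list)

    def exposure(self):
        """Classify by regulatory exposure: client data plus a regulated
        output is high; client data alone is medium; neither is low."""
        if not self.touches_client_data:
            return "low"
        return "high" if self.workflow in HIGH_RISK_WORKFLOWS else "medium"

register = [
    RegisterEntry("ChatGPT", "internal_memo", False, ["j.lee"]),
    RegisterEntry("ChatGPT", "contract_summary", True, ["p.tran"]),
    RegisterEntry("ChatGPT", "statement_of_credit_assistance", True, ["a.ng"]),
]
high_risk = [e for e in register if e.exposure() == "high"]
```

The high-risk entries are the ones to move first; the medium tier can stay on interim manual documentation while the migration runs.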

Don't try to ban ChatGPT outright. The firms that try this find that the tool is back in use within a week through personal devices and unsanctioned accounts; the only thing the ban achieves is removing the firm's visibility into what's happening. Better to provide a compliant alternative for client work, leave ChatGPT in place for non-client uses, and govern by where the data goes rather than which tool is open.

Disclosure: I'm the founder of the company that builds SydClaw, the AI workforce platform purpose-built for Australian professional services firms. We sell against ChatGPT directly. The audit-the-current-AI-usage exercise above is something we run with prospects in our discovery sessions, and we share the resulting register regardless of whether the firm ends up using SydClaw. If you'd like one of those sessions with no obligation, reach me at info@ailabaustralia.com.

About the author

Roman Silantev · Founder, AI Lab Australia. Roman is the founder of AI Lab Australia Pty Ltd, the company that builds and operates SydClaw. He has spent the last decade building enterprise software for Australian professional services firms, and writes regularly on AI compliance and Privacy Act obligations.