29 April 2026 · 8 min read
Privacy Act 2026: What Australian Professional Services Firms Must Do Before December 10
From 10 December 2026, every Australian business making automated decisions about individuals must explain those decisions on request. Here's what changes, what's already in force, and what to fix before the deadline.
Roman Silantev — Founder, AI Lab Australia
What actually changes on 10 December 2026
The Privacy and Other Legislation Amendment Act 2024 passed the Australian Parliament in November 2024. Most of its provisions are already in force, but the substantive change that matters for any firm using AI to make decisions about individuals — automated decision-making (ADM) transparency under new paragraphs 1.7 to 1.9 of Australian Privacy Principle 1 — commences on 10 December 2026.
The core obligation is straightforward: if your business uses an automated process (AI, an algorithm, a rules engine) to make a decision that significantly affects an individual, you must include a meaningful description of that process in your privacy policy, and you must be able to explain — on request — the kinds of personal information the process used, the kinds of decisions it makes, and the impact those decisions have on the individual.
The scope is broad. It covers credit decisions, insurance underwriting, fraud screening, tenancy assessments, employment screening, debt collection escalation, and any AI-drafted recommendation that a person reasonably acts on. It does not require human decision-makers to disclose every internal thought; it requires automated systems to be explainable to a reasonable person.
What "significantly affects" actually means
The Office of the Australian Information Commissioner has not yet released definitive guidance, but the explanatory memorandum to the amending Act and the underlying policy intent suggest a pragmatic threshold. A decision "significantly affects" an individual if it materially changes their access to a service, their financial position, their employment, their housing, or their legal status.
For an Australian accounting firm, this captures: AI-drafted advice that flows to a client, automated debtor escalation that can damage a client relationship, AI-classified expense decisions that affect a client's tax position. For a legal firm: AI-generated risk assessments that the client relies on, conflict-checking outcomes that determine matter acceptance. For a mortgage broker: any AI-generated product recommendation that affects loan approval.
Note what is not captured: routine email triage, calendar scheduling, document categorisation, internal task routing. The threshold is the impact on the individual, not the use of automation per se. A firm using AI to draft an email to a client is not making an automated decision about the client; a firm using AI to decide which clients to fire is.
The audit-trail requirement and why most firms aren't ready
The substantive challenge of the new regime isn't writing a paragraph in a privacy policy — that's the easy part. The hard part is being able to answer the OAIC, or the affected individual, when they ask exactly which inputs led to a specific decision on a specific date.
Most firms using AI today do not have this trail. They use ChatGPT, Claude, or Microsoft Copilot through a generic interface. The model receives a prompt, returns a response, and the conversation history may or may not be retained. There is no record of which model version produced the output, which prompt template was used, what data was injected into the context window, or which human reviewed and approved the result. When the OAIC asks why the firm declined a credit application on 14 March 2027, there is nothing to produce.
This is not a hypothetical risk. The OAIC has explicit enforcement powers, including civil penalties for serious interferences with privacy of up to the greater of $50 million, three times the value of any benefit obtained, or 30 per cent of adjusted turnover for the relevant period. The regulator has signalled that ADM transparency will be a focus area in 2027 enforcement.
What a compliant audit trail looks like
A defensible ADM audit trail captures, for every automated decision: the inputs the system relied on (with PII handled separately), the model and version that produced the output, the prompt template, the timestamp, the human (if any) who reviewed and approved the result, and the action taken. The trail is immutable — meaning the records cannot be amended after the fact — and it is queryable, so a request from an affected individual can be answered in days rather than weeks.
For firms using SydClaw, this trail is generated as a side-effect of using the platform. Every AI action is logged with a SHA-256 hash chain that prevents silent tampering. Every prompt, response, model version, and reviewer is captured against the matter or client record. When a client asks how a decision was made, the export takes one click.
For firms using other tools, the trail must be assembled manually. This is not impossible, but it is operationally expensive — typically 5–10 minutes per AI-assisted decision, multiplied by the volume of decisions a busy firm makes. Most firms will need to either replace their AI tooling with something that produces the trail natively, or accept a documentation burden that grows with usage.
Privacy policy updates: the easier half
Every Australian business with a privacy policy will need to update it before 10 December 2026 to include a description of any ADM systems in use. The OAIC has indicated that generic statements like "we use AI to improve our service" will not satisfy the requirement. The description must be specific enough that a reasonable individual can understand what kinds of decisions are being made by automation, what kinds of information are being used, and what their rights are if they disagree with a decision.
A usable template structure: name the system, name the kinds of decisions it makes, name the kinds of personal information it uses, name the human review process if any, and name the contact path for an affected individual to request an explanation or seek review. Most existing privacy policies will need a new section of approximately 200–500 words.
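As one way to keep those five elements consistent across systems, a firm could generate each policy section from a structured record. The helper below is a hypothetical sketch — the wording, field names, and contact address are illustrative assumptions, and the output is a starting point for legal drafting, not compliant text in itself.

```python
POLICY_TEMPLATE = """\
Automated decision-making: {system_name}

We use {system_name} to make or assist with the following kinds of
decisions: {decision_kinds}. In doing so it uses the following kinds of
personal information: {information_kinds}. {review_sentence} To request
an explanation of a decision that affects you, or to seek review of it,
contact {contact}.
"""

def render_adm_section(system_name, decision_kinds, information_kinds,
                       human_review, contact):
    """Render one privacy-policy ADM section from the five named elements."""
    review_sentence = (
        f"Each output is reviewed and approved by {human_review} "
        "before it takes effect." if human_review
        else "Decisions may take effect without individual human review."
    )
    return POLICY_TEMPLATE.format(
        system_name=system_name,
        decision_kinds=", ".join(decision_kinds),
        information_kinds=", ".join(information_kinds),
        review_sentence=review_sentence,
        contact=contact,
    )
```

Generating the sections from a register also means the policy and the register cannot drift apart: adding a system to one forces it into the other.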
For regulated industries — financial services under AFSL, legal practices under Uniform Law, accounting firms under TASA — the privacy policy update sits alongside the existing professional disclosure documents. The OAIC requirement does not displace any of those; it adds to them.
What to do this quarter
Eight months before the deadline is the right time to start, not the time to panic. The work falls into three buckets, in order: identify, document, fix.
Identify means producing a register of every AI system the firm uses that produces an output a person acts on. This includes the obvious cases (AI-drafted client communications, AI-generated reports) and the less obvious (AI-classified email triage, AI-suggested next-best-actions in a CRM). Most firms find this list is longer than they expected.
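A register of this kind can be as simple as a structured list that records, for each system, whether it crosses the "significantly affects" threshold and what audit trail exists. The sketch below uses hypothetical entries and field names of my own choosing, not a prescribed format; its value is that the compliance gap falls out as a query.

```python
from dataclasses import dataclass

@dataclass
class AdmRegisterEntry:
    system: str                  # tool or vendor name
    decision_kinds: list         # what it decides or recommends
    acts_on_output: str          # who acts on the output
    significantly_affects: bool  # does it meet the APP 1.7 threshold?
    audit_trail: str             # "native", "manual", or "none"

# Hypothetical register for a small firm.
register = [
    AdmRegisterEntry("AI drafting assistant", ["client advice drafts"],
                     "client", True, "manual"),
    AdmRegisterEntry("debtor escalation rules engine", ["escalation timing"],
                     "client", True, "none"),
    AdmRegisterEntry("email triage", ["inbox routing"],
                     "staff", False, "none"),
]

# Systems that need a native trail, or a change of use, before the deadline.
gaps = [e for e in register
        if e.significantly_affects and e.audit_trail != "native"]
```

The "fix" step described below is then precisely the work of emptying this list: either the trail becomes native, or the system stops being used for decisions above the threshold.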
Document means writing the privacy policy update for each system in the register. For systems the firm controls, this is straightforward. For third-party tools the firm uses, the firm needs to confirm what the tool actually does — most vendor disclosures are either too vague or too marketing-flavoured to satisfy the OAIC requirement.
Fix means ensuring that for every system in the register, the firm has either an audit trail that supports the new explainability requirements, or has stopped using the system for decisions that significantly affect individuals. This is where most firms find the gap: they are willing to disclose what AI is doing, but they cannot explain why a specific output was produced on a specific date, because the underlying tool doesn't capture that.
The firms that get ahead of this requirement aren't just minimising regulatory risk — they're building the operational capability to answer harder questions about AI accountability that will keep coming. Privacy Act 2026 is the first regulatory mechanism. It will not be the last.
Disclosure
I'm the founder of the company that builds SydClaw, the platform referenced in the audit-trail section above. The platform was built specifically to address the explainability requirements that the Privacy Act amendments introduce, and it sells primarily into Australian professional services firms. If you're evaluating compliance posture and you'd like a second opinion on how your current AI tooling handles audit trails, I'm happy to take the call regardless of whether SydClaw turns out to be the right fit. Reach me at info@ailabaustralia.com.
About the author
Roman Silantev — Founder, AI Lab Australia. Roman is the founder of AI Lab Australia Pty Ltd, the company that builds and operates SydClaw. He has spent the last decade building enterprise software for Australian professional services firms, and writes regularly on AI compliance and Privacy Act obligations.