30 April 2026 · 9 min read
The 10-Day Deployment: From Contract to Live AI Employee
What actually happens between signing a SydClaw pilot and the AI employee handling its first real workflow. Day-by-day walkthrough — what the AI Lab Australia team does, what the client team does, and what trips up the deployments that miss the date.
Roman Silantev — Founder, AI Lab Australia
Why ten days, and not three
The honest answer to "how fast can SydClaw go live" is: not as fast as the demo suggests, and slower than a SaaS sign-up, because there is real configuration work that determines whether the system is useful or noise. We landed on ten business days as the right window: long enough to do the work properly, short enough that the firm sees a return inside the first month.
What takes the ten days is not the technology. The infrastructure provisioning is automated and runs in under thirty minutes. What takes the time is understanding the firm's actual workflows in enough depth that the AI's outputs sound like the firm's own voice, route to the right people, and respect the policies that exist on paper but live mostly in the partners' heads. That work cannot be rushed without producing a system that drafts emails the firm wouldn't send.
Day 0 — Pre-kickoff
Once the pilot agreement is signed, we send a discovery pack the day before kickoff. It asks for: the firm's tone-of-voice samples (three to five recent client emails the partners are happy with), the current systems list (Xero / Microsoft 365 / SafetyCulture / HubSpot / etc.), the policies that gate AI behaviour (what requires partner approval, what does not, what the AI must never do), and the nominated launch champion on the firm's side.
The discovery pack is not optional. It is the difference between a deployment that lands on day ten and one that drifts to day twenty. Firms that complete it before kickoff start the meter on day one; firms that do not effectively start on the day they finish it.
Days 1–2 — Infrastructure and integrations
We provision the dedicated cloud instance: a fresh Supabase project in the Sydney region, a fresh Vercel deployment, a fresh set of AES-256 encryption keys held only for this client. The firm receives a custom subdomain (acme.sydclaw.com.au) and admin login credentials. There is no shared multi-tenant database; every firm gets its own.
The firm's launch champion authorises the OAuth integrations from inside SydClaw: Microsoft 365, Gmail, Xero, SafetyCulture, HubSpot, SharePoint, whatever is in scope for the pilot. Each authorisation produces a short live test — a sample read against the real account to confirm the scope is correct and the data flows are wired. Anything that doesn't authenticate cleanly gets resolved this week, not later. The remaining integrations queue for week two if the pilot scope includes them.
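The live test after each authorisation amounts to a smoke check: confirm the granted scopes cover what the pilot needs, then perform one real read against the account. A minimal sketch of that check, with hypothetical function and scope names (not the actual SydClaw API):

```python
def verify_integration(name: str, granted_scopes: set[str],
                       required_scopes: set[str], sample_read) -> bool:
    """Confirm an OAuth grant covers the pilot scope and that data flows."""
    missing = required_scopes - granted_scopes
    if missing:
        print(f"{name}: missing scopes {sorted(missing)}; re-authorise")
        return False
    try:
        record = sample_read()  # one sample read against the real account
    except Exception as exc:
        print(f"{name}: read failed ({exc}); resolve this week, not later")
        return False
    print(f"{name}: OK, sample record {record!r}")
    return True
```

The point of the early read is that a grant can succeed while the scope is still wrong; only a real read proves the data flow is wired.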
Days 3–4 — Policy authoring and tone calibration
This is the highest-value work in the deployment and the work most clients underestimate. We sit down with the launch champion (and ideally one operating partner) for two structured sessions. Session one captures the policies — what the AI is allowed to do unprompted, what requires partner approval, what should never happen. Session two captures the tone — we feed the AI the email samples from the discovery pack and have it draft three response emails in the firm's voice. The partner reviews, corrects, and the calibration loop runs until the drafts are publishable.
Firms that have explicit, written policies for delegation to administrative staff find this stage natural — they translate. Firms that operate on tacit understandings find it harder, and the work spills into week two. That is fine; we'd rather take the time and ship a system the partners actually trust.
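The output of the policy session can be captured declaratively. A sketch assuming a three-tier gating model (act unprompted / partner approval / never), with hypothetical action names; the real policy vocabulary is the firm's own:

```python
from enum import Enum

class Gate(Enum):
    AUTONOMOUS = "autonomous"        # the AI may act unprompted
    PARTNER_APPROVAL = "approval"    # queued for a partner before execution
    NEVER = "never"                  # the AI must not perform this action

# Hypothetical policy table captured in session one.
POLICY = {
    "triage_inbound_email": Gate.AUTONOMOUS,
    "draft_client_email": Gate.PARTNER_APPROVAL,
    "send_debtor_reminder": Gate.PARTNER_APPROVAL,
    "lodge_with_regulator": Gate.NEVER,
}

def gate_for(action: str) -> Gate:
    """Unknown actions default to the most restrictive gate."""
    return POLICY.get(action, Gate.NEVER)
```

Defaulting unknown actions to the most restrictive gate mirrors the stance in the sessions: anything the partners have not explicitly delegated stays with a human.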
Days 5–7 — Module configuration and workflow scripting
Each module that the firm has licensed is configured against the firm's specific systems and policies. The email module learns the firm's classification taxonomy (which kinds of inbound need partner attention, which can be triaged by the AI, which should be escalated immediately). The accounting module is connected to the right Xero file structure (single firm, multi-entity, group consolidation). The compliance module has its register seeded with the firm's existing audit history.
The scripted workflows — the morning briefing at 7am, the BAS pre-fill four weeks before due, the debtor escalation cadence, the weekly compliance evidence assembly — are configured against the firm's calendar and tagged against the firm's matter management structure. Every scheduled action is reviewed before activation. Nothing runs autonomously until the partners explicitly green-light it.
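The scheduled workflows reduce to a small table of cron-style entries, each inert until a partner flips the approval flag. A sketch with hypothetical workflow names and schedules, assuming firm-local (Sydney) time:

```python
from dataclasses import dataclass

@dataclass
class ScheduledWorkflow:
    name: str
    cron: str                 # firm-local time, Sydney
    approved: bool = False    # partners must explicitly green-light

WORKFLOWS = [
    ScheduledWorkflow("morning_briefing", "0 7 * * 1-5"),
    ScheduledWorkflow("bas_prefill_check", "0 9 * * 1"),
    ScheduledWorkflow("debtor_escalation", "0 10 * * 2"),
    ScheduledWorkflow("compliance_evidence", "0 16 * * 5"),
]

def runnable(workflows: list[ScheduledWorkflow]) -> list[str]:
    """Nothing runs autonomously until explicitly approved."""
    return [w.name for w in workflows if w.approved]
```

With every flag defaulting to false, a freshly configured deployment schedules nothing, which is exactly the pre-activation review state the text describes.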
Days 8–9 — Shadow mode
The system is fully configured and ready to operate. For two days, we run it in shadow mode: every action the AI proposes is queued for review by the launch champion, but nothing executes without explicit approval. The shadow period catches the things that look fine in configuration but feel wrong in production: drafts that are technically correct but read as too formal, classifications that are right on the rule but wrong on the spirit, cadences that fit the policy but feel pushy in practice.
We iterate quickly during the shadow period. By day nine, the AI's proposed actions match what the launch champion would have done themselves at least 90% of the time. The remaining 10% surfaces to a human as it should, with the audit trail capturing both the AI's proposal and the human's intervention.
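Mechanically, shadow mode is an approval queue in front of the executor: every proposal is recorded, and the human decision is written to the audit trail alongside it. A minimal sketch with hypothetical names, not the production implementation:

```python
from datetime import datetime, timezone

audit_trail: list[dict] = []

def propose(action: str, payload: dict) -> dict:
    """Queue an AI-proposed action; in shadow mode nothing auto-executes."""
    entry = {
        "action": action,
        "payload": payload,
        "proposed_at": datetime.now(timezone.utc).isoformat(),
        "decision": None,
    }
    audit_trail.append(entry)
    return entry

def review(entry: dict, approve: bool, reviewer: str, note: str = "") -> None:
    """Record the human decision next to the AI's proposal."""
    entry["decision"] = {"approved": approve, "by": reviewer, "note": note}
    if approve:
        pass  # hand off to the executor here in a live system

e = propose("send_email", {"to": "client@example.com"})
review(e, approve=False, reviewer="champion", note="tone too formal")
```

Because proposal and decision sit in the same trail entry, the 90% match rate at the end of the shadow period can be measured directly rather than estimated.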
Day 10 — Go-live
On day ten the firm flips from shadow mode to live operation. The morning briefing fires at 7am. The email triage runs. The scheduled compliance checks execute. The BAS pre-fill begins for the next quarter. Approval-gated actions still surface to a human before execution; everything else runs autonomously inside the policy boundaries.
During the first week of live operation we run elevated monitoring on our side: we watch the audit trail closely, look for classification drift, and intervene if any workflow misbehaves. By the end of week three the system is operating on its own and the firm's involvement drops to a weekly review of metrics and any escalated approvals.
What trips up deployments that miss the date
Three things, in order of frequency. First: the firm's launch champion has a competing priority that surfaces in week one, and the policy and tone sessions get rescheduled. The deployment slips one day for every day the sessions slip; there is no shortcut. Second: an integration the firm assumed was available is gated by a vendor approval process the firm hadn't started (Xero's audit-firm partner program, SafetyCulture's API approval, any aggregator-side OAuth that requires a paperwork submission). We surface these on day one when we can, but we cannot fix them. Third: a policy disagreement between partners that surfaces during the policy session and needs board-level resolution. We pause the session until it is resolved; we do not paper over it with our own assumption.
The deployments that hit day ten are the ones where the launch champion is genuinely available, the integrations are pre-approved, and the partners are aligned on the AI's policy boundaries before kickoff. When all three are true, ten days is generous. When any of them is uncertain, the deployment will land — but on a date the firm's calendar accepts, not the marketing one.
After day ten
The pilot continues for thirty days under the agreement. The firm has a guaranteed rollback window if the system isn't delivering measurable value — we refund the setup and decommission the deployment without question. We've never had it triggered, but we've kept it because firms with nothing to lose evaluate honestly, and honest evaluation is what we want.
After the thirty-day pilot, the firm is on a month-to-month managed-service relationship at $360 per user per month. There is no minimum term beyond the initial setup period. The contract continues as long as the system delivers value; it ends with thirty days' notice when it doesn't. That posture is not generous — it is what gives us the discipline to keep delivering.
About the author
Roman Silantev — Founder, AI Lab Australia. Roman is the founder of AI Lab Australia Pty Ltd, the company that builds and operates SydClaw. He has spent the last decade building enterprise software for Australian professional services firms, and writes regularly on AI compliance and Privacy Act obligations.