Your firm uses AI. That doesn't mean you're ready for it.
by Ivor Padilla
Co-Founder & Engineering Director

We talk to professional services firms every week. Advisory firms, law offices, consultancies. Most of them tell us the same thing within the first five minutes: "We already use AI."
Then we ask what that means. And the answer is almost always one of four things.
The four versions of "we already use AI"
"We use ChatGPT for emails and summaries."
Good. You have discovered the most basic use case. This is roughly equivalent to buying a car and only using the radio. ChatGPT is useful for drafting text. It is not automation. It does not touch your workflows, your documents or your client data. Nothing in your operations has changed. Your team still spends the same hours on the same repetitive tasks. You just write faster emails about those tasks.
"We hired a consultant who gave us a report."
Also fine. But a report is not a result. Did anything change after the consultant left? Can you point to one workflow that runs differently today than it did six months ago? If the answer is a 40-page strategy document sitting in a shared drive, you paid for a PDF, not a transformation.
"Our sector is different. AI doesn't apply the same way here."
This one is partially true and mostly dangerous. Yes, a law firm handles data differently than a marketing agency. Yes, GDPR and the EU AI Act add constraints that a Silicon Valley startup does not face. But the core problem is the same: your team spends hours on repetitive, pattern-based work that follows predictable rules. Contract review. Document classification. Data extraction. Report assembly. Client onboarding. The rules are stricter. The work is still automatable.
"We'll adapt when we need to."
This is the one that should concern you. Not because AI will replace your firm overnight. It won't. But because the firms that start automating now will compound that advantage month over month. Six months in, the firm that automated client onboarding in February is processing three times the volume with the same team. The firm that waited is still doing it by hand and wondering why its competitor keeps winning new clients at a lower cost per client.
You don't react to a shift like this while it is happening. You prepare before it arrives. And for professional services, it has arrived.
What "AI-ready" actually looks like
Being ready has nothing to do with which chatbot your team uses. It has everything to do with whether AI touches your actual operations.
Here is the difference:
Not ready: Your team uses ChatGPT to draft documents, but every document still goes through the same manual review process, the same filing steps, the same handoffs between people.
Ready: An AI agent reviews incoming contracts, extracts key terms, flags deviations from your standard clauses and presents a structured summary to the lawyer. The lawyer reviews the summary, not the full document. The review that took 90 minutes takes 15. The output is more consistent because the agent checks against the same criteria every time. (A sketch of what that first pass looks like closes this section.)
Not ready: You attended a webinar about AI in legal services. You have opinions about it.
Ready: You ran a two-week pilot on one specific workflow. You have before-and-after numbers. Hours saved, error rates, documents processed. Your partners made the next decision based on data, not a presentation.
Not ready: You bought a subscription to an AI tool and added it to the tech stack.
Ready: That tool is connected to your document management system, your practice management software and your client database. It runs as part of the workflow, not alongside it.
The gap between these states is not knowledge. Most firms understand that AI is changing their industry. The gap is execution: picking one workflow, building the automation, measuring what changed.
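To make the division of labor concrete, here is a minimal sketch of that contract-review first pass in Python. Everything in it is a placeholder: STANDARD_CLAUSES stands in for your firm's real clause library, and the plain dictionary lookup stands in for the extraction step a language model would handle in practice.

```python
from dataclasses import dataclass

# Placeholder standard terms. A real agent would compare against your
# firm's actual clause library, and the extracted values below would
# come from a language model reading the contract, not from a dict.
STANDARD_CLAUSES = {
    "liability_cap": "12 months of fees",
    "notice_period": "30 days",
    "governing_law": "Spain",
}

@dataclass
class Deviation:
    clause: str
    found: str
    standard: str

def first_pass(extracted: dict[str, str]) -> list[Deviation]:
    """Flag every clause that differs from the standard version."""
    flags = []
    for clause, standard in STANDARD_CLAUSES.items():
        found = extracted.get(clause, "MISSING")
        if found != standard:
            flags.append(Deviation(clause, found, standard))
    return flags

# Hypothetical values pulled from one incoming contract.
extracted = {"liability_cap": "24 months of fees", "governing_law": "Spain"}

for d in first_pass(extracted):
    print(f"REVIEW: {d.clause}: found '{d.found}', standard is '{d.standard}'")
# The lawyer reads these flagged lines, not the full contract.
```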
Why the gap keeps growing
Six months ago, the distance between firms that use AI and firms that don't was noticeable but manageable. Today, it is harder to close. In six more months, it may be decisive.
This is not because AI is improving at some abstract, theoretical level. It is because the firms that started earlier are stacking improvements. The firm that automated contract review in January has since added client onboarding. The one that built document classification has moved on to report generation. Each automation frees capacity that gets reinvested in the next one.
Meanwhile, the firm that is "planning to look into AI this quarter" is starting from zero. Against a competitor that has been compounding for months.
This is not a technology gap. It is an operations gap. And operations gaps compound.
The judgment problem
There is a reasonable objection here: AI makes mistakes. It hallucinates. It misses context. How can you trust it with client work in a regulated environment?
You can't trust it blindly. That is the point.
The firms getting the most from AI are not the ones that hand everything to a model and walk away. They are the ones that use AI for the pattern-based, high-volume first pass and reserve human judgment for the decisions that matter.
A contract review agent does not replace the lawyer. It replaces the two hours the lawyer spent reading boilerplate to find the three clauses that need attention. The lawyer still makes the judgment call. They just get to the judgment call faster.
A document classification agent does not replace the administrator. It replaces the morning spent sorting files into categories that follow consistent rules. The administrator handles the exceptions.
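That routing pattern fits in a few lines. Here is a minimal sketch, assuming a classifier that returns a label and a confidence score; the classify stub and the 0.90 threshold are illustrative, not prescriptive.

```python
# Route by confidence: the agent files the routine cases, the human
# handles the exceptions. classify() is a stub standing in for
# whatever model you pilot; the threshold is tuned against your own
# error data, not a universal constant.
CONFIDENCE_THRESHOLD = 0.90

def classify(document: str) -> tuple[str, float]:
    """Stub: a real pilot would call a classification model here."""
    if "invoice" in document.lower():
        return "invoices", 0.97
    return "unknown", 0.40

def route(document: str) -> str:
    label, confidence = classify(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-filed under '{label}'"  # the consistent-rules case
    return "queued for human review"          # the exception

for doc in ["Invoice 2024-117 from Acme S.L.", "Handwritten note, unclear scope"]:
    print(doc, "->", route(doc))
```

Where that threshold sits is itself a judgment call: lower it and the agent handles more volume but misses more; raise it and the human sees more of the routine work.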
The skill that matters now is not "using AI." Everyone can use AI. The skill that matters is knowing where AI output is reliable enough to act on and where it needs a human check. That is professional judgment applied to a new tool. And it is exactly what your clients are paying you for.
One workflow. Two weeks. Real numbers.
If you have read this far and recognize your firm in one of the four categories above, here is what to do about it.
Do not restructure your firm. Do not hire an AI team. Do not sign an enterprise contract with a platform vendor.
Pick one workflow. The most repetitive, highest-volume process your team runs. The one where people do the same steps on different documents, week after week.
Map it. How long does it take? How many steps? Where do errors happen? What does "done correctly" look like?
Build a pilot. Run it for two weeks against real work. Measure the results: time saved, error rate, volume processed.
Then decide. If the numbers justify it, expand. If they don't, you spent two weeks and learned something concrete. Either way, you made a decision based on evidence, not a slide deck.
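The decision itself takes very little arithmetic. Here is a sketch with invented example numbers; substitute what you actually measured during the two weeks.

```python
# Pilot scorecard. Every number here is an invented example;
# replace them with your own before-and-after measurements.
docs_processed = 120
minutes_before = 90   # average per document, measured before the pilot
minutes_after = 15    # average per document, during the pilot
errors_before = 9     # in a comparable pre-pilot sample of 120 documents
errors_after = 4      # agent outputs the human check had to correct

hours_saved = docs_processed * (minutes_before - minutes_after) / 60
print(f"Hours saved over the pilot: {hours_saved:.0f}")
print(f"Error rate before: {errors_before / docs_processed:.1%}")
print(f"Error rate after:  {errors_after / docs_processed:.1%}")
# If numbers like these hold up, expand. If they don't, stop.
```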
That is what ready looks like. Not a subscription. Not an opinion. A measured result on a real workflow.