GDPR and AI Automation: What Your Firm Should Actually Do
by Ivor Padilla
Co-Founder & Engineering Director

If your firm processes contracts, invoices, payroll, notarial files, or any document with clients' personal data — and in 2026 practically every firm does — the GDPR is the regulatory framework you play under every time someone says the words automation or artificial intelligence. And it's also the reason many firm partners have said "no way" to AI over the past two years.
What we're going to look at here is the other side of that reluctance: the GDPR does not forbid using AI. It requires you to use it in a specific way. Knowing where that line sits is the difference between automating 60% of your firm's repetitive work and being paralysed by fear of a penalty from Spain's Data Protection Authority (AEPD).
TL;DR: The GDPR allows you to automate your firm's processes with AI provided that (1) there is a clear legal basis, (2) you process only the minimum necessary data, (3) you inform the client transparently, (4) you implement adequate technical measures, and (5) you sign a Data Processing Agreement with your provider. Penalties for serious non-compliance can reach €20 million or 4% of annual global turnover (art. 83 GDPR).
The real dilemma for firm partners
If you've been in the profession for years, you know the conversation: a client asks whether you can prepare their deeds, payroll, or tax filings faster. You know that half your team's time goes into sorting papers, extracting data, and copying it into the firm's ERP. Someone says "with AI this could be done in a fraction of the time." And that's where the block starts.
The internal conversation usually goes: "yes, but client data is confidential. I'm not going to feed a client's payroll into ChatGPT. Or a divorce file. Or a property sale deed with bank details." And you're right to be cautious. But the usual conclusion — "so we automate nothing" — isn't the only way out.
GDPR doesn't distinguish between "automation with AI" and "automation without AI". What it regulates is how you process personal data, not what technology you use to process it. An unencrypted Excel on a lost USB stick violates GDPR. An AI hosted on EU servers with a signed DPA does not.
What GDPR says about AI (in plain terms)
Regulation (EU) 2016/679 — the official GDPR text — was adopted in 2016 and has applied since 25 May 2018. When it was drafted, generative AI wasn't yet a mainstream topic, but the articles are written to be technology-neutral: they apply equally to spreadsheets, CRMs, classical statistical models, and AI systems.
In Spain, the framework is completed by Organic Law 3/2018 on Personal Data Protection and Guarantee of Digital Rights (LOPD-GDD), which develops and refines GDPR for Spanish jurisdiction.
More recently, the EU Artificial Intelligence Regulation (Regulation (EU) 2024/1689, known as the AI Act) adds a layer of obligations specific to AI systems, classifying them by risk category.
The AI Act applies in progressive phases: the prohibitions on unacceptable AI practices, the obligations for general-purpose AI models, and the obligations for high-risk systems each become applicable at different times, per the schedule in article 113 of the Regulation. How the Act classifies the typical automation systems used in law firms (document classification, data extraction, draft preparation) depends on the specific use case: most fall outside the high-risk list of Annex III, but it's worth verifying case by case when the system touches special categories of data (art. 9 GDPR) or decisions that significantly affect the data subject. For the exact phase calendar, the authoritative reference is the Regulation text itself on EUR-Lex.
But the foundation is still GDPR. Without GDPR compliance, the AI Act won't save you; and complying with the AI Act doesn't discharge your GDPR obligations.
The 5 principles you have to comply with, period
If you have to remember five things about GDPR when automating with AI in your firm, these are the ones.
1. Clear legal basis (art. 6 GDPR)
Before processing a single piece of data with AI, you need to know why you can legally process it. Art. 6 GDPR lists six possible legal bases: consent, contract performance, legal obligation, vital interests, public interest, and legitimate interest.
In a firm automating processes for clients, the usual basis is performance of the professional services contract: you process the client's data because the client has engaged you to perform a service (keeping the accounts, preparing deeds, handling an inheritance), and automating it is a way of delivering that service. You don't need to obtain specific additional consent to "use AI" in that processing, provided the automation is a way of fulfilling the contract and not a new, independent processing activity.
The catch: if the AI does something that is not part of the original engagement — for example, training a proprietary model on your client's data for use with other clients — that legal basis doesn't cover it. There you need another basis, typically legitimate interest with a documented balancing test, or explicit consent.
2. Data minimisation (art. 5.1.c GDPR)
You can process only the data strictly necessary for the purpose. If you're automating the classification of incoming invoices, you don't need to feed the full history of the client relationship into the AI system. Feed only the relevant fields.
This has practical implications when choosing a provider: a model that "needs to see the whole document to function" gives you a very different risk exposure than one that works on discretely extracted fields.
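To make that concrete, here is a minimal sketch of the field-whitelisting pattern. The invoice record, the field names, and the send_to_model() call are all hypothetical, not any particular provider's API; the point is that only the whitelisted fields ever leave your system:

```python
# Minimal sketch of data minimisation before an AI call (art. 5.1.c).
# The record, the field names, and send_to_model() are hypothetical;
# the pattern is what matters: whitelist fields, never forward the raw record.

REQUIRED_FIELDS = {"invoice_number", "issue_date", "net_amount", "vat_rate"}

def minimise(record: dict) -> dict:
    """Keep only the fields the automated task actually needs."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

invoice = {
    "invoice_number": "2026-0142",
    "issue_date": "2026-02-03",
    "net_amount": 1250.00,
    "vat_rate": 21,
    "client_name": "Example S.L.",   # not needed for this task
    "client_iban": "ES91...",        # never needed for this task
}

payload = minimise(invoice)
# send_to_model(payload)  # hypothetical call to your DPA-covered provider
```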
3. Transparency with the client (arts. 13 and 14 GDPR)
The client has the right to know that their data will pass through an automated system. You don't have to make them sign a specific consent if the legal basis is contractual, but you do have to inform them in your privacy policy or in the engagement letter: what processing you do, for what purpose, which processors are involved, how long you retain the data.
In practice, this means updating the firm's privacy policy when you start automating for real. It's not optional.
4. Technical and organisational security (art. 32 GDPR)
Art. 32 GDPR requires you to apply "appropriate technical and organisational measures to ensure a level of security appropriate to the risk". The text explicitly mentions:
- Pseudonymisation and encryption of personal data.
- The ability to ensure the confidentiality, integrity, availability, and resilience of processing systems.
- The ability to restore availability of and access to data in a timely manner after a physical or technical incident.
- Regular testing of the effectiveness of the measures.
In the AI context, this translates into concrete questions: Does your provider encrypt data in transit and at rest? Can you pseudonymise before sending to the model? Do you have backups and a recovery plan? Do you run periodic tests?
If the answer to any of these is "I don't know", art. 32 is talking to you.
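As an illustration of the pseudonymisation question, here is a sketch of the basic pattern, assuming two deliberately naive regexes (a Spanish DNI and an IBAN) that a real pipeline would replace with proper entity detection. The tokens go to the model; the mapping stays on your infrastructure:

```python
# Sketch of pseudonymisation before sending text to a model (art. 32).
# The two regexes are illustrative only; real documents need a proper
# entity-detection step. The mapping never leaves your servers.

import re

PATTERNS = {
    "DNI": r"\b\d{8}[A-Z]\b",
    "IBAN": r"\bES\d{22}\b",
}

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Swap direct identifiers for tokens; return text plus the local mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(sorted(set(re.findall(pattern, text))), 1):
            token = f"[{label}_{i}]"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's answer, locally."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

safe, mapping = pseudonymise("Employee 12345678Z, account ES9121000418450200051332")
# The model only ever sees `safe`; `mapping` stays on your infrastructure.
```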
5. Data processor (art. 28 GDPR)
When you contract an external provider — an AI vendor, a cloud service, or a consultancy — to process personal data on behalf of your firm, that provider becomes a data processor. Art. 28 GDPR requires:
- A written contract (the famous Data Processing Agreement, or DPA) governing the processing.
- The processor must handle data only on documented instructions from the controller.
- Confidentiality of the processor's personnel.
- Security measures equivalent to those in art. 32.
- Regulation of sub-processors.
- Assistance to the controller in handling data-subject rights and security breaches.
- Deletion or return of data at the end of the engagement.
- Making available all information needed to demonstrate compliance, including audits.
In plain English: without a signed DPA with your provider, you cannot give them your clients' personal data. Period. It doesn't matter if the tool is "just a test" or a "free chat": no DPA, no data.
This is, in our experience, the point most often broken at firms that start experimenting with AI on their own. Using free ChatGPT with a client's file pasted into the chat is exactly the case art. 28 was designed to prevent: the provider hasn't signed anything with your firm, and that data goes to their servers under the service's terms, not yours.
What you can and cannot automate
With the five principles in mind, let's get concrete.
You can automate:
- Structured data extraction from documents — invoices, payroll, deeds, contracts — using an EU-hosted system with a signed DPA. For example, preparing data to feed the invoicing software we looked at in the VERIFACTU 2026 guide: automating input to your invoicing system is a classic use case.
- Classification and routing of emails and documents: this email is an invoice, that one is a query, the third is a court file (see the sketch after this list).
- Drafts of letters, contracts, and memoranda from the firm's internal templates, provided the model is hosted on controlled infrastructure.
- Summaries of long documents so the professional reviews and decides faster.
- Calculations and data cross-checks to prepare filings, settlements, or reports.
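For the classification and routing case, the pattern looks like this in skeleton form. classify_document() is a stand-in for whatever model your DPA-covered provider exposes; the categories and queue names are invented for the example:

```python
# Sketch of the classify-and-route pattern for incoming documents.
# classify_document() is a placeholder for the model call; the labels
# and queue names are hypothetical.

ROUTES = {
    "invoice": "accounting_queue",
    "query": "client_desk_queue",
    "court_filing": "litigation_queue",
}

def classify_document(text: str) -> str:
    """Placeholder classifier; in production this calls your provider."""
    lowered = text.lower()
    if "invoice" in lowered:
        return "invoice"
    if "court" in lowered or "hearing" in lowered:
        return "court_filing"
    return "query"

def route(text: str) -> str:
    """Anything the classifier can't place goes to a human, not a guess."""
    return ROUTES.get(classify_document(text), "manual_review_queue")

print(route("Invoice 2026-0142 attached"))  # accounting_queue
```

The design choice worth copying is the fallback: anything the classifier can't place lands in front of a person, not in the wrong queue.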
You cannot (or shouldn't without extra controls):
- Paste real files into ChatGPT or another AI service without a Data Processing Agreement. It's not a philosophical question — it's literally breaking art. 28.
- Transfer personal data to providers outside the European Economic Area without ensuring a valid transfer mechanism: a European Commission adequacy decision, standard contractual clauses, or binding corporate rules. At the time of publication, the European Commission maintains adequacy decisions in force with Andorra, Argentina, Brazil, Canada (commercial organisations), the Faroe Islands, Guernsey, Israel, the Isle of Man, Japan, Jersey, New Zealand, the Republic of Korea, Switzerland, Uruguay and the European Patent Organisation, plus the United Kingdom (renewed in December 2025) and the United States (limited to organisations participating in the EU-US Data Privacy Framework, adopted in July 2023). The Commission reviews these decisions periodically — always verify the current list when contracting providers in any of these jurisdictions.
- Make decisions that significantly affect the client based solely on automated processing, without human intervention and without informing the client. This is governed by art. 22 GDPR (covered in the next section).
- Use special categories of data — health, sexual orientation, religious beliefs, biometric data; art. 9 GDPR — in automations without reinforced justification and additional measures. Firms handling employment, criminal, or healthcare-adjacent matters run into these constantly: take special care.
The key point: the right question is not "can I use AI?", but "which provider, under what contract, with what data, with what controls?".
Penalties and art. 22 on automated decisions
Two GDPR details worth keeping in mind.
Art. 22: automated decisions and human intervention
Art. 22 GDPR establishes that every person has the right not to be subject to a decision based solely on automated processing — including profiling — which produces legal effects on them or similarly significantly affects them.
There are exceptions: explicit consent, contractual necessity, legal authorisation. But even when they apply, the controller must adopt suitable measures to safeguard the data subject's rights, including human intervention and the right to obtain an explanation and contest the decision.
For a firm, this is good news, not bad: GDPR requires a human professional to review and validate when the decision affects the client. You couldn't "let the AI decide on its own" even if you wanted to. Automation at a firm is always "AI proposes, the professional decides" — not because Gradion says so, but because art. 22 GDPR requires it.
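In code terms, the pattern is a review gate rather than anything exotic. This sketch is illustrative (the statuses and field names are assumptions, not a prescribed implementation); the invariant that matters is that nothing reaches the client without a recorded human decision:

```python
# Sketch of the "AI proposes, the professional decides" pattern (art. 22).
# Statuses and field names are hypothetical. Requires Python 3.10+ for
# the `X | None` annotation syntax.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Draft:
    content: str
    status: str = "pending_review"      # never "final" at creation
    reviewed_by: str | None = None
    reviewed_at: datetime | None = None

def approve(draft: Draft, professional: str) -> Draft:
    """The only path to 'final' runs through a named human reviewer."""
    draft.status = "final"
    draft.reviewed_by = professional
    draft.reviewed_at = datetime.now(timezone.utc)
    return draft

def send_to_client(draft: Draft) -> None:
    if draft.status != "final" or draft.reviewed_by is None:
        raise PermissionError("art. 22: human review required before sending")
    # ...actual delivery would happen here
```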
Penalties (art. 83 GDPR)
GDPR penalties are regulated in art. 83 and structured in two tiers:
- Lower tier: up to €10 million or 2% of total annual global turnover of the preceding financial year, whichever is higher. Applies to breaches of the obligations of controllers and processors (arts. 8, 11, 25-39, 42 and 43), of certification bodies, and of monitoring bodies.
- Higher tier: up to €20 million or 4% of total annual global turnover of the preceding financial year, whichever is higher. Applies to breaches of the basic processing principles (arts. 5, 6, 7 and 9), data subject rights (arts. 12-22) and international transfers (arts. 44-49).
Practical translation: the serious breaches — processing without a legal basis, ignoring client rights, transferring data to third countries without safeguards — fall in the higher tier. A small firm is unlikely to pay €20 million, but the AEPD grades penalties, and any six-figure sum is a bad day.
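Since the "whichever is higher" rule is often misread, here is the cap arithmetic spelled out. These functions simply restate the art. 83 ceilings; actual fines are graded case by case and land far below the cap for most firms:

```python
# The art. 83 caps are "whichever is higher" of a flat amount and a share
# of global annual turnover. These are statutory ceilings, not typical fines.

def lower_tier_cap(turnover_eur: float) -> float:
    return max(10_000_000, 0.02 * turnover_eur)

def higher_tier_cap(turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * turnover_eur)

# A firm with 2M turnover: 4% is 80,000, so the ceiling stays at the flat 20M.
print(higher_tier_cap(2_000_000))      # 20000000
# A group with 1bn turnover: 4% is 40M, which exceeds the flat 20M.
print(higher_tier_cap(1_000_000_000))  # 40000000.0
```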
On top of the financial penalty, the AEPD publishes its resolutions. The reputational damage at a professional firm — where trust is everything — is usually worse than the fine itself.
GDPR and AI FAQ
Can I use ChatGPT with my clients' data?
Not with the free version of ChatGPT. OpenAI does not sign a Data Processing Agreement with free users, and the terms of service permit uses of the data that a firm cannot authorise on behalf of its client. With enterprise versions — ChatGPT Enterprise, Azure OpenAI, and similar — that do offer a DPA and data residency controls, it is possible, but you have to read the contract carefully and verify where the data is processed.
Do I have to sign something with the AI provider?
Yes: a Data Processing Agreement (art. 28 GDPR), commonly called a DPA. If the provider doesn't offer one or won't sign one with you, you cannot share your clients' personal data with them.
What if the AI makes a decision that affects my client?
Art. 22 GDPR requires you to ensure human intervention when the decision significantly affects the data subject. At a firm, this translates to the professional reviewing and signing what the AI prepares. There cannot be a final automated decision without a human in the loop.
Do I need to run a Data Protection Impact Assessment (DPIA) to automate?
It depends on the risk. Art. 35 GDPR requires a DPIA when processing entails a high risk to data subjects' rights, typically for large-scale processing, special categories of data, systematic monitoring, or significant automated decisions. A labour-law firm automating the classification of medical leave notes with AI probably needs a DPIA. A tax firm automating invoice intake may not, but a documented preliminary analysis is wise either way.
And if the provider is in the United States?
International transfers of personal data outside the European Economic Area are regulated in arts. 44-49 GDPR. For the United States specifically, the European Commission adopted in July 2023 an adequacy decision limited to commercial organisations participating in the EU-US Data Privacy Framework (DPF). In other words: a transfer to a US provider benefits from the decision only if that specific provider is signed up to the DPF. Providers outside the DPF still need standard contractual clauses and a transfer impact assessment (TIA). The Commission publishes periodic reviews of the framework — the most recent was in October 2024 — and its validity remains subject to review. The conservative recommendation is still: providers with EU infrastructure where possible; and where not possible, verify DPF enrolment and, in parallel, SCCs + TIA as a backup.
How we're solving this at Gradion
In Gradion projects, GDPR isn't something added at the end — it's part of the design from day one. We have three operating technical principles that respond directly to the five GDPR principles above:
- Data in the European Union. Gradion's infrastructure is hosted in EU data centres. There are no implicit international transfers when you engage us for a pilot — your firm's data does not leave the European Economic Area.
- Per-client isolation. Every firm working with Gradion has its own separated environment. There is no mixing of data across clients and no reuse of documents to train shared models. Gabriel Naranjo, co-founder and Cloud Architect certified on Azure, AWS, Google Cloud, and Oracle, is the technical guarantor of this architecture.
- Data Processing Agreement signed from the start. Before your firm hands us a single piece of personal data, we sign the corresponding DPA. It's not a formality — it's the precondition.
On top of that, the automation we propose follows the art. 22 pattern: we prepare the draft, the professional reviews and signs. The AI doesn't decide for your client; it saves you the hours your team spends sorting papers, extracting data, and copying information between systems. Professional judgement stays at the firm.
Is your team losing 15 hours a week on paperwork?
We solve it in 10 days, with a fixed-price pilot. Data in the EU, per-client isolation, DPA signed from day one.
Tell us about your case →

