AEPD and AI: what Spain's regulator expects from your firm
by Ivor Padilla
Co-Founder & Engineering Director

By Ivor Padilla and Gabriel Naranjo, co-founders of Gradion · Published on 10 April 2026 · Last updated: 10 April 2026 · 21 min read
The AEPD — Spain's Agencia Española de Protección de Datos — is the authority that inspects, fines and publishes hard criteria on how artificial intelligence can (and cannot) be used when personal data is involved. It is not an abstract European body: it is the regulator that will put your firm's name on a binding resolution if things go wrong. Since 2020 it has been publishing AI-specific guidance, and if you read it carefully it tells you exactly which documents you need to have ready before an inspector knocks on the door.
This post lays out what the AEPD actually requires when your law firm, accountancy practice or notary introduces AI into a workflow — draft deeds, invoice classification, data extraction from client files, whatever it is. And which documents they will ask for first if your firm ever lands on their radar.
TL;DR: The AEPD does not prohibit the use of AI in professional services firms. What it requires is that you can demonstrate, on paper and with audit trails, that there is a lawful basis for the processing, that you have carried out a Data Protection Impact Assessment (DPIA) when one is required (art. 35 GDPR), that there is meaningful human intervention in any automated decision with significant effect (art. 22 GDPR), that you have a signed data processing agreement with your provider (art. 28 GDPR), and that your record of processing activities is up to date. Fines can reach €20 million or 4% of worldwide annual turnover — whichever is higher — under art. 83.5 GDPR.
What the AEPD is (and why it matters even if you only use AI "for the invoices")
The Agencia Española de Protección de Datos is the independent supervisory authority in Spain for everything covered by the General Data Protection Regulation (GDPR) and by Spain's Organic Law 3/2018 on the Protection of Personal Data and the guarantee of digital rights (LOPD-GDD). In the industry it is known simply as "la Agencia" — the Agency.
In practical terms, the AEPD does three things that affect your firm directly:
- It publishes criteria. Guidance notes, legal reports, position papers. Since 2020, specifically about how to square AI with the GDPR.
- It handles complaints. Any client of yours can lodge a complaint if they believe their data has been handled improperly. The AEPD will investigate it.
- It inspects and fines. Art. 58 of the GDPR grants it broad investigatory powers: it can request information, access premises, examine processing systems. Art. 83 sets the sanction regime.
An important nuance: the AEPD is not the only authority regulating AI in Spain. Since late 2023 there has also been AESIA (Agencia Española de Supervisión de la Inteligencia Artificial), created by Royal Decree 729/2023 of 22 August, which approved its statute, published in the Spanish Official Gazette on 2 September 2023, and headquartered in A Coruña. AESIA is the Spanish supervisory body for the EU AI Act: prohibited practices, high-risk systems, governance of foundation models. The AEPD continues to deal with anything involving personal data, which is practically everything an AI system does inside a firm.
In practice, when you automate with AI in your firm, both authorities will usually apply in parallel: the AEPD because of personal data, AESIA because of the AI system. But it is the AEPD that has more than three decades of form inspecting and fining. It is the one that will reach you first.
The AEPD's position on AI: what it has published since 2020
The AEPD started publishing dedicated AI material in 2020. This is not scattered doctrine or opinion pieces: it is reference documentation that an inspector will cite if a file is opened against you.
The most relevant documents for a firm that wants to automate are the following:
- "Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción" ("Aligning AI-based processing with the GDPR. An introduction", February 2020). The foundational paper. It introduces the criteria the AEPD applies when assessing whether an AI-based processing activity is compatible with the GDPR. It is still the starting point.
- "Requisitos para Auditorías de Tratamientos que incluyan IA" ("Audit Requirements for Processing Activities Involving AI", January 2021). The same angle from a "how to audit" perspective for systems already in production. It sets out concrete control objectives on algorithm inventory, transparency, proportionality, data quality and bias management.
- "Lists of types of processing requiring a data protection impact assessment (art. 35.4 GDPR)" (September 2019). It expressly includes large-scale profiling and processing that uses novel technologies at scale. If your AI falls into those categories, the DPIA is not optional.
- The "Innovation and technology" section on aepd.es and the infographic document "Processing involving Artificial Intelligence (AI)", where the Agency summarises its main criteria in visual form.
If you read all of this together, one thing becomes clear. The AEPD is not opposed to AI. It is opposed to AI without documentation. The distinction matters enormously. What the Agency asks — and repeats in every document — is that the data controller must be able to demonstrate, in writing and with records, that they have thought about the impact on people's rights before putting the system into production. The message is not "do not do it." The message is "do not do it blindly."
If you do not yet have a clear picture of the underlying framework, it is worth pausing over the post in which we break down the five GDPR principles as they apply to AI-based automation in firms. This post assumes that framework and goes straight to what the AEPD places on top of it.
The requirements the AEPD expects from any AI-based processing
If you distil the AEPD's guidance notes and cross-reference them with the GDPR articles it cites most often, nine requirements emerge that any processing involving AI — whether it is a firm classifying client files or a bank scoring credit risk — must satisfy. These are not "best practices"; they are the standard the Agency will apply if it looks at your system.
- A clear lawful basis before you start (art. 6 GDPR). Before a single piece of personal data goes into a model, you need to know which lawful basis under art. 6 you are relying on: consent, performance of a contract, legal obligation, legitimate interests, and so on. And it has to be documented. If the basis is legitimate interests, you also need the balancing test on record.
- Data minimisation (art. 5.1.c GDPR). Do not feed the model more data than strictly necessary for the purpose. The AEPD is very explicit on this: "client data may be available, but that does not mean you have to use all of it." If you can train or process with less, do so.
- Meaningful transparency to the data subject (arts. 13 and 14 GDPR). The client has the right to know that automated processing is involved, what its purpose is, what basic logic it follows and what the likely consequences are. Burying this in the small print of a service agreement does not cut it.
- A DPIA when one is required (art. 35 GDPR). A Data Protection Impact Assessment is mandatory when the processing is "likely to result in a high risk to the rights and freedoms" of individuals. The AEPD's list explicitly includes automated profiling and decisions with legal effect. If your AI is in that territory, the DPIA is done before the system goes live, not after.
- Adequate technical and organisational security (art. 32 GDPR). Encryption in transit and at rest, role-based access control, pseudonymisation where possible, backups, a recovery plan, access logs. The AEPD does not prescribe specific technologies, but it does require proportionality to the risk.
- A data processing agreement with your provider (art. 28 GDPR). If you use an external AI provider — and most firms do, at least in part — you have to sign a DPA before any personal data leaves your system for theirs. Without a DPA there is no lawful processing, full stop. Our earlier post on GDPR and AI-based automation goes into what the DPA should contain and why it is not a formality.
- An up-to-date record of processing activities (art. 30 GDPR). The living document in which, as the data controller, you list which processing activities you carry out, what data you process, what the purpose is, how long you retain it, and with whom you share it. AI-based processing is not a vague bucket: it must appear as a specific, named entry in the record.
- Human intervention in automated decisions with significant effect (art. 22 GDPR). If a client can be affected by a fully automated decision that has legal or similarly significant consequences — for example, an outcome that influences their case — they have the right to request human review. In practice, in a firm this means: the AI proposes, a qualified professional reviews and signs off.
- Breach notification within 72 hours (art. 33 GDPR). If there is a security breach affecting personal data, you have 72 hours to notify the AEPD from the moment you become aware of it. If the breach also puts individuals at risk, you must inform them too (art. 34). AI does not change that deadline.
These nine points are the mould. Most of what the AEPD says about AI in more specific contexts is, essentially, the concrete form that each of these takes in particular cases.
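To make requirement seven concrete, here is a minimal sketch of what a specific, named entry for an AI workflow might capture in the art. 30 record, expressed as structured data. The field names and example values are our own illustration, not an official AEPD template:

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    """One illustrative entry in the art. 30 record of processing
    activities. Field names are a sketch, not an official format."""
    name: str                  # the specific, named AI-based activity
    purpose: str               # why the data is processed
    lawful_basis: str          # the art. 6 basis relied on
    data_categories: list      # what personal data goes in
    recipients: list           # who receives the data (e.g. the AI provider)
    retention: str             # how long the data is kept
    security_measures: list    # art. 32 measures applied

# Hypothetical example entry for an invoice-classification workflow
entry = ProcessingActivity(
    name="AI-assisted classification of incoming client invoices",
    purpose="Route each invoice to the correct matter file",
    lawful_basis="Art. 6.1.b GDPR - performance of the engagement contract",
    data_categories=["client name", "tax ID", "invoice amounts"],
    recipients=["EU-hosted AI provider under an art. 28 DPA"],
    retention="6 years after the end of the engagement",
    security_measures=["encryption in transit and at rest",
                       "role-based access control"],
)
```

Keeping the entry as structured data rather than loose prose makes it straightforward to export into whatever format the firm's record of processing activities actually lives in, and to show the inspector a named, specific entry on request.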
Do I actually need a DPIA?
If there is one question senior partners ask us more often than any other when we start a pilot, it is this. The short answer — in any firm that handles client data — is almost always yes, and it is worth doing even when the law does not strictly require it. The long answer:
Art. 35.1 GDPR mandates a DPIA when processing "is likely to result in a high risk to the rights and freedoms of natural persons". Art. 35.3 lists three automatic triggers:
- Systematic and extensive evaluation of personal aspects based on automated processing, including profiling.
- Large-scale processing of special categories of data (art. 9) or data relating to criminal convictions and offences (art. 10).
- Systematic monitoring of a publicly accessible area on a large scale.
Art. 35.4 says each supervisory authority must publish its own list of processing types that require a mandatory DPIA. The AEPD did this in September 2019 with the document "Lists of types of processing requiring a data protection impact assessment (art. 35.4 GDPR)". That list explicitly includes:
- Large-scale profiling.
- Processing that uses biometric data to identify an individual uniquely.
- Processing involving automated decisions with legal or similarly significant effects.
- Use of special categories of data (arts. 9 and 10 GDPR) with novel technologies.
- Merging databases from operations with different purposes.
Translating that into a professional services day-to-day: if your automation extracts data from contracts, invoices or client files to feed a model that drafts deeds or classifies cases, you are in the territory where a DPIA is usually required, even when the output is only a proposal for a human to review later. A concrete example: even a seemingly "harmless" automation like classifying incoming invoices, the kind of workflow we describe in our 2026 VERIFACTU guide, handles client personal data on every invoice and therefore falls within the perimeter of the GDPR and the AEPD. And if your system makes any decision that may influence the client's matter (even as a recommendation), you are squarely inside art. 35.3.a.
One critical nuance: even when your specific use case does not fit the AEPD list to the letter, running the DPIA is the best insurance policy there is. It is the first document you will be asked to produce if the Agency inspects, and it is what protects you from client complaints. The cost of doing one is low; the cost of not having one when needed is very high.
Human intervention, transparency and bias: the three places where most firms fail
Of the nine requirements the AEPD imposes, three are the ones that cause most trouble in inspections and audits. They deserve their own section.
Real human intervention (not rubber-stamping)
Art. 22 GDPR says data subjects have the right not to be subject to a decision based solely on automated processing — including profiling — that produces legal effects concerning them or similarly significantly affects them. There are exceptions (explicit consent, performance of a contract, legal authorisation), but whenever art. 22 is invoked, the decision must be capable of meaningful human review.
Where firms get this wrong: calling a rubber stamp "human intervention". If the professional receives a list of 800 model-generated proposals and approves them in bulk without opening any, that is not human intervention under art. 22. The AEPD has said this explicitly on several occasions: the intervention must be meaningful, competent and with real capacity to change the outcome. If your workflow is "AI proposes, professional signs off", there has to be evidence that the professional read, understood and could have rejected each case.
At Gradion, the principle we apply is that every reviewable automated decision is logged with a timestamp, the reviewer's identity, the result of their review (accepted, modified, rejected) and a short justification whenever a modification changes the original proposal.
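That logging principle fits in a few lines of code. The sketch below is a minimal illustration under our own assumptions (a JSON-lines file as the log, hypothetical field names); it is not an AEPD-mandated format:

```python
import json
from datetime import datetime, timezone

def log_review(log_path, case_id, proposal, reviewer, outcome,
               justification=""):
    """Append one human-review record to a JSON-lines audit log.

    Illustrative sketch: the field set mirrors the principle described
    above (timestamp, reviewer identity, outcome, justification), not
    any official format.
    """
    if outcome not in ("accepted", "modified", "rejected"):
        raise ValueError(f"unknown outcome: {outcome}")
    # A modification that changes the proposal must carry a justification
    if outcome == "modified" and not justification:
        raise ValueError("a modification must carry a short justification")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "proposal": proposal,        # what the AI suggested
        "reviewer": reviewer,        # identity of the professional
        "outcome": outcome,
        "justification": justification,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record
```

The point of the append-only structure is that each entry is written at the moment of review, by the review workflow itself, which is what makes it credible evidence rather than after-the-fact reconstruction.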
Transparency the client can actually read
Arts. 13 and 14 GDPR require information to be given to the data subject. For automated decisions under art. 22, arts. 13.2.f and 14.2.g add that the information must include "meaningful information about the logic involved" and "the envisaged consequences" of the processing.
Where firms get this wrong: burying the information in the firm's generic privacy policy, halfway through paragraph 14, in legalese. The AEPD's criterion is effective transparency: the client must be able to understand, without technical advice, that automated processing is involved, what it does, and what they can request about it. The sensible approach is to have a separate, dedicated section about automation and AI, in plain language, and to refer to it when it is relevant.
Bias and data quality
Bias is not a philosophical concept; it is a legal matter with consequences. If your model was trained on data that carries biases based on origin, gender, age, nationality or any other protected characteristic, and that bias translates into unequal decisions for people in comparable situations, you have a direct discrimination problem on your hands.
The AEPD, in its AI guidance, explicitly requires the data controller to assess the quality of the training data and the possible sources of bias. This is not a soft recommendation: it forms part of the DPIA when one is required, and it is one of the first questions an inspector will ask. What data was the model trained on? Who reviewed it? What known biases have been identified and how have they been mitigated? If the answer is "I do not know, the provider trains it", the data controller is still you — and you will need to get that information from the provider contractually.
What actually happens in an AEPD inspection (and which documents come first)
The AEPD's investigatory powers sit in art. 58 GDPR. In practice, an inspection almost always begins with a written request: the Agency asks for documentation in writing, with a response deadline. If the response is insufficient or there are signs of infringement, it can escalate into an on-site visit.
For inspections involving AI-based processing, the documents we typically see requested first — drawing on the AEPD's published resolutions in the informes y resoluciones section of aepd.es and on the projects we have accompanied — are the following:
- The record of processing activities (art. 30 GDPR) in its current version, with the specific entry for the AI-based activity.
- The Data Protection Impact Assessment (DPIA) when one is required.
- The data processing agreement with any provider handling the data (art. 28 GDPR).
- The information given to data subjects — privacy policy, specific notices on automated processing, consent forms where applicable.
- The record of the lawful basis applied to the processing and, if it is legitimate interests, the documented balancing test.
- Evidence of the human intervention mechanism when there are automated decisions: workflow, logs, responsible parties.
- Security policies applied to the system (art. 32), including the technical risk assessment and the measures adopted.
- Breach notifications — if there have been any in the past 24 months.
If you look at the list, you will notice that it does not ask for "the AI code" or "the technical architecture". It asks for the administrative file on the processing activity. The AEPD does not inspect your AI — it inspects your folder. Having that folder in good order is, in practice, 80% of coming out of an inspection well.
This ties back to the earlier post on GDPR and AI-based automation in firms: the five principles we described there are exactly what the inspector will look for, documented one by one, inside this folder.
Recent AEPD fines with an AI angle: three cases worth knowing
The AEPD's sanction history on cases with an AI or large-scale automated processing angle is the best compass for understanding where the line is. Not because you will repeat those cases in your firm, but because the AEPD's legal reasoning will apply to yours as well.
Three reference cases:
- Mercadona and facial recognition in supermarkets. In its proceedings PS/00120/2021, resolved in July 2021, the AEPD fined Mercadona €3.15 million (reduced to €2.52 million after the voluntary payment discount) for deploying a facial recognition system in several stores, with the stated aim of identifying individuals subject to restraining orders. The Agency found violations of art. 6 (lawfulness of processing) and art. 9 (processing of special categories of data, including biometrics) of the GDPR, and argued that the system confused "usefulness" with "necessity": even if it might be useful, it was neither necessary nor proportionate. Essential reading if your firm ever considers any use of biometrics.
- Clearview AI and mass scraping of photographs. The US company has been fined by several European data protection authorities (France, Italy, Greece, United Kingdom) for mass scraping of public photographs from the internet to train its facial recognition database. In Spain, the AEPD initially archived a complaint on the matter; the Spanish National Court (Audiencia Nacional) ruled in 2022 that the AEPD was competent and had to admit the complaint under art. 3.2.b GDPR. The key point of the European reasoning: consent from the individuals cannot be inferred from the fact that the photos were "public" on the internet. The logic applies to anyone training a proprietary model by scraping publicly available data, wherever they are based.
- Worldcoin (Tools for Humanity) and iris scanning. In March 2024 the AEPD ordered a precautionary measure against Tools for Humanity Corporation, which runs the Worldcoin project, requiring the immediate cessation of the processing of biometric data (iris) in Spain and the blocking of the data already collected. The measure was notified on 4 March 2024 with a 72-hour deadline for compliance, had a maximum duration of three months, and was subsequently upheld by the Spanish National Court, which held that the protection of the fundamental right to data protection prevailed over the particular interest of the company. The case is interesting because it shows how quickly the Agency can move: it did not wait until the end of a sanctioning procedure.
All three cases share one thing: they all involve large-scale automated processing of sensitive data without a clear compliance file. None of them applies directly to you if your automation is classifying client files — but the mental framework the AEPD applies certainly does.
Frequently asked questions about the AEPD and AI
Do I need to notify the AEPD before I start using AI in my firm?
There is no general prior-notification duty. However, if your processing requires a DPIA and, after carrying it out, the residual risk is still high, art. 36 GDPR requires prior consultation with the AEPD before the processing begins. In most firm-grade automations the DPIA concludes with low residual risk, which means this consultation is not necessary. But you have to do the DPIA first to find that out.
What exactly is a DPIA, and when is it mandatory for AI?
A DPIA is a documented analysis of how a processing activity affects people's rights and freedoms, what risks it introduces, and what measures you will adopt to mitigate them. It is mandatory whenever the processing entails a high risk (art. 35 GDPR) and, specifically, when it falls within the list published by the AEPD (art. 35.4). For AI-based processing, the DPIA has to cover the training data, known biases and the human intervention mechanisms.
Can the AEPD inspect me without prior notice?
In theory, yes. Art. 58 GDPR gives it the power to access premises and equipment. In practice, inspections usually start with a written information request to which you respond with documentation. On-site visits exist but are less common and tend to be preceded by files with clear signs of infringement.
What is the difference between the AEPD and AESIA?
The AEPD is Spain's data protection authority: it covers anything involving personal data and compliance with the GDPR and LOPD-GDD. It has been doing this since 1994. AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) is the Spanish supervisory body for the EU AI Act, created in late 2023, dealing with prohibited practices, high-risk systems and the specific obligations of the AI Act. In most firm use cases, both apply in parallel: the AEPD for the personal data, AESIA for the AI system itself. The one most likely to inspect you in the first years is the AEPD.
Can I use a US-based AI provider if I sign the DPA?
The DPA (art. 28 GDPR) is a necessary condition, but not a sufficient one. For transfers of personal data to third countries outside the EEA, you also have to comply with Chapter V of the GDPR. For the United States specifically, the European Commission adopted the EU-U.S. Data Privacy Framework on 10 July 2023, which allows transfers to providers certified under the framework; the Commission published its first periodic review in October 2024 and, as of publication of this article, the decision remains in force. The alternatives are standard contractual clauses approved by the Commission, binding corporate rules, or the narrow derogations under art. 49. Check the current status on the European Commission's adequacy decisions page before signing anything, not after.
What record do I need to keep of automated decisions?
There is no official format, but to sustain a "meaningful" human intervention under art. 22 GDPR and to be able to respond to an inspection, the reasonable minimum is to log, for each decision: timestamp, case or file identifier, the automated system's proposal, the identity of the professional reviewing it, the outcome of the review (accepted, modified, rejected) and a short justification whenever the final decision departs from the proposal. All of this must be extractable as a legible report in response to a formal request.
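As an illustration of the "legible report" requirement, here is a minimal sketch that summarises such a JSON-lines log per reviewer. The field names (`reviewer`, `outcome`) are our own assumption about how the log is structured, not an official format:

```python
import json

def review_report(log_path):
    """Summarise a JSON-lines review log into a legible report:
    per-reviewer counts of accepted / modified / rejected decisions.

    Sketch under the assumption that each line is a JSON object with
    at least a "reviewer" and an "outcome" field.
    """
    totals = {}
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            per = totals.setdefault(
                entry["reviewer"],
                {"accepted": 0, "modified": 0, "rejected": 0},
            )
            per[entry["outcome"]] += 1
    # One line per reviewer, readable without tooling
    return "\n".join(
        f"{reviewer}: " + ", ".join(f"{k}={v}" for k, v in counts.items())
        for reviewer, counts in sorted(totals.items())
    )
```

A report like this answers the inspector's first question (is the human intervention real, or bulk approval?) directly from the log, without touching the underlying model.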
What happens if my client invokes art. 22 and asks for human intervention?
You must grant it, unless one of the exceptions in art. 22.2 applies (consent, performance of a contract, legal authorisation), and even then the client retains the right "at least" to contest the decision, to express their point of view and to obtain human intervention. In practice, this means your workflow has to allow, by design, for a decision to be reviewed by a professional with real capacity to change it. If it does not, you are not compliant with art. 22.
How we are solving this at Gradion
In the projects we have worked on with firms, the pattern repeats itself: the team is technically compliant — data is properly encrypted, access is controlled, servers are in the European Union — but when we ask for the documentary evidence of each of those things, it turns up half-finished. The record of processing activities has no specific entry for the AI. The provider agreement was signed but never revisited when the provider released a new feature. The DPIA is a half-written Google Doc that no one completed. The information given to clients is buried in the generic privacy policy, with no dedicated section on automation. Technically compliant; demonstrably not.
That is why, at Gradion, the compliance file is built in parallel with the automation, not afterwards. Every component that goes into production leaves three kinds of auditable trail by design:
- Processing trail. Every operation is logged with the data that went in, the lawful basis applicable to that operation, the specific purpose and the intended retention. The record of processing activities feeds off that, not the other way round. When the firm has to show its record in an inspection, the data is already there.
- Human intervention trail. Every AI-generated proposal that requires review is linked to a reviewer identifier, timestamp, outcome (accepted, modified, rejected) and — where the modification is substantive — the short justification the professional enters. This is not a parallel audit system; it is part of the review workflow itself. The firm has, with no extra effort, the evidence that art. 22 is being met.
- Deployment file. Before an automated workflow goes live, we produce the corresponding dossier: the DPIA where required, the technical risk analysis under art. 32, the signed data processing agreement, the updated client notice, and the entry in the art. 30 record. The firm receives that dossier as a deliverable of the pilot, not as "additional documentation" billed on the side.
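The deployment file lends itself to an automated go-live gate: the workflow only ships once every item of the dossier is present. A minimal sketch of that kind of check, with illustrative item names:

```python
# Illustrative dossier checklist; item keys and descriptions are our
# own sketch, not an official list of deliverables.
REQUIRED_DOSSIER = {
    "dpia": "Data Protection Impact Assessment (art. 35)",
    "risk_analysis": "Technical risk analysis (art. 32)",
    "dpa": "Signed data processing agreement (art. 28)",
    "client_notice": "Updated client information notice (arts. 13-14)",
    "ropa_entry": "Entry in the record of processing activities (art. 30)",
}

def missing_items(dossier: dict) -> list:
    """Return the dossier items that are absent or empty."""
    return [key for key in REQUIRED_DOSSIER if not dossier.get(key)]

def can_go_live(dossier: dict) -> bool:
    """A workflow ships only when no dossier item is missing."""
    return not missing_items(dossier)
```

The design choice is the point: compliance stops being a parallel task someone remembers at the end, and becomes a blocking condition of deployment.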
Gabriel Naranjo, co-founder and Cloud Architect certified in Azure, AWS, Google Cloud and Oracle — and a Microsoft Certified Trainer in cloud and security — is the technical guarantor of this architecture. The principle is simple: if, when the pilot ends, the senior partner cannot open a folder with the compliance file ready for inspection, the pilot is not finished.
It is a small shift in emphasis with large consequences. The partner stops seeing the AEPD as a vague threat and starts seeing it for what it really is: a regulator that asks for specific documentation, which you can prepare — and which you already have.
Is your team losing 15 hours a week to paperwork?
We can fix that in 10 days with a fixed-price pilot. Data in the EU, DPA signed from day one, AEPD compliance file ready before production.
Tell us about your firm →

