Why this matters now
GDPR Article 22 has been on the books since May 25, 2018 — but for the first five years of GDPR enforcement, most companies treated it as a niche provision aimed at credit-scoring algorithms, not at hiring tools. That changed.
Two things shifted the landscape:
- The Schufa ruling (December 2023). The Court of Justice of the EU held that the automated calculation of a credit score itself constitutes "automated individual decision-making" under Article 22 — even when a human technically makes the final lending decision based on that score. The reasoning extends directly to AI hiring tools that generate scores recruiters then "review."
- The EU AI Act's arrival put hiring AI on every compliance team's radar. Article 22 was already there; teams just hadn't been looking at it.
For HR teams using AI in any part of the EU hiring funnel, Article 22 is now a live obligation, not a theoretical one. This guide explains exactly what it requires, where the "solely automated" line really is, and how to evaluate vendors against it. It is not legal advice — talk to your counsel — but it is a practical map of what's been clearly established.
Article 22 in plain English
The text (Regulation (EU) 2016/679, Article 22(1)):
"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
Two phrases do most of the work:
"Solely automated processing." A decision made entirely by an algorithm with no meaningful human involvement. The exact line is contested (more on that below) but the intent is clear: pure machine output that determines an outcome.
"Legal effects or similarly significantly affects." Hiring decisions are squarely in this bucket. Recital 71 of the GDPR explicitly names "automated refusal of an online credit application or e-recruiting practices without any human intervention" as the kind of thing Article 22 targets. Not getting hired is a "similarly significant" effect.
The practical effect: where Article 22 applies, the candidate has a right to refuse the automated decision and demand human review.
The three exceptions (Article 22(2))
The right not to be subject to a solely-automated decision doesn't apply where the decision is:
- (a) Necessary for entering into or performing a contract between the candidate and the employer (e.g. a candidate-initiated application).
- (b) Authorised by EU or Member State law, with appropriate safeguards.
- (c) Based on the candidate's explicit consent.
Most AI hiring deployments rely on exception (a) — the candidate applied for the job, so screening them is "necessary for entering into a contract." But invoking the exception does not extinguish your obligations. It triggers a different set of duties.
The 'solely automated' trap: where most teams get this wrong
The most common mistake is assuming that any human in the loop takes you out of Article 22 entirely. The WP251rev.01 guidelines (adopted by the Article 29 Working Party, endorsed by the European Data Protection Board, and still the reference text for interpreting Article 22) are explicit: the human involvement must be meaningful.
To take the decision out of "solely automated" territory, the human reviewer must:
- Have actual authority and competence to change the decision.
- Consider all relevant data, not just the algorithm's output.
- Not just rubber-stamp the algorithm.
Three real-world hiring patterns and how they map:
Pattern 1: AI auto-rejects candidates below a score threshold. Solely automated. Article 22 applies in full: the deployment is lawful only under one of the Article 22(2) exceptions, and only with the Article 22(3) safeguards actually in place.
Pattern 2: AI scores candidates; a recruiter reviews the score and decides. Looks like human-in-the-loop, but in practice often isn't. If the recruiter rejects everyone the AI scored low and advances everyone the AI scored high, without independent assessment, the human is rubber-stamping. The Schufa logic says this is still solely automated for Article 22 purposes — the score is the decision.
Pattern 3: AI produces a structured assessment that the recruiter uses as one input alongside their own evaluation, with documented authority to override either way. This can clear the "meaningful human involvement" bar — but only if the override actually happens with some regularity, and only if the documentation shows independent reasoning.
The takeaway: "we have a recruiter in the loop" is not, by itself, an Article 22 defence. The shape of the human's involvement matters more than the org chart.
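The rubber-stamping problem in Pattern 2 is measurable. As a minimal sketch (all names here are hypothetical, and neither the GDPR nor the WP251 guidance fixes a numeric threshold for "meaningful" divergence), a team can monitor how often recruiters actually diverge from the AI's recommendation:

```python
from dataclasses import dataclass

@dataclass
class ScreeningRecord:
    ai_recommends_advance: bool   # AI score was above the advance threshold
    recruiter_advanced: bool      # what the recruiter actually decided

def override_rate(records: list[ScreeningRecord]) -> float:
    """Share of decisions where the recruiter diverged from the AI output.

    A rate at or near 0.0 over a meaningful sample is a signal that the
    'human in the loop' may be rubber-stamping (Pattern 2), which post-Schufa
    risks the process being treated as solely automated under Article 22.
    """
    if not records:
        return 0.0
    overrides = sum(
        r.ai_recommends_advance != r.recruiter_advanced for r in records
    )
    return overrides / len(records)

records = [
    ScreeningRecord(True, True),
    ScreeningRecord(False, False),
    ScreeningRecord(False, True),   # recruiter overrode a low AI score
    ScreeningRecord(True, True),
]
print(f"override rate: {override_rate(records):.2f}")  # 1 of 4 → 0.25
```

A sustained rate near zero over a large sample doesn't prove rubber-stamping on its own, but it is exactly the kind of evidence a data protection authority could read that way, and a documented non-zero rate is useful evidence for the Pattern 3 defence.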
The required safeguards
Where Article 22 applies and you're relying on the contract or consent exceptions (most hiring contexts), Article 22(3) requires the data controller to "implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision."
In practice, that means three things you owe every affected candidate:
- The right to a human review that meets the meaningful-involvement bar described above.
- The right to express their view — to provide context the algorithm didn't have.
- The right to contest the decision — through a documented process, not a vague "email us if you have questions."
These rights shouldn't be hidden behind an opaque appeals process. The clearest implementations surface them in the candidate-facing notice and the rejection email.
A worked hiring scenario
Consider an EU candidate, Léa, who applies to a remote sales role at a US-based company that uses an AI screening tool.
The AI scores her profile. Her score is below the company's auto-advance threshold. She receives a rejection email two days later.
Article 22 applies — Léa is in the EU, the decision was solely automated (no human reviewed her case), and the effect is significant (she didn't get the job).
What Léa is entitled to do:
- Request that a human review the decision (Article 22(3)).
- Request meaningful information about the logic of the algorithm (Article 13(2)(f) and Article 15(1)(h)).
- Express her point of view — for instance, that her score didn't reflect a relevant gap in her CV (Article 22(3)).
- Contest the decision through the company's documented appeals process.
What the company owes her in response:
- Acknowledgement within a reasonable time (typically within one month under Article 12).
- A genuine human review, not a copy-paste rejection from a different recruiter.
- A written explanation of the algorithm's role and the main elements of the decision (this overlaps with the EU AI Act's Article 86 right to explanation).
What happens if the company ignores her requests:
- A complaint to her national data protection authority (DPA).
- Potential investigation, formal warning, and — depending on severity and pattern — fines under GDPR Article 83.
For most companies, the operational gap isn't the legal obligation — it's having an actual workflow that handles the candidate request when it arrives. Most hiring teams have never received one and have no template ready.
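A sketch of what that workflow can look like as a tracked record (stage names, the 30-day approximation of Article 12's one-month deadline, and all identifiers here are illustrative, not prescribed by the GDPR):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical sketch of an Article 22(3) human-review request record,
# tracking an intake -> assignment -> review -> response workflow with
# an audit trail.
STAGES = ["received", "assigned", "under_review", "responded"]

@dataclass
class ReviewRequest:
    candidate_id: str
    received_on: date
    stage: str = "received"
    audit_trail: list = field(default_factory=list)

    @property
    def response_due(self) -> date:
        # Article 12(3): respond without undue delay, at the latest within
        # one month (extendable in limited cases). Approximated as 30 days.
        return self.received_on + timedelta(days=30)

    def advance(self, next_stage: str, note: str) -> None:
        # Stages must be traversed in order; each step is logged.
        if STAGES.index(next_stage) != STAGES.index(self.stage) + 1:
            raise ValueError(f"cannot move from {self.stage} to {next_stage}")
        self.stage = next_stage
        self.audit_trail.append((next_stage, note))

req = ReviewRequest("cand-042", date(2026, 3, 1))
req.advance("assigned", "routed to a recruiter independent of the original screen")
req.advance("under_review", "full application reviewed, not just the AI score")
req.advance("responded", "written explanation of decision and algorithm's role sent")
print(req.stage, req.response_due)  # responded 2026-03-31
```

The point of the sketch is the audit trail: when a DPA asks whether the human review was genuine, the record of who reviewed what, and on what basis, is the answer.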
The Schufa ruling and what it broadened
In December 2023, the CJEU ruled in case C-634/21 (SCHUFA Holding) that the automated calculation of a credit score by a credit reference agency itself constitutes "automated individual decision-making" under Article 22 — even though the bank, not the credit agency, technically made the final lending decision based on that score.
The court's reasoning: where the score plays a determining role in a downstream decision, the score itself is the automated decision for Article 22 purposes. The downstream "human" decision-maker is materially constrained by the upstream algorithmic output.
The implications for AI hiring tools are direct. If your AI scoring vendor produces a candidate score that materially determines who advances, the vendor's score is itself an Article 22 decision, regardless of whether your recruiter formally "approves" the outcome. That pulls both the vendor and the employer (the data controller) into the obligation chain. Notably, the CJEU treated SCHUFA itself as a controller for the scoring, so a vendor's "we're just a processor" framing may not hold either.
Vendors who claim "we just provide a score, the customer makes the decision" do not insulate the employer from Article 22, and post-Schufa, they probably don't insulate themselves either.
The related transparency obligations: Articles 13, 14, 15
Article 22 is not the only GDPR provision that bites here. Three related provisions impose disclosure duties even when Article 22 itself doesn't apply:
- Article 13(2)(f). When you collect personal data directly from candidates, you must tell them whether automated decision-making (including profiling under Article 22) is being used, and provide "meaningful information about the logic involved, as well as the significance and the envisaged consequences."
- Article 14(2)(g). Same requirement when you obtain personal data about a candidate from a third party (e.g. a sourcing platform).
- Article 15(1)(h). A candidate can submit a data subject access request (DSAR) at any time and is entitled to receive the same information.
"Meaningful information about the logic involved" doesn't require disclosing trade secrets, but it does require explaining the system in terms a non-technical person can understand: what factors it considers, what kind of output it produces, and how that output influences hiring decisions.
A candidate notice that says "we use AI to evaluate your application" is not enough. A candidate notice that explains "we use an AI voice interview platform that scores your responses across five named criteria — communication clarity, technical depth, problem-solving approach, role-relevant experience, and behavioural fit — and a recruiter reviews the score before any decision is made" is closer to the bar.
How GDPR Article 22 stacks against other regimes
The compliance picture for AI hiring is now genuinely multi-layered. The quick read:
- vs EU AI Act: The AI Act's Article 86 essentially codifies a GDPR Article 22-style explanation right specifically for high-risk AI decisions. They overlap, but the AI Act adds structural obligations (risk management, documentation, post-market monitoring) that GDPR Article 22 doesn't reach. After August 2026, hiring AI must satisfy both.
- vs NYC Local Law 144: LL144 focuses on aggregate bias-audit disclosure and candidate notice. GDPR Article 22 focuses on individual rights — review, explanation, contestation. Different layers of the same problem; both can apply to the same role, e.g. when a NYC employer screens candidates who are located in the EU.
- vs the rest of GDPR: Article 22 sits within a broader transparency and rights framework. A vendor who claims "Article 22 doesn't apply to us" needs to also explain how they meet Articles 13, 14, 15, and 5(1)(a) (the lawful, fair, and transparent processing principle).
For a company hiring across the EU + NYC + the rest of the US, the cleanest mental model is: build the strongest single workflow once, then map the artifacts (notices, audits, explanations) into each regulator's required format.
A vendor evaluation framework specific to GDPR Article 22
When evaluating an AI hiring vendor for EU deployment, you should be able to get clean answers to the questions below. (For the broader bias-and-quality checklist, see our audit checklist for AI hiring vendors.)
| # | Question | What a defensible answer looks like |
|---|---|---|
| 1 | Is your scoring output the deciding factor in hiring decisions, or one of several inputs? | Specific answer with documented thresholds, plus an honest read on the Schufa implication for their architecture. |
| 2 | What does the candidate notice template say about automated decision-making? | Includes the existence of automated decision-making, meaningful information about the logic, and the significance/envisaged consequences (Articles 13/14). |
| 3 | What workflow does the platform support for an Article 22(3) human-review request? | A documented intake → assignment → review → response workflow with audit trail. Not "the recruiter's inbox." |
| 4 | What information do you provide to a candidate exercising an Article 15(1)(h) access request? | The categories of data, the logic, the significance, the envisaged consequences — in plain language. |
| 5 | Do you act as a data processor or data controller? | Specific answer with a current Data Processing Agreement covering the standard Article 28 terms. |
| 6 | Where is candidate data stored, and what cross-border transfer mechanisms apply? | EU residency option, or documented transfer mechanism (SCCs, adequacy decision, etc.). |
| 7 | What's your retention policy for candidate data and per-decision logs? | Aligned with the legal retention bar in the relevant jurisdiction, not "indefinite." |
| 8 | What happens to candidate data when a candidate is rejected? | Documented retention period, then deletion or anonymisation. Not "forever." |
A vendor who treats GDPR Article 22 as the EU AI Act's poor cousin is not a vendor you want screening EU candidates in 2026.
How ARIA approaches Article 22
ARIA was designed against this regulatory bar from the start, not retrofitted to meet it. Concretely:
- Hiring teams decide, not the AI. ARIA produces a structured rubric-scored assessment that hiring teams use as one input alongside their own evaluation. The architecture is built so the human review can be — and is documented to be — meaningful in the WP251 sense.
- Per-decision audit logs and transcripts. Every interview produces a transcript, recording, and rubric-scored breakdown — the raw material for a meaningful Article 13(2)(f) explanation, an Article 15(1)(h) access response, and an Article 22(3) human review.
- Candidate notice template with the required disclosures: the existence of automated processing, the logic in plain language, the significance, the right to human review, and the appeals path.
- EU data residency option for customers who need it, with current standard contractual clauses for any cross-border transfers.
- Data Processing Agreement that covers standard Article 28 terms, including assistance with Article 22 requests.
- No facial analysis, no emotion inference. The same architectural choice that satisfies EU AI Act Article 5 also reduces the surface area of the Article 22 explanation we owe candidates. Less black-box scoring, less to defend.
For a compliance-overview summary suitable for procurement and legal teams, see our about page. For a side-by-side with a legacy enterprise vendor's compliance posture, see our HireVue alternative breakdown.
Frequently asked questions
Does GDPR Article 22 apply if my company isn't based in the EU?
Yes, if you process the personal data of candidates located in the EU — even if your company, your servers, and your vendor are all outside the EU. GDPR's territorial scope (Article 3) is extraterritorial. The same Schufa logic applies: where the algorithmic output is determining, the location of the algorithm doesn't matter.
Can I rely on candidate "consent" as my Article 22(2)(c) basis?
Probably not safely. EDPB guidance treats consent in employment-adjacent contexts sceptically because of the imbalance of power between employer and candidate. Most lawful AI hiring deployments rely on Article 22(2)(a) — necessary for entering into a contract — rather than consent.
Is meeting LL144 audit requirements sufficient for Article 22?
No. They cover different layers. LL144 focuses on aggregate selection-rate disclosure and advance candidate notice. Article 22 focuses on individual candidate rights — review, explanation, contestation. A vendor who is LL144-audit-ready is partway there, not all the way. (We cover the practical overlap in our guide to building inclusive hiring processes.)
What if my vendor refuses to support Article 22 requests?
That's a controller-processor problem you need to solve before you sign. Article 28 requires the data processor (your vendor) to assist the controller (you) in fulfilling data subject rights. A vendor who can't or won't assist is structurally unable to be your processor for an EU deployment.
What's a typical Article 22 fine?
GDPR fines are tiered. Article 22 violations fall under the higher tier (up to €20M or 4% of global annual turnover, whichever is higher). National DPAs have wide discretion in assessing severity, scale, and intent, so fines vary significantly. The point is not the maximum — it's that the fine structure is large enough to register on a Fortune 500 P&L, the same as for any other significant GDPR violation.
Need an AI hiring platform that handles Article 22 requests cleanly?
ARIA was built against this regulatory bar from day one — meaningful human-in-the-loop architecture, per-decision audit trails for explanation requests, EU data residency option, and a Data Processing Agreement that actually assists you with candidate rights requests.
Start the 3-day free trial → or talk to our compliance team about your specific EU deployment.