Why this matters now
The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, with obligations rolling out in phases through 2027. For HR teams using AI in any part of the hiring funnel, three things are now true:
- Hiring AI is explicitly classified as high-risk (Annex III, point 4).
- Some hiring-adjacent uses are flat-out prohibited (Article 5).
- Penalties scale with global turnover — up to €35M or 7% for prohibited-practice violations.
This guide explains exactly which provisions apply to AI hiring tools, what "high-risk" obligations look like in practice, and how to evaluate vendors against the new bar. It is not legal advice — talk to your counsel before relying on any specific interpretation — but it is a practical map of what's been clearly established.
The quick map: AI Act provisions that hit hiring
| Provision | What it covers | Why HR cares |
|---|---|---|
| Article 5 | Prohibited AI practices, including emotion recognition in workplaces | Rules out a class of vendor approaches entirely |
| Annex III, point 4 | Hiring AI as a high-risk system | Triggers the entire high-risk obligation set |
| Articles 9–15 | High-risk system requirements | Risk management, data governance, transparency, human oversight, accuracy |
| Article 26 | Obligations of deployers (i.e. you, the employer) | Even if the vendor builds the tool, you have your own duties |
| Article 86 | Right to explanation for affected persons | Candidates can demand explanations of decisions affecting them |
| Articles 99–101 | Penalties | Up to €35M / 7% of global turnover |
The provider (vendor) and the deployer (you) share obligations — neither can outsource compliance entirely to the other.
Annex III: why hiring AI is "high-risk"
Annex III, point 4 of the AI Act explicitly classifies as high-risk:
"AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates."
It also covers AI systems used to "make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, [or] allocate tasks based on individual behaviour or personal traits or characteristics."
In practice, that net catches:
- Resume parsers and applicant ranking tools
- AI-based programmatic job ad targeting
- Voice or video interview platforms that score candidates
- Skills-assessment platforms that produce hire/no-hire signals
- Workforce-planning tools that recommend who gets promoted, reassigned, or laid off
The provider's obligation is to design and document the system to clear the high-risk bar. The deployer's obligation is to use it within those guardrails — and to verify before deployment that the system is fit for purpose.
Article 5: the emotion recognition prohibition
Article 5 lists AI practices that are flatly prohibited regardless of how well-engineered they are. One of them is directly relevant to a swathe of hiring tools:
"AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons."
This is a narrow exception. "Medical or safety" does not cover "we want to know if the candidate is a confident communicator." Tools that analyze facial expressions to infer emotional states or "engagement" of candidates run head-on into this prohibition when used in hiring contexts in the EU.
This is why several major video-interview platforms have either discontinued or quietly removed emotion-inference features over the past several years. (HireVue discontinued its facial analysis in 2021, well before the AI Act passed — but the AI Act formalizes what regulators were already signalling.)
A vendor that still ships emotion inference for hiring needs a defensible Article 5 analysis. "We just call it engagement scoring" is not a defence.
The seven high-risk obligations, in plain English
Articles 9–15 set out what providers of high-risk AI systems must do. Stripped of legal language:
1. Risk management (Art. 9). Document the foreseeable risks, the mitigations, and the residual risk. Update across the system's lifecycle.
2. Data governance (Art. 10). Training, validation, and test data must be relevant, representative, and as free of errors as possible. Includes specific obligations to examine datasets for possible biases and to mitigate them.
3. Technical documentation (Art. 11). A complete technical file describing the system, its training, and its performance — sufficient for an auditor to assess compliance.
4. Record-keeping (Art. 12). Automatic logging of events during operation. For hiring AI: enough trace to reconstruct what happened in any given screening decision (see the log-record sketch after this list).
5. Transparency to deployers (Art. 13). Clear instructions for use. Affected persons (candidates) must be informed they're interacting with an AI system.
6. Human oversight (Art. 14). Designed so a human can understand outputs, override the system, and stop it if needed. "AI recommends, humans decide" is the directional answer.
7. Accuracy, robustness, cybersecurity (Art. 15). Performance levels declared and met. Resilient against errors, faults, and adversarial attacks.
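To make the logging duty concrete, here is a minimal sketch of what a per-decision record could capture, written in Python. Every name in it (ScreeningLogEntry, rubric_scores, and so on) is our illustration, not terminology from the Act; Article 12 specifies the goal (traceability), not the schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical per-decision log record. Field names are illustrative,
# not mandated by the AI Act; Article 12 requires automatic logging of
# events sufficient to trace how the system behaved.

@dataclass
class ScreeningLogEntry:
    candidate_id: str            # pseudonymous ID, not raw PII
    model_version: str           # which model/rubric version scored this candidate
    timestamp: str               # when the scoring event occurred (UTC)
    inputs_reference: str        # pointer to the stored transcript/CV, not a copy
    rubric_scores: dict          # named, interpretable metric -> score
    recommendation: str          # e.g. "advance", "reject", "needs_human_review"
    notice_acknowledged: bool    # candidate saw the AI-involvement notice first
    human_reviewer: str | None   # filled in when a human confirms or overrides
    override_reason: str | None  # free-text rationale if the human disagreed

entry = ScreeningLogEntry(
    candidate_id="cand-7f3a",
    model_version="rubric-v4.2",
    timestamp=datetime.now(timezone.utc).isoformat(),
    inputs_reference="s3://interviews/cand-7f3a/transcript.json",
    rubric_scores={"communication": 4, "role_knowledge": 3},
    recommendation="needs_human_review",
    notice_acknowledged=True,
    human_reviewer=None,
    override_reason=None,
)
```

Storing a pointer to the inputs rather than a copy keeps the log lean and limits PII sprawl; retention then becomes a question for the stored artifacts, not the log itself.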
For HR specifically, the practical consequences are:
- You should be able to ask a vendor for their technical documentation summary and get an actual answer.
- Every hiring decision should produce a logged trace you could hand to a regulator.
- Candidates need a plain-language notice (in advance) that AI is involved.
- A human reviewer must have meaningful override authority, not just a rubber-stamp role (a minimal gate is sketched just below).
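Continuing the sketch above, here is one way a deployer could operationalize the override requirement in code. The gate, its names, and the exception are hypothetical; Article 14 requires effective human oversight, not this particular mechanism.

```python
# Hypothetical decision gate: the AI output is never final until a named
# human reviewer records a decision and, on disagreement, a reason.
# One way to operationalize Article 14; not the only compliant design.

class OverrideRequired(Exception):
    """Raised when a decision is attempted without proper human sign-off."""

def finalize_decision(entry: ScreeningLogEntry, reviewer: str,
                      decision: str, reason: str | None = None) -> str:
    if not entry.notice_acknowledged:
        raise OverrideRequired("Candidate was not shown the AI notice (Art. 13).")
    if decision != entry.recommendation and not reason:
        raise OverrideRequired("Overrides must record a rationale for the audit trail.")
    entry.human_reviewer = reviewer
    entry.override_reason = reason if decision != entry.recommendation else None
    return decision  # the human's decision is the one that counts

final = finalize_decision(entry, reviewer="j.doe", decision="advance",
                          reason="Strong portfolio outweighs low rubric score.")
```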
When does each obligation kick in?
The Act is phased. Roughly:
| Date | What goes live |
|---|---|
| 2 February 2025 | Article 5 prohibitions (and AI literacy duties) apply |
| 2 August 2025 | Obligations on general-purpose AI models, governance bodies |
| 2 August 2026 | Most high-risk system obligations apply |
| 2 August 2027 | Full applicability, including remaining high-risk categories |
If you are deploying AI in hiring in the EU today, Article 5 prohibitions are already live and the high-risk obligations that catch most hiring tools land in August 2026. That is not a long runway.
Penalties: what's actually at stake
The Act's fine tiers (Articles 99–101):
- Up to €35M or 7% of global turnover — for violations of Article 5 prohibitions (whichever is higher).
- Up to €15M or 3% of global turnover — for violations of high-risk system obligations.
- Up to €7.5M or 1% of global turnover — for supplying incorrect, incomplete, or misleading information to authorities.
These are upper limits, not standard outcomes. But the structure makes the point: the EU is treating AI compliance the way it treats GDPR compliance, with fines designed to register on a Fortune 500 P&L.
Candidates also have direct rights. Article 86 grants any person affected by a high-risk AI decision the right to obtain "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken." If your AI hiring vendor cannot produce those explanations, neither can you.
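To show what servicing that right could look like operationally, here is a sketch that assembles a plain-language explanation from the per-decision record sketched earlier. The wording and the 1-to-5 scale are our assumptions, not a regulator-approved template.

```python
# Hypothetical Article 86 response: assembles a non-technical explanation
# from the per-decision log above. The Act asks for "clear and meaningful
# explanations of the role of the AI system ... and the main elements of
# the decision taken"; this phrasing is illustrative only.

def explain_decision(entry: ScreeningLogEntry) -> str:
    scored = ", ".join(f"{metric} ({score}/5)"
                       for metric, score in entry.rubric_scores.items())
    role = ("An AI system scored your interview against a fixed rubric; "
            "a human reviewer made the final decision.")
    return (
        f"{role}\n"
        f"Metrics the AI scored: {scored}.\n"
        f"AI recommendation: {entry.recommendation}. "
        f"Final decision made by: {entry.human_reviewer or 'pending human review'}."
    )

print(explain_decision(entry))
```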
A vendor evaluation framework specific to the EU AI Act
When evaluating an AI hiring vendor for EU deployment, here are the questions you specifically should ask. (For a broader bias-and-quality checklist, see our audit checklist for AI hiring vendors.)
| # | Question | What a defensible answer looks like |
|---|---|---|
| 1 | Do you classify your system as a high-risk AI system under Annex III, point 4? | Yes (with the specific rationale documented). Anything else suggests the analysis hasn't been done. |
| 2 | Can you provide your technical documentation in the form required by Annex IV? | Yes — at least a summary covering all eight Annex IV categories. |
| 3 | What does the candidate-facing notice say? Can I see the template? | Plain-language, in the candidate's language, surfaced before the AI interaction begins. |
| 4 | What logs do you produce per scoring decision, and how long do you retain them? | Per-decision logs sufficient to reconstruct the scoring rationale. Retention aligned with legal hold and statute-of-limitations needs. |
| 5 | Does any part of your scoring system infer emotions or analyze facial expressions? | A clear no. If yes, an Article 5 analysis explaining why the deployment is permissible. |
| 6 | What human override mechanism exists, and how does the deployer document its use? | A documented review step the deployer is responsible for, with an audit trail. |
| 7 | What is your conformity assessment status? | Either internal control (Annex VI) or notified-body assessment (Annex VII), as applicable. |
| 8 | What do you do for candidates who exercise their Article 86 right to explanation? | A documented process producing a non-technical written explanation within a defined SLA. |
A vendor who treats these as bureaucratic check-the-box questions is not a vendor you want screening your candidates in 2027.
How ARIA approaches each obligation
ARIA was designed against this regulatory bar from the start, not retrofitted to meet it. Concretely:
- No facial analysis, no emotion inference. Article 5 isn't a constraint we manage around — it's a design principle that made our scoring rubric better, not just compliant. Our voice interview platform uses voice signals mapped to named, interpretable metrics.
- Per-decision audit logs. Every interview produces a transcript, recording, and rubric-scored breakdown. Regulators ask for traceability; we already produce it because hiring teams need it.
- Plain-language candidate notice. Surfaced before the interview begins, in the candidate's language, with their right to human review explicitly stated.
- Human override built in. ARIA recommends, hiring teams decide. The decision step is the human's, every time.
- Documented compliance posture. Available to procurement and legal teams during evaluation — see our about page for the high-level summary.
For a side-by-side look at how this approach compares to a legacy enterprise vendor, see our HireVue alternative breakdown.
Frequently asked questions
Does the EU AI Act apply if my company isn't based in the EU?
Yes, if the system's outputs are used to make hiring decisions about individuals located in the EU, even if your company, your servers, and your vendor are all outside the EU. The Act's scope is extraterritorial in the same way GDPR's is.
What's the difference between a "provider" and a "deployer"?
The provider develops the AI system and puts it on the market (your vendor). The deployer uses the AI system under its own authority (you, the employer). Both have obligations; neither can hand them off entirely to the other.
Are simple resume keyword filters considered high-risk AI?
If the system is materially "intended to be used" to filter applications, Annex III applies. Pure deterministic keyword matching arguably falls outside the AI Act's "AI system" definition (Article 3), but the line is fuzzy and gets fuzzier as filters incorporate any ML or scoring. Treat anything with a learned model as in-scope and document why if you conclude otherwise.
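As a toy illustration of where that line sits, compare a fixed, human-authored rule with a learned scorer. Both functions are hypothetical and deliberately simplistic; they mark the distinction, not a legal test.

```python
# Toy contrast, not legal advice. A fixed, human-authored rule like this
# arguably falls outside the Act's "AI system" definition (Article 3):
def keyword_filter(cv_text: str, required: set[str]) -> bool:
    words = set(cv_text.lower().split())
    return required.issubset(words)

# The moment a learned model produces the score, treat the system as
# in-scope. The weights here stand in for anything fit to data:
def learned_score(features: list[float], weights: list[float]) -> float:
    return sum(f * w for f, w in zip(features, weights))

print(keyword_filter("Senior Python developer, Django, SQL", {"python", "sql"}))
print(learned_score([1.0, 0.3], [0.8, -0.2]))
```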
Do I need conformity assessment by a notified body?
For most hiring AI, the provider can use internal controls (Annex VI) rather than third-party notified-body assessment (Annex VII). The provider is responsible — you, as deployer, should ask which path they took.
Do bias-audit obligations under NYC LL144 satisfy the EU AI Act?
They overlap but don't fully satisfy each other. NYC LL144 is narrower (focused on selection-rate bias audits and candidate notice). EU AI Act high-risk obligations cover risk management, data governance, technical documentation, logging, transparency, human oversight, and robustness — a substantially broader set. A vendor who is LL144-audit-ready is partway there, not all the way. (We cover the practical overlap in our guide to building inclusive hiring processes.)
Need an EU AI Act-ready AI hiring platform?
ARIA was built against this regulatory bar from day one — no facial analysis, full per-decision audit trails, candidate-facing transparency, human override built in.
Start the 3-day free trial → or talk to our compliance team about your specific deployment context.