
NYC Local Law 144 Explained: A Practical Guide to AEDT Bias Audits

By ARIA Team · May 9, 2026 · 13 min read

Why this matters now

NYC Local Law 144 (the "Automated Employment Decision Tools" law) became enforceable on July 5, 2023 after a bumpy two-year rulemaking process. It was the first major US bias-audit-and-disclosure law specifically targeting AI hiring tools, and it set the template that other US jurisdictions and the EU AI Act have since echoed.

For HR teams hiring in NYC, three things are now true:

  • If you use an AEDT to screen candidates, you owe an annual bias audit — and the audit summary has to be public on your website before you use the tool.
  • You owe candidates a 10-business-day advance notice before any AEDT runs against them.
  • Each candidate not notified and each day of unaudited use is a separate violation. The fines are individually small but stack quickly.

This guide explains what counts as an AEDT, how the bias audit actually works (with a worked numerical example), what the candidate notice has to say, and how to evaluate vendors against the law. It is not legal advice — work with counsel before relying on any specific reading — but it is a practical map of how LL144 lands in 2026.

The three pillars at a glance

| Pillar | What you must do | Cadence |
| --- | --- | --- |
| Bias audit | Have an independent auditor compute selection rate and impact ratio across sex and race/ethnicity categories | Annual |
| Public disclosure | Post the audit summary and the audit date on your website | Before any use, kept current |
| Candidate notice | Tell candidates the AEDT will be used, what it assesses, and what data it collects | At least 10 business days in advance |

LL144 is fundamentally a transparency law, not a substance law. It does not tell you what bias level is acceptable — it makes you compute the number, publish it, and tell candidates. The market handles the rest.

What's an "AEDT"? The coverage question

The hardest part of LL144 in practice is figuring out whether your tool is in scope at all. The DCWP (NYC Department of Consumer and Worker Protection) final rules define an Automated Employment Decision Tool as:

"Any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions."

Two phrases do most of the work:

"Simplified output." A numeric score, a pass/fail classification, a ranked recommendation. Long-form free-text feedback isn't "simplified output."

"Substantially assist or replace." Per DCWP's interpretive rules, this means one of:

  1. The output is the only factor relied on (no other criteria considered).
  2. The output is used as one of a set of criteria and is weighted more than any other criterion.
  3. The output is used to override or veto conclusions reached by other inputs.

That definition is narrower than most people initially assume. A tool that produces a score one of five recruiters glances at, weighted equally with their own judgment, arguably falls outside scope. A tool whose score is the gate to advancing past the screen almost certainly falls inside scope.

Quick coverage triage:

| Tool | Likely covered? |
| --- | --- |
| ML-driven resume ranker that auto-rejects below a score threshold | Yes |
| Voice interview scoring that's the sole gate to round 2 | Yes |
| Skills assessment AI whose score outweighs all other criteria | Yes |
| ATS keyword filter with no learned model | Probably no (no ML/stats/analytics) |
| AI scoring used as one signal among many human-weighted factors | Depends: get legal advice on the weighting test |

When in doubt, audit. The cost of an unnecessary audit is far less than the cost of being on the wrong side of a DCWP investigation.

Pillar 1: the bias audit (with a worked example)

The audit must be performed by an independent auditor — defined as a person or organization with no commercial relationship to the AEDT vendor and who did not develop, use, or distribute the tool. Vendor-conducted audits do not count.

The audit computes two numbers per demographic group:

Selection rate = (number selected from group) ÷ (total in group)

Impact ratio = (selection rate of the group) ÷ (selection rate of the most-selected group)

It does this for sex categories, race/ethnicity categories (using the EEO-1 categories), and intersectional combinations (sex × race/ethnicity).

Worked example: an engineering screen

A company runs an AEDT that scores software engineering candidates and auto-advances anyone above a threshold. Over the audit period:

| Group | Applied | Advanced | Selection rate |
| --- | --- | --- | --- |
| Men | 600 | 200 | 33.3% |
| Women | 400 | 80 | 20.0% |

The most-selected group is men, at 33.3%. Impact ratio for women:

Impact ratio (women) = 20.0% / 33.3% = 0.60

LL144 does not say a 0.60 impact ratio is illegal. It says the company must publish that 0.60 number on its website before using the tool again.

The reference threshold most auditors and regulators use is the EEOC's "four-fifths rule" — an impact ratio below 0.80 is generally treated as evidence of disparate impact. A 0.60 is well below that line. The number is now public, candidates can see it, plaintiffs' lawyers can see it, and the company has to decide what to do about it.
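The arithmetic in the worked example above can be sketched in a few lines. The group names and counts come straight from the table; the function itself is a generic illustration of the LL144 formulas, not code published by DCWP or any auditor.

```python
# Selection rate = advanced / applied, per group.
# Impact ratio = group's selection rate / rate of the most-selected group.

def impact_ratios(applied, advanced):
    """Return {group: (selection_rate, impact_ratio)} per the LL144 formulas."""
    rates = {g: advanced[g] / applied[g] for g in applied}
    top = max(rates.values())  # selection rate of the most-selected group
    return {g: (rates[g], rates[g] / top) for g in rates}

results = impact_ratios(
    applied={"Men": 600, "Women": 400},
    advanced={"Men": 200, "Women": 80},
)
# Women: selection rate 0.20, impact ratio 0.20 / 0.333... = 0.60
```

The most-selected group always lands at an impact ratio of 1.0 by construction, which is why the 0.60 for women is the number the audit summary has to surface.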

What the audit covers

  • All demographic categories with sufficient sample sizes (DCWP rules allow exclusion when categories are too small to be statistically meaningful — but the exclusion must be documented).
  • Both historical data (preferred, when the AEDT has been in use) and test data (when there's not enough historical data, e.g. for a brand-new deployment).
  • Intersectional categories (e.g. Black women, Asian men) — not just single-axis sex or race/ethnicity.
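Two of the bullets above lend themselves to a short sketch: the small-sample exclusion step and the intersectional cross product. The `MIN_SAMPLE` threshold of 20 is an illustrative assumption, not a number from the DCWP rules (which leave the statistical-meaningfulness judgment to the auditor), and the race/ethnicity list is abbreviated, not the full EEO-1 set.

```python
from itertools import product

MIN_SAMPLE = 20  # illustrative threshold, not a DCWP-specified number

def partition_by_sample_size(applied, min_n=MIN_SAMPLE):
    """Split categories into (analyzed, excluded) by applicant count.

    Excluded categories must still be documented in the audit summary.
    """
    analyzed = {g: n for g, n in applied.items() if n >= min_n}
    excluded = {g: n for g, n in applied.items() if n < min_n}
    return analyzed, excluded

# Intersectional categories are the cross product of sex and race/ethnicity.
sexes = ["Male", "Female"]
races = ["Black", "White", "Asian", "Hispanic"]  # abbreviated EEO-1 list
intersectional = [f"{s} x {r}" for s, r in product(sexes, races)]
```

The point of the partition is that exclusion is explicit and auditable, never silent: the excluded dict feeds the "number of categories excluded" line in the public summary.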

Cadence

Annual. The audit must be no more than one year old at any time the AEDT is in use. Lapse the date and you're in violation even if you previously had a valid audit.
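A compliance team can enforce this cadence mechanically. The sketch below approximates "one year" as 365 days, which is a simplifying assumption; confirm the exact counting convention with counsel before wiring it into anything.

```python
from datetime import date, timedelta

def audit_is_current(audit_date: date, use_date: date) -> bool:
    """True if the audit is at most one year (approx. 365 days) old
    on the day the AEDT is used. A lapsed date means every further
    day of use is a fresh violation."""
    return audit_date <= use_date <= audit_date + timedelta(days=365)
```

Running this check as a deployment gate (block the AEDT when it returns `False`) turns the cadence rule from a calendar reminder into a hard stop.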

Pillar 2: public disclosure

The audit summary and the audit's date must be posted on the employer's website, in a publicly accessible location, before the AEDT is used.

Required elements of the summary:

  • The selection rates and impact ratios computed in the audit.
  • The number of individuals the AEDT assessed (with appropriate breakdowns).
  • The number of categories (sex, race/ethnicity, intersectional) that were excluded for sample-size reasons.
  • The date of the most recent bias audit.
  • The date the employer first used the AEDT or, where applicable, the AEDT's distribution date.

The disclosure does not have to be on the homepage — but it does have to be findable by a candidate without unreasonable effort. A buried PDF behind three menus is a regulatory risk.

Pillar 3: candidate notice

The notice has to be delivered at least 10 business days before the AEDT is used to assess the candidate. It can be delivered via:

  • Notice posted in the job listing
  • Direct mail
  • Email
  • Notice posted on the employer's website (with the candidate told where to find it)

The notice must tell candidates:

  1. That an AEDT will be used to assess their application or candidacy.
  2. The job qualifications and characteristics the AEDT will assess.
  3. (On request, within 30 days) the type of data collected, the source of the data, and the employer's data retention policy.

Crucially: candidates can request an alternative selection process or accommodation. The law does not require employers to grant the alternative — but the request right exists, and ignoring requests is its own enforcement risk.
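The 10-business-day clock is easy to get wrong when counting by hand. The sketch below approximates "business day" as Monday through Friday, ignoring holidays, which is a simplifying assumption; the DCWP rules and your counsel govern the exact counting.

```python
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count weekdays strictly after `start`, up to and including `end`.
    Holidays are ignored in this simplified sketch."""
    days = 0
    d = start + timedelta(days=1)
    while d <= end:
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
        d += timedelta(days=1)
    return days

def notice_is_timely(notice_date: date, assessment_date: date) -> bool:
    """True if at least 10 business days elapse before the AEDT runs."""
    return business_days_between(notice_date, assessment_date) >= 10
```

Note that 10 business days is two full calendar weeks when there are no holidays: notice on a Monday means the earliest compliant assessment is the Monday two weeks later.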

Penalties and enforcement

Per the statute (NYC Administrative Code § 20-873):

  • First violation: $500
  • Subsequent violations: $500 to $1,500 each

The accumulation matters more than the per-violation number:

  • Each day the AEDT is used without a valid bias audit = a separate violation.
  • Each candidate not given the required notice = a separate violation.

A company using an AEDT for 30 days without a current audit, while screening 500 candidates without notice, is theoretically looking at 530 stacked violations — and hundreds of thousands of dollars in potential fines.
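The back-of-envelope arithmetic for that scenario is simple. The first-violation/$500 and subsequent $500–$1,500 figures are taken from the statute cited above; this is illustrative arithmetic, not a penalty calculator, and actual assessments are at DCWP's discretion.

```python
def exposure_range(violations: int, first=500, low=500, high=1500):
    """Return (min, max) theoretical fines for stacked LL144 violations:
    $500 for the first, then $500-$1,500 for each subsequent one."""
    if violations == 0:
        return (0, 0)
    return (first + (violations - 1) * low, first + (violations - 1) * high)

lo, hi = exposure_range(30 + 500)  # 30 unaudited days + 500 un-notified candidates
# lo = 265_000, hi = 794_000
```

Even at the statutory minimum, 530 violations clears a quarter-million dollars, which is why the accumulation mechanics matter more than the headline per-violation number.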

Enforcement so far has been more measured than the theoretical ceiling suggests. As of early 2026, DCWP has focused on technical compliance investigations rather than blockbuster fines — but the law has been enforceable for less than three years and the enforcement program is still maturing. The federal iTutorGroup EEOC settlement ($365K, 2023) was over alleged AI-driven age discrimination and is the closest "AI hiring tool faces enforcement" precedent — though it was federal age-discrimination law, not LL144 specifically.

How LL144 stacks against EU AI Act, GDPR, and federal law

LL144 sits in a broader regulatory mosaic. Quick reading:

  • vs EU AI Act: LL144 is narrower (bias audits + notice + disclosure), the EU AI Act is broader (risk management, technical documentation, logging, transparency, human oversight, robustness, plus Article 5 prohibitions). A vendor that's LL144-audit-ready is partway to EU AI Act compliance, not all the way.
  • vs GDPR Article 22: Article 22 gives EU candidates the right not to be subject to a solely-automated decision and the right to human review. LL144 doesn't grant equivalent individual rights — it focuses on aggregate disclosure and notice.
  • vs federal EEOC / Title VII: Federal anti-discrimination law applies to AI hiring tools regardless of location. The EEOC has issued technical assistance on AI/algorithmic decision-making. LL144 is largely consistent with the EEOC's four-fifths framework but adds the disclosure and notice obligations.

For a company hiring across multiple jurisdictions, the right mental model is: build to the strictest standard once, then map the artifacts (audits, notices, documentation) into each regulator's required format.

A vendor evaluation framework specific to LL144

When evaluating an AEDT vendor for NYC deployment, here are the questions you should be able to get clean answers to. (For the broader bias-and-quality checklist, see our audit checklist for AI hiring vendors.)

| # | Question | What a defensible answer looks like |
| --- | --- | --- |
| 1 | Does your tool meet DCWP's definition of an AEDT? | Either yes (with the rationale) or no (with the rationale tied to the DCWP "substantially assist or replace" test). |
| 2 | Have you facilitated a bias audit by an independent auditor? | Yes, with a sample audit summary you can review during evaluation. |
| 3 | Can your platform produce the data exports my independent auditor will need? | Per-decision data with demographic fields, anonymized as appropriate, in a format an auditor can analyze. |
| 4 | Do you provide template language for the candidate notice? | Yes, including the data-on-request element (30-day response). |
| 5 | What demographic data do you collect or infer, and how is it stored? | Explicit list. Demographic data should be self-reported by candidates, not inferred from name/voice/face. |
| 6 | If our audit reveals a low impact ratio, what's the path to remediation? | A defined remediation playbook (model retraining, threshold adjustment, criteria review), not "that's outside our scope." |
| 7 | How do you handle small-sample exclusions in audit reports? | Documented threshold consistent with DCWP rules (typically minimum sample sizes per category). |
| 8 | Can a candidate request an alternative selection process, and how does your platform support that? | Documented alternative-process workflow the deployer can route requests through. |

A vendor that hasn't thought about LL144 specifically — or who can't produce the data your auditor needs — will become your problem, not theirs, the moment a candidate asks where the audit summary is.

How ARIA approaches LL144

ARIA was designed for audit-readiness from the start, not retrofitted. Concretely:

  • Per-decision audit logs capture every interview, transcript, and rubric-scored breakdown — exactly the dataset an independent auditor needs to compute selection rates and impact ratios.
  • Self-reported demographics, not inferred. ARIA does not infer race, gender, or age from voice. Demographic data used in bias audits comes from candidate-disclosed fields, never from algorithmic guesses.
  • Candidate notice templates with the required disclosures, the alternative-process request path, and the 30-day data-on-request workflow.
  • Annual third-party bias audit support — we facilitate the audit, but the auditor is genuinely independent (no commercial relationship with us).
  • Voice-only scoring with no facial analysis — the same architectural choice that satisfies EU AI Act Article 5 also makes our LL144 audit posture cleaner. See our voice interview platform page for the technical detail, or our HireVue alternative breakdown for a side-by-side comparison.

For HR teams building inclusive hiring practices around an audited AI layer, our seven-step framework covers the human side of the workflow.

Frequently asked questions

Does LL144 apply if my company isn't based in NYC?

It applies if the candidate or employee is in NYC, regardless of where the employer is. A San Francisco company hiring a remote worker located in Brooklyn is in scope.

Does LL144 apply if I'm only hiring for one or two roles in NYC?

Yes. There is no minimum-headcount carve-out. A single AEDT-driven decision against a NYC candidate triggers the obligations.

Who counts as an "independent auditor"?

Per DCWP rules, anyone who has no employment, contractual, or financial relationship with the AEDT vendor and who didn't develop, use, or distribute the tool. In practice, most LL144 audits are run by specialty firms (some bias-audit boutiques exist now specifically because of LL144) or by audit/consulting firms with a labor-economics practice.

What if my AEDT vendor's bias audit covers their tool — do I still need my own?

The vendor's audit can sometimes serve as the audit of record if it covers the actual data and use case for your deployment. Often it doesn't — vendor audits use aggregate cross-customer data that may not reflect your specific candidate pool. Get specific guidance on whether the vendor's audit satisfies your obligation; default to commissioning your own when in doubt.

Does a low impact ratio in my audit mean I have to stop using the tool?

No. LL144 doesn't prohibit any specific impact ratio. It requires you to publish the number. Whether continued use is legally defensible under federal Title VII or state anti-discrimination law is a separate analysis — the EEOC's four-fifths rule (impact ratio < 0.80) is a starting point, not a definitive cutoff.


Need an AEDT that's audit-ready out of the box?

ARIA was built against this regulatory bar from day one — per-decision audit logs, self-reported demographics, candidate-notice templates, and annual third-party bias audit support.

Start the 3-day free trial → or talk to our compliance team about your specific NYC deployment.

Ready to Transform Your Hiring Process?

Start automating your interviews with ARIA's AI-powered platform. Get started with our free pilot program today.

Start Free Demo
#nyc-ll144 #compliance #regulation #bias-audit #aedt #ai-hiring-law