AI hiring disclaimer from ADP, Part 1/2
This is the first time I have seen a company disclose this information. This is great.
The file is here:
A quick summary (using ChatGPT):
1. Artificial Intelligence Transparency Notice
Summary:
ADP uses AI systems—including generative and traditional machine learning—to provide insights, generate responses, draft job descriptions, and make predictions based on both company-specific and general knowledge. These AI systems are constrained to defined use cases, operate in a non-public environment, and are subject to human oversight to ensure privacy, security, bias mitigation, and result accuracy. AI is not universally deployed to all customers. Employers must validate AI-generated content for accuracy and completeness before use.
Pros:
- Clear disclosure of AI involvement and its scope.
- Emphasis on privacy, security, and bias safeguards.
- Requires human validation before application.
Cons:
- No technical detail on bias mitigation methods.
- Lack of transparency about AI model architecture or data sources.
Flaws/Challenges:
- Does not address potential AI hallucinations or outdated data risks.
- "Rigorous methods" for safeguarding privacy are vaguely described.
- No quantitative bias audit details here (only in later sections).
2. Candidate Relevancy Overview & Scoring Method
Summary:
Candidate Relevancy and Profile Relevance tools use AI/ML to match candidate resumes with job descriptions based on education, skills, and experience. They produce three weighted sub-scores aggregated into a final score (1–100) or category (High/Medium/Low). Weights vary by job sector and are empirically derived. The system is meant to be one of many hiring tools, without cut-off scores, and does not use demographic or protected information. Employers see all applications regardless of score.
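The notice does not publish the actual model, but the weighted aggregation it describes is easy to picture. Below is a minimal sketch; the sub-score names, example weights, and category thresholds are assumptions for illustration, not ADP's implementation.

```python
# Minimal sketch of the weighted aggregation described above.
# The sub-score names, example weights, and thresholds are
# illustrative assumptions, not ADP's actual model.

def relevancy_score(education: float, skills: float, experience: float,
                    weights=(0.3, 0.4, 0.3)) -> int:
    """Combine three sub-scores (each 0.0-1.0) into a 1-100 score."""
    w_edu, w_skills, w_exp = weights
    combined = w_edu * education + w_skills * skills + w_exp * experience
    return max(1, round(combined * 100))

def to_category(score: int) -> str:
    """Map a numeric score to a High/Medium/Low bucket (thresholds assumed)."""
    if score >= 70:
        return "High"
    if score >= 40:
        return "Medium"
    return "Low"

# Example: a candidate with a strong skills match for a sector
# whose (assumed) weights favor skills.
score = relevancy_score(education=0.6, skills=0.9, experience=0.7)
print(score, to_category(score))  # 75 High
```

The sector-specific weighting the FAQ mentions would amount to swapping in a different `weights` tuple per job family; since those weights are proprietary, the values above are placeholders.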
Pros:
- Transparent on matching methodology (three-component model).
- Excludes demographic/protected data from scoring.
- Allows all applicants visibility to employers.
Cons:
- Weights are proprietary, with no clear rationale per job beyond “empirical” determination.
- Limited explanation of sector-specific variation.
Flaws/Challenges:
- Potential misalignment if job descriptions are poorly written or incomplete.
- Reliance on historical data could encode past biases indirectly.
- No disclosure on algorithm retraining frequency.
3. Compliance with NYC Local Law 144
Summary:
The FAQ states ADP does not believe Candidate Relevancy qualifies as an “automated employment decision tool” under NYC’s Local Law 144, as it is not intended to substantially assist or replace human decision-making, is not weighted more than other factors, and does not overrule human conclusions. Employers are instructed to use it only as one source of information, not as the sole hiring criterion.
Pros:
- Clear legal positioning to avoid regulatory classification.
- Explicitly prevents over-reliance on the tool.
Cons:
- “Intended use” may differ from real-world employer practice.
- Relies on employer compliance with intended usage.
Flaws/Challenges:
- No enforcement mechanism to ensure employers don’t misuse scores.
- Risk that some users may unintentionally give disproportionate weight to scores.
- Lacks clarity on handling regulatory changes or broader legal definitions.
4. Bias Audit Results – Candidate Relevancy
Summary:
In April 2024, independent auditors (BLDS, LLC) found no statistically valid evidence of bias by sex, race/ethnicity, or intersectional categories. Data tables show scoring rates and impact ratios, with small-population categories excluded per the NYC ordinance. Adjustments were made for Simpson's Paradox.
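To make the audit tables easier to read, here is a rough sketch of how an impact ratio is typically computed for a Local Law 144 audit: each group's scoring rate divided by the highest group's scoring rate, with very small groups excluded. The group names and rates are invented, and the 1% exclusion threshold follows this summary rather than the audit's exact procedure.

```python
# Hedged sketch of an impact-ratio calculation: each group's scoring
# rate divided by the highest group's rate. Group names, rates, and
# the 1% exclusion threshold are illustrative assumptions.

def impact_ratios(scoring_rates: dict[str, float],
                  group_shares: dict[str, float],
                  min_share: float = 0.01) -> dict[str, float]:
    """Return impact ratios, excluding groups below min_share of applicants."""
    included = {g: r for g, r in scoring_rates.items()
                if group_shares.get(g, 0.0) >= min_share}
    top_rate = max(included.values())
    return {g: round(r / top_rate, 3) for g, r in included.items()}

rates = {"Group A": 0.52, "Group B": 0.48, "Group C": 0.50}
shares = {"Group A": 0.40, "Group B": 0.35, "Group C": 0.25}
print(impact_ratios(rates, shares))
# {'Group A': 1.0, 'Group B': 0.923, 'Group C': 0.962}
```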
Pros:
- Independent third-party audit increases credibility.
- Transparent publication of demographic scoring data.
Cons:
- Limited explanation of audit methodology and statistical thresholds.
- Exclusion of <1% groups could hide biases affecting small communities.
Flaws/Challenges:
- “No valid statistical evidence” does not mean no bias exists—small effect sizes may be present.
- Potential year-to-year variation not addressed.
- Does not test bias in real-world hiring outcomes, only in scoring outputs.
5. Bias Audit Results – Profile Relevance
Summary:
Profile Relevance is similar to Candidate Relevancy but displays categorical ratings instead of numeric scores. The BLDS audit also found no statistical evidence of bias. Selection rates and impact ratios are provided for "High" and "High or Medium" classifications by demographic group. Small groups (<1% of applicants) are excluded from impact ratio calculations.
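As a companion to the numeric example above, here is a hedged sketch of tallying the categorical selection rates the audit reports for "High" and "High or Medium"; the ratings and group labels are invented for illustration.

```python
# Illustrative-only tally of categorical selection rates for the
# "High" and "High or Medium" classifications. Sample data is made up.

from collections import Counter

def selection_rates(ratings_by_group: dict[str, list[str]]) -> dict[str, dict[str, float]]:
    """For each group, compute the share rated High and High-or-Medium."""
    out = {}
    for group, ratings in ratings_by_group.items():
        counts = Counter(ratings)
        total = len(ratings)
        out[group] = {
            "high": counts["High"] / total,
            "high_or_medium": (counts["High"] + counts["Medium"]) / total,
        }
    return out

sample = {
    "Group A": ["High", "Medium", "Low", "High"],
    "Group B": ["Medium", "Medium", "High", "Low"],
}
print(selection_rates(sample))
# {'Group A': {'high': 0.5, 'high_or_medium': 0.75},
#  'Group B': {'high': 0.25, 'high_or_medium': 0.75}}
```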
Pros:
- Consistent independent review approach.
- Publishes detailed demographic breakdowns.
Cons:
- Same methodological gaps as Candidate Relevancy audit.
- Category thresholds (“High/Medium”) are not transparently defined.
Flaws/Challenges:
- Translation of numeric to categorical scores could introduce hidden bias.
- Risk that “Medium” candidates may be deprioritized despite lack of bias evidence in “High” group.
6. Opt-Out Policy
Summary:
Applicants may opt out of AI scoring for a specific job, in which case their score is listed as “Not Available.” This also occurs if technical issues prevent scoring. All applicants remain visible to recruiters.
Pros:
- Preserves applicant choice.
- Ensures no one is excluded from visibility due to opting out.
Cons:
- Opt-out is job-specific, so applicants who apply to multiple jobs must opt out for each one.
- No transparency on whether opting out impacts recruiter behavior.
Flaws/Challenges:
- Potential recruiter bias against “Not Available” scores.
- Ambiguity on how technical issues are communicated to applicants.
In Part 2, I will dive into those numbers and share some analysis.