2023 - Volume 2 - Summer - Flipbook - Page 11
-Artificial Intelligence: Continued from page 10-
perceived “cultural fit” based on their performance on a game or traditional employment test.
Second, the EEOC guidance reiterates a key point
from its prior ADA guidance—that employers can be
liable for the use of AI tools even if they are designed
or administered by a third party (such as a software
vendor), even if the vendor represents that use of its
tool does not result in disparate impact.
Third, the guidance explains how employers can
assess their AI tools for disparate or adverse impact.
The EEOC recommends that employers determine
whether any AI-assisted selection procedure causes a
“selection rate” for members of a protected class that
is “substantially” lower than individuals of another
group. If this is the case, the tool would thereby violate
Title VII’s protections.
“Selection rate” refers to “the proportion of applicants or candidates who are hired, promoted, or otherwise selected,” and it is calculated by dividing the
number of persons hired, promoted, or otherwise selected from the group by the total number of candidates in that group. To determine if the selection rate
for a particular group is “substantially lower,” the EEOC recommends that employers apply the “four-fifths” rule: one selection rate is “substantially” different from another if the ratio between the two rates is less than four-fifths (80%).
The EEOC provides the following example:
[S]uppose that 80 White individuals and 40
Black individuals take a personality test that is
scored using an algorithm as part of a job application, and 48 of the White applicants and
12 of the Black applicants advance to the next
round of the selection process. Based on these
results, the selection rate for Whites is 48/80
(equivalent to 60%), and the selection rate for
Blacks is 12/40 (equivalent to 30%).
The ratio of the two rates is thus 30/60 (or 50%).
Because 30/60 (or 50%) is lower than 4/5 (or 80%),
the four-fifths rule would hold that the selection rate
for Black applicants is substantially different than the
selection rate for White applicants, which could be
evidence of disparate impact discrimination against
Black applicants and thus a violation of Title VII.
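The EEOC’s arithmetic above can be sketched in a few lines of code. This is a hypothetical illustration for working through the guidance’s example, not an official compliance tool; the function names are the author’s own:

```python
def selection_rate(selected, total):
    """Proportion of a group's candidates who were selected."""
    return selected / total

def four_fifths_check(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher, plus whether it
    falls below the EEOC's four-fifths (80%) threshold."""
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio < 0.8

# The EEOC's example: 48 of 80 White applicants and 12 of 40 Black
# applicants advance to the next round of the selection process.
white_rate = selection_rate(48, 80)   # 0.60
black_rate = selection_rate(12, 40)   # 0.30

ratio, flagged = four_fifths_check(white_rate, black_rate)
print(f"ratio = {ratio:.0%}, flagged = {flagged}")  # ratio = 50%, flagged = True
```

A result of `flagged = True` tracks the guidance’s conclusion that a 50% ratio, being below 80%, could be evidence of disparate impact warranting further review.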
Lastly, the EEOC notes that although this guidance
does not address other stages of the Title VII disparate
impact analysis, including “whether a tool is a valid
measure of job-related traits or characteristics,” employers should consider evaluating their use of AI tools
in this area as well. This could be the subject of additional guidance in the future.
Takeaway
Though the guidance is non-binding, it does indicate how the EEOC is thinking about Title VII enforcement going forward. With that in mind, there are a few
things employers should consider to reduce potential
liability.
Employers should take stock of the AI tools that
they use. The EEOC is taking an expansive approach to
enforcement in this area. Many employers may unwittingly rely on numerous tools that use AI technology.
Though the use of the technology is itself not an issue,
it can—as the guidance demonstrates—inadvertently
create risk.
Employers should understand that they will not be
able to shift liability for inadvertent violations of Title
VII to the vendor or purveyor of any AI tools that they
use. Even if a vendor states that their tools do not result
in disparate impact discrimination, employers can still
be subject to enforcement actions for any violation.
Employers must understand how the tools they rely on
work. Employers should ask the vendor what steps
have been taken to evaluate whether the use of the tool
causes a substantially lower selection rate for individuals of a protected class. Most of all, employers should
determine whether criteria used by the tool is “job related and consistent with business necessity” or whether alternatives with less possibility of disparate impact
exist, as these are the most reliable ways to reduce risk.
Time is of the essence. As increasing federal attention (and the growing patchwork of state laws not covered here) shows, regulators are moving almost as
fast as the technology in this space. Waiting for more
law or clearer instruction risks a lawsuit or an enforcement action. In this area a little bit of foresight can go a
long way.
DATA PRIVACY
For many businesses, before AI was the buzzword of the day, there was “data privacy.” Data privacy essentially means the bundle of rights, established by laws
-Continued on page 12-