Sapia.ai, the global leader in ethical AI for assessment, today released its independent Impact of Inequality Report, conducted by BLDS, LLC, a nationally recognized statistical and business consulting firm.
BLDS professionals are often tasked with analyzing the impact of planned management actions on the workforce in order to minimize liability risk.
Referring to the Sapia.ai chat interview scoring tool, BLDS concluded that, “using the standardized mean difference (SMD) and adverse impact ratio (AIR), there is no evidence of practically significant differential impact” for any gender or race/ethnicity group evaluated in the United States or Canada.
Sapia’s unique innovation is a text-based chat interview with personalized feedback for each candidate. Candidates respond in their own time, removing the pressure associated with a traditional interview. Clients using the Sapia assessment have consistently seen a near-total elimination of bias and faster progress toward diversity goals.
The audit process
BLDS employed a range of tools to assess differential impact, relying on two common metrics to determine whether there is evidence of practically significant differential impact: the standardized mean difference, or SMD, and the adverse impact ratio, or AIR.
“Using the SMD, BLDS performed a total of 23 protected group tests on the North American models. Across these tests, BLDS found no evidence of practically significant differential effects for any protected group assessed in the United States or Canada.
“With the AIR, BLDS performed a total of 49 protected group tests on the North American models. First, the use case where applicants advanced based on a ‘Yes’ recommendation was tested with the AIR. Second, the use case where applicants advanced based on a ‘Yes’ or ‘Maybe’ recommendation was tested with the AIR. None of these tests indicated practically significant differential effects on any protected group in the United States or Canada.”
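For readers unfamiliar with the two metrics, the sketch below shows how an SMD and an AIR are commonly computed. It is an illustrative outline only: the function names, example data, and the thresholds noted in the comments are assumptions made for exposition, not BLDS’s actual procedure or cutoffs.

```python
import numpy as np

def standardized_mean_difference(scores_a, scores_b):
    """Cohen's-d-style SMD between two groups' assessment scores."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    # Pooled standard deviation, weighted by each group's degrees of freedom
    pooled_var = (
        (len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)
    ) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of selection rates: protected group (a) over reference group (b)."""
    return (selected_a / total_a) / (selected_b / total_b)

# Hypothetical score samples and 'Yes'-recommendation counts
smd = standardized_mean_difference([72, 68, 75, 70], [71, 69, 74, 73])
air = adverse_impact_ratio(selected_a=80, total_a=100, selected_b=90, total_b=100)
print(f"SMD = {smd:.2f}, AIR = {air:.2f}")

# A common rule of thumb (the EEOC "four-fifths" rule) treats AIR >= 0.80 as
# showing no practically significant adverse impact; small-effect conventions
# for the SMD (e.g., |d| < 0.2) are often applied analogously.
```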
Barb Hyman, Founder and CEO of Sapia.ai, says the audit results show Sapia.ai can make hiring equality a reality.
“This audit provides independent validation that our intelligent chat is fair to all groups,” Hyman said.
“We’ve always believed that transparency is the key to trusting AI, and that’s why we’ve published our research in respected peer-reviewed journals.
“We also published the FAIR™ Framework (short for Fair AI for Recruitment) and were the first in the market to publicly introduce our own system for monitoring and mitigating bias in AI.
“That’s also why we don’t use AI to analyze video, audio, resume data, or data from the web, and why we use explainable rule-based models, rather than classic machine learning models, to evaluate candidates.”