As companies evaluate recruitment software, it is essential to understand a vendor’s approach to diversity data and support for DEI initiatives. This FAQ highlights Findem’s approach to diversity data and DEI analysis.
Our approach to DEI significantly enhances the methodologies used by sources like the US Census Bureau, and Findem does not label individual candidates with an ethnicity or gender identification. While the Census primarily relies on surname-based distributions (which are about 70% accurate for gender and 55% accurate for ethnicity and race), Findem augments this data by incorporating additional datasets to build a probabilistic distribution similar to Census data.
These datasets leverage information from a person’s experiences, professional associations, social media footprints, and other relevant sources to help Findem achieve 95% accuracy in gender identification and 85% accuracy in ethnic identification.
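As a rough illustration of this kind of augmentation (a minimal sketch under assumed data, not Findem's actual model, labels, or probabilities), a surname-based prior can be combined with additional signals in a naive-Bayes-style update:

```python
# Hypothetical sketch: augmenting a surname-based (Census-style) prior with
# additional signals. Every distribution below is invented for illustration;
# this is not Findem's actual model or data.

def normalize(dist):
    """Scale a dict of scores so its values sum to 1."""
    total = sum(dist.values())
    return {label: score / total for label, score in dist.items()}

def augment(prior, likelihoods):
    """Naive-Bayes-style update: multiply the prior by each signal's
    likelihood per label, then renormalize into a posterior."""
    posterior = dict(prior)
    for likelihood in likelihoods:
        for label in posterior:
            posterior[label] *= likelihood.get(label, 1.0)
    return normalize(posterior)

# Invented surname-based prior over two placeholder labels.
prior = {"A": 0.70, "B": 0.30}

# Invented likelihoods from two additional signals, e.g. professional
# associations and social footprint (again, purely illustrative).
signals = [{"A": 0.9, "B": 0.2}, {"A": 0.8, "B": 0.3}]

posterior = augment(prior, signals)
print(posterior)
```

Each additional, roughly independent signal sharpens the distribution, which is why combining sources can exceed the accuracy of a surname prior alone.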
Our goal is to enable organizations to build inclusive pipelines by providing aggregate data to the right stakeholders across the funnel. For example, we empower hiring managers and recruiters to build inclusive searches with market maps, and we give leaders rich analytics to ensure funnel conversions are inclusive and free of bias.
We are transparent about the predictive nature of these methodologies and acknowledge their limitations and potential for errors. With this knowledge, Findem strongly encourages organizations to use this data for aggregate analysis and market mapping to help build inclusive talent pipelines. By combining diverse data sources and advanced analytical techniques, Findem offers a more comprehensive and accurate picture of diversity to support organizations in their efforts to foster a more inclusive environment.
No, Findem is not a candidate evaluation tool that automatically advances or rejects applicants. Findem's Talent Data Cloud is a talent acquisition and recruitment platform that uses data enrichment and AI to assist talent organizations with searching and matching. Deploying our solution should not create bias or discrimination concerns related to AI-based searching or matching.
Findem aggregates publicly available people data, including US Census data, which is verified and triangulated across multiple sources, for the purpose of recording and learning attributes. An attribute is a distinguishing skill, experience, or characteristic that is an inherent part of who someone is. Attributes help diversify the talent pool, offering a more comprehensive and accurate picture of diversity and supporting organizations in their efforts to foster a more inclusive environment.
We have a team of dedicated data researchers who are constantly curating and improving the quality of data to increase our accuracy across the following categories:
Our models are trained on verified 3D data that is balanced for ethnicity and gender to minimize bias in the talent acquisition process. Diversity of a talent pool is determined through a probabilistic analysis, leveraging a variety of data sources, starting with Census data. We use Census data as a baseline, augmenting and classifying with location, name, resume, education, and work history data.
Public image classification is used to help estimate aggregate market diversity, but it is never used to label an individual candidate.
Findem only categorizes pre-applicant ethnicity when our models report 85% or higher certainty; below that threshold, we do not classify ethnicity. Compare this to Census data, which is 70% accurate for gender and 55% accurate for ethnicity and race. In general, we observe 95% accuracy when we are able to classify ethnicity.
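The 85% cutoff described above can be sketched as a simple threshold rule. The 0.85 value comes from the text; the function name and the label probabilities are hypothetical:

```python
# Illustrative threshold rule: classify only at >= 85% certainty (per the
# text). The function and example distributions are hypothetical.
CONFIDENCE_THRESHOLD = 0.85

def classify_ethnicity(posterior):
    """Return the most likely label only when the model's certainty meets
    the threshold; otherwise leave the record unclassified.
    `posterior` maps labels to probabilities that sum to 1."""
    label, confidence = max(posterior.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    return None  # below threshold: do not classify

print(classify_ethnicity({"X": 0.91, "Y": 0.09}))  # high certainty: classified
print(classify_ethnicity({"X": 0.60, "Y": 0.40}))  # below threshold: skipped
```

Leaving low-confidence records unclassified trades coverage for precision, which is consistent with the observed accuracy applying only to the records that do get classified.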
Post-applicant data is based on candidate-declared data, which is factual.
Findem’s approach is to enable talent teams to be data-informed and to eliminate bias. The Findem platform provides aggregate analysis and market mapping in real time for every recruiting project. Organizations are encouraged to use these insights during the intake process to set expectations and hold productive conversations with hiring managers.
To ensure fair representation, we take the following steps with regard to diversity data:
AI technologies wield significant power and can amplify biases or perpetuate injustices if not carefully designed and implemented. Ensuring clean, accurate data is key, which is why Findem uses a holistic, probabilistic analysis rather than identifying diversity attributes on an individual basis.
Our AI development team considers a range of perspectives and experiences throughout the design process. Robust testing and validation protocols detect and mitigate bias in AI systems before deployment, and ongoing monitoring and accountability mechanisms help address biases that emerge post-implementation. Deploying our solution should not create bias or discrimination concerns related to AI-based searching or matching.
Findem is committed to the support of diversity, equity, inclusion, and belonging in our own workforce, and we support DEIB efforts for our customers. This is a snapshot of how we treat diversity data with care and consideration. Please contact us for more details.