RESPONSIBLE AI

AI and Employment Fairness Policy

Findem is committed to ensuring that AI enhances fairness, transparency, and inclusivity in employment decisions. Our platform is designed to assist — not replace — human judgment, supporting equitable hiring through bias audits, objective evaluation criteria, and continuous oversight to meet evolving global standards.

How does Findem ensure ethical and responsible use of AI in hiring, including support for diverse candidate slates?

Findem ensures ethical and responsible use of AI in a number of ways. 

First, Findem and its products and services, including its AI tools, do not impermissibly discriminate based on any characteristic protected by law, such as race, color, religion, sex, or national origin. Findem engages an independent, qualified third-party auditor to conduct bias audits on Findem AI tools to ensure such tools do not, expressly or through proxy, impermissibly provide a preference to any candidate based on any characteristic protected by law. 
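To make concrete what such an audit typically measures: bias audits under frameworks like NYC Local Law 144 report impact ratios, each group's selection rate divided by the highest group's rate, with ratios below 0.8 (the "four-fifths rule") commonly flagged for further review. The sketch below is a minimal, hypothetical illustration of that metric; all counts are invented, and it does not represent Findem's or its auditor's actual methodology.

    # Minimal sketch of an impact-ratio calculation of the kind a bias
    # audit might report. All counts are hypothetical; this is not
    # Findem's or its auditor's actual methodology.

    def selection_rate(selected: int, total: int) -> float:
        """Fraction of a group's candidates who were surfaced/selected."""
        return selected / total

    def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Each group's selection rate divided by the highest group's rate.

        A ratio below 0.8 (the "four-fifths rule") is a common flag for
        potential disparate impact that warrants further review.
        """
        rates = {g: selection_rate(sel, tot) for g, (sel, tot) in groups.items()}
        top = max(rates.values())
        return {g: r / top for g, r in rates.items()}

    # Hypothetical counts: (candidates surfaced, candidates considered)
    demo = {"group_a": (48, 120), "group_b": (44, 110), "group_c": (24, 90)}
    for group, ratio in impact_ratios(demo).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")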

Second, Findem focuses on helping employers evaluate candidates based on objective job qualifications so that candidates of all backgrounds have a fair shot at getting the job. This focus on diversity through objective, job-related standards drives our business. By surfacing candidates based on objective criteria rather than requiring them to self-select into individual roles, the tool widens the aperture and discovers new talent. For additional insights on this topic, please see Slow Thinking Fast: How AI Trumped Human Bias.

Third, Findem continuously evaluates its platform to support fairness and inclusivity in hiring. These evaluations show that the platform helps teams surface diverse candidate slates and find the best talent by supporting evidence-based selection decisions made by humans. Additional research has shown that reducing unconscious bias in early screening, for example through skills-based evaluation, can significantly improve representation. The platform is designed with similar goals in mind: it uses AI to focus on job-relevant attributes, helping teams engage a broader, more qualified candidate pool while mitigating the potential for bias triggered by demographic signals.

Fourth, the Findem platform enables customers to track pipeline diversity and measure how their hiring efforts align with inclusion goals, consistent with applicable law, including employer reporting obligations such as those under Equal Employment Opportunity Commission (EEOC) rules. All customers are provided aggregate-level diversity data based on a probabilistic approach that draws on and enhances methodologies similar to those used by the U.S. Census Bureau, providing insights into the overall diversity of candidate pipelines. These insights help recruiters and hiring teams build more inclusive searches and track diversity across the funnel. When EEOC data is provided via applicant tracking system (ATS) integration, that information is used in place of the probabilistic data and displayed within the product for that customer only.
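As a rough illustration of how an aggregate-only, probabilistic approach can work: methods built on U.S. Census Bureau surname data, such as Bayesian Improved Surname Geocoding (BISG), estimate a probability distribution over demographic groups for each candidate and then sum those probabilities into pipeline-level expected counts, so no individual is ever assigned a label. The sketch below uses invented probability tables and is not Findem's actual data or methodology.

    # Hypothetical sketch of aggregate-only probabilistic diversity
    # estimation, in the spirit of methods built on U.S. Census Bureau
    # surname data (e.g. BISG). The probability tables are invented and
    # do not reflect Findem's data or methodology.

    from collections import defaultdict

    # P(group | surname): toy stand-in for public Census surname data.
    SURNAME_PRIORS = {
        "garcia": {"hispanic": 0.90, "white": 0.05, "other": 0.05},
        "smith":  {"white": 0.70, "black": 0.23, "other": 0.07},
        "nguyen": {"asian": 0.96, "other": 0.04},
    }
    UNKNOWN = {"unknown": 1.0}

    def pipeline_estimate(surnames: list[str]) -> dict[str, float]:
        """Sum per-candidate probabilities into expected group counts.

        No individual is assigned a group; only aggregates are reported.
        """
        totals: dict[str, float] = defaultdict(float)
        for name in surnames:
            for group, p in SURNAME_PRIORS.get(name.lower(), UNKNOWN).items():
                totals[group] += p
        return dict(totals)

    print(pipeline_estimate(["Garcia", "Smith", "Nguyen", "Smith"]))
    # Expected counts, approximately:
    # hispanic 0.90, white 1.45, other 0.23, black 0.46, asian 0.96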

Can you confirm that your use of AI is not prohibited under the EU AI Act? Is Findem AI considered “high-risk” under the EU AI Act or US laws, such as the Colorado AI Act?

Findem services do not include any of the prohibited AI practices laid down in the EU AI Act. In particular, Findem does not conduct emotion recognition in the workplace, does not engage in social scoring, does not rely on deceptive or manipulative techniques to distort the behavior of individuals, and does not assess the risk of individuals committing criminal offenses. Findem continues to evaluate how evolving AI regulations may apply to its products, in particular rules on high-risk AI systems. Relevant to those evaluations: Findem does not process any biometric information; AI is not used to directly match or sort job candidates; Findem shows all applications, and AI is not used to rank candidates based on their enriched profiles or to exclude any candidates from the selection process; and Findem does not place targeted job ads. The company supports transparency, customer control, and responsible AI use, and regularly reviews its practices in light of emerging laws such as the EU AI Act and the Colorado AI Act.

How are Findem AI tools different from tools governed by AI laws that address disparate impact on protected classes, such as NYC Local Law 144, or from tools at issue in recent AI-in-hiring litigation?

Findem is designed to assist recruiters and hiring teams by organizing and presenting candidate information more effectively — not to make decisions or replace human judgment. For clarity, the Findem platform is not intended to be used as an Automated Employment Decision Tool (AEDT) under NYC Local Law 144. The platform does not use AI or algorithm-based systems to automatically reject, rank, or advance candidates. Instead, Findem AI tools help users apply their own search criteria and preferences to navigate large candidate pools. With Findem, customers remain in control of how the tool is configured and how AI-generated suggestions are applied, ensuring that hiring decisions are always human-led.

How does Findem comply with California’s Fair Employment and Housing Act (FEHA)?

As noted above, Findem and its products and services, including any of its AI tools, do not impermissibly discriminate based on any characteristic protected by law, such as race, color, religion, sex, or national origin. Findem engages an independent, qualified third-party auditor to conduct bias audits on Findem AI tools to ensure such tools do not, expressly or through proxy, impermissibly provide a preference to any candidate based on any characteristic protected by law. This helps Findem and its customers avoid liability under applicable anti-discrimination laws, such as California’s Fair Employment and Housing Act.

Has Findem conducted a third-party bias audit? How often will Findem conduct such audits?

Yes. Findem has conducted a third-party bias audit; the results can be accessed at https://trust.warden-ai.com/findem, and questions may be directed to trust@findem.ai.

Findem will conduct a third-party bias audit at least annually.

What are some other ways that Findem prioritizes responsible AI?

Findem prioritizes privacy, security, transparency, and fairness in the development of our AI-powered infrastructure:

  • Findem prioritizes human-centered design with AI-assisted workflows, automations, and features that support human capabilities.
  • Findem prioritizes robustness and safety by designing to minimize data risk, maximize data availability, and maintain data integrity.
  • Findem supports transparent and explainable decision-making processes through workflows, dashboards, and planning tools that enhance visibility and accountability.
  • Findem promotes fairness and avoidance of bias by taking subjectivity out of the search process, using attributes and enriched profiles instead of keywords and resumes (see the sketch after this list).
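As a loose illustration of the difference between keyword search over resume text and filtering on structured, enriched attributes (the profile fields, criteria, and data below are invented for illustration and are not Findem's actual schema):

    # Hypothetical contrast between keyword search over resume text and
    # filtering on structured, enriched attributes. Field names and data
    # are invented for illustration and are not Findem's actual schema.

    from dataclasses import dataclass

    @dataclass
    class EnrichedProfile:
        name: str
        years_people_management: int
        shipped_zero_to_one_product: bool

    def keyword_match(resume_text: str, keywords: list[str]) -> bool:
        # Brittle: rewards phrasing and penalizes candidates who describe
        # the same experience in different words.
        text = resume_text.lower()
        return all(k.lower() in text for k in keywords)

    def attribute_match(p: EnrichedProfile) -> bool:
        # Objective, job-related criteria applied uniformly to every profile.
        return p.years_people_management >= 3 and p.shipped_zero_to_one_product

    candidates = [
        EnrichedProfile("A", years_people_management=5, shipped_zero_to_one_product=True),
        EnrichedProfile("B", years_people_management=1, shipped_zero_to_one_product=True),
    ]
    print([c.name for c in candidates if attribute_match(c)])  # ['A']
    print(keyword_match("Led a team of engineers", ["people management"]))
    # False, despite clearly relevant experience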

Reviewed by the Responsible AI Governance Team at Findem, October 15, 2025.
