
Findem ensures ethical and responsible use of AI in a number of ways.
First, Findem and its products and services, including its AI tools, do not impermissibly discriminate based on any characteristic protected by law, such as race, color, religion, sex, or national origin. Findem engages an independent, qualified third-party auditor to conduct bias audits on Findem AI tools to ensure such tools do not, expressly or through proxy, impermissibly provide a preference to any candidate based on any characteristic protected by law.
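Purely as an illustration of what such a bias audit typically measures, here is a minimal Python sketch of an impact-ratio calculation of the kind commonly used in algorithmic bias audits (for example, under NYC Local Law 144). The data, function name, and group labels are hypothetical and do not reflect Findem's or its auditor's actual methodology.

```python
# Minimal sketch of an impact-ratio bias-audit metric. All data and
# names are hypothetical illustrations, not Findem's audit method.
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """For each group, compute its selection rate divided by the highest
    group's selection rate. Ratios near 1.0 indicate parity; values below
    roughly 0.8 are often flagged for review (the 'four-fifths rule')."""
    totals: Counter[str] = Counter()
    selected: Counter[str] = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit sample: (demographic group, advanced to next stage?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(impact_ratios(sample))  # -> {'A': 1.0, 'B': 0.5}
```

An audit of this general shape is run per protected characteristic; a low ratio for any group, whether produced directly or through a proxy variable, is what the auditor looks for.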
Second, Findem focuses on helping employers evaluate candidates based on objective job qualifications so that candidates of all backgrounds have a fair shot at getting the job. This focus on diversity through objective, job-related standards drives our business. By surfacing candidates based on objective criteria rather than requiring them to self-select into individual roles, the tool widens the aperture and discovers new talent. For additional insights on this topic, please see Slow Thinking Fast: How AI Trumped Human Bias.
Third, Findem continuously evaluates its platform to support fairness and inclusivity in hiring. These evaluations show that the platform helps teams surface diverse candidate slates and find the best talent by supporting evidence-based selection by humans. Additional research has shown that reducing unconscious bias in early screening, such as in skills-based evaluation, can significantly improve representation. The platform is designed with similar goals in mind: it uses AI to focus on job-relevant attributes, helping teams engage a broader, more qualified candidate pool by mitigating bias that can be introduced by demographic signals.
Fourth, the Findem platform enables customers to track pipeline diversity and measure how their hiring efforts align with inclusion goals, consistent with applicable law, including employer reporting obligations such as those under Equal Employment Opportunity Commission (EEOC) rules. All customers are provided aggregate-level diversity data based on a probabilistic approach, drawing on and enhancing methodologies similar to those used by the U.S. Census Bureau, to provide insight into the overall diversity of candidate pipelines. These insights help recruiters and hiring teams build more inclusive searches and track diversity across the funnel. When EEOC data is provided via ATS integration, that information is used in place of the probabilistic data and displayed within the product for that customer only.
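To make the probabilistic approach concrete, here is a minimal Python sketch of how aggregate-level estimates can be derived from per-candidate probability distributions, in the spirit of Census-Bureau-style methods (such as surname-based Bayesian inference). Every input, name, and number below is an illustrative assumption, not Findem's implementation.

```python
# Minimal sketch of aggregate-level diversity estimation from
# per-candidate probability distributions. Hypothetical inputs only;
# this is not Findem's implementation.

def aggregate_pipeline_diversity(
    candidate_probs: list[dict[str, float]],
) -> dict[str, float]:
    """Sum each candidate's probability vector and normalize by pipeline
    size, yielding an expected demographic composition for the pipeline
    as a whole without assigning any individual to a category."""
    totals: dict[str, float] = {}
    for probs in candidate_probs:
        for category, p in probs.items():
            totals[category] = totals.get(category, 0.0) + p
    n = len(candidate_probs)
    return {category: total / n for category, total in totals.items()}

# Hypothetical per-candidate probability distributions:
pipeline = [
    {"group_x": 0.7, "group_y": 0.3},
    {"group_x": 0.2, "group_y": 0.8},
    {"group_x": 0.5, "group_y": 0.5},
]
print(aggregate_pipeline_diversity(pipeline))
# -> {'group_x': 0.466..., 'group_y': 0.533...}
```

Because only the aggregated totals are reported, a method of this shape yields pipeline-level insight without labeling any individual candidate.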
Findem services do not include any of the AI practices prohibited under the EU AI Act. In particular, Findem does not conduct emotion recognition in the workplace, does not engage in social scoring, does not rely on deceptive or manipulative techniques to distort the behavior of individuals, and does not assess the risk of individuals committing criminal offenses. Findem continues to evaluate how evolving AI regulations may apply to its products, in particular rules on high-risk AI systems. Relevant to these evaluations: Findem does not process any biometric information; AI is not used to directly match or sort job candidates; Findem shows all applications, and AI is not used to rank candidates based on their enriched profiles or to exclude any candidates from the selection process; and Findem does not place targeted job ads. The company supports transparency, customer control, and responsible AI use, and regularly reviews its practices in light of emerging laws such as the EU AI Act and the Colorado AI Act.
Findem is designed to assist recruiters and hiring teams by organizing and presenting candidate information more effectively, not to make decisions or replace human judgment. For clarity, the Findem platform is not intended to be used as an Automated Employment Decision Tool (AEDT) under New York City Local Law 144. The platform does not use AI or algorithm-based systems to automatically reject, rank, or advance candidates. Instead, Findem AI tools help users apply their own search criteria and preferences to navigate large candidate pools. With Findem, customers remain in control of how the tool is configured and how AI-generated suggestions are applied, ensuring that hiring decisions are always human-led.
As noted above, Findem and its products and services, including its AI tools, do not impermissibly discriminate based on any characteristic protected by law, and Findem engages an independent, qualified third-party auditor to conduct bias audits on its AI tools to ensure they do not, expressly or through proxy, impermissibly provide a preference to any candidate based on a protected characteristic. This helps Findem and its customers avoid liability under applicable anti-discrimination laws, such as California's Fair Employment and Housing Act.
Yes. Findem has conducted a third-party bias audit; the results can be accessed at https://trust.warden-ai.com/findem, and questions can be directed to trust@findem.ai.
Findem will conduct a third-party bias audit at least annually.
Findem prioritizes privacy, security, transparency, and fairness in the development of its AI-powered infrastructure.
Reviewed by the Responsible AI Governance Team at Findem, October 15, 2025.
