San Francisco-based AI startup Checkr is targeting government contracts to assist with identity verification in welfare and benefit programs, including Medicare. Daniel Yanisse, CEO of Checkr, said the company seeks to help reduce fraud and waste by verifying eligibility for government assistance. While Checkr currently focuses on screening new employees for clients like Uber and Lyft, Yanisse envisions extending its AI-powered background checks to public benefit systems, hoping to create a more efficient and accurate verification process.
Checkr uses artificial intelligence to generate background checks by reviewing criminal records, motor vehicle reports, and other data. The company achieved a valuation above $5.7 billion after securing $120 million in funding in 2022 and reported revenues exceeding $800 million in 2025, serving over 120,000 customers. Despite these successes, the company has not yet released a product specifically for government use and described discussions as still conceptual.
Yanisse noted significant challenges in government verification, especially concerning fraud within programs like Medicare and Social Security. Improper payments, which may include fraudulent claims or cases of unverified income, were estimated at $28.83 billion in 2025 for Medicare alone, according to government data. Checkr pointed to findings such as a study by Middesk, which identified over half a billion dollars in Medicaid payments going to providers barred for criminal activity or misconduct.
However, experts are skeptical about using AI for such critical government functions. Stuart Russell, a computer science professor at UC Berkeley, pointed out that current AI systems, including large language models, cannot reliably explain their decisions, which makes errors difficult to contest. He also referenced European regulations that prohibit fully automated decisions with substantial legal impacts.
Political scientist Baobao Zhang of Syracuse University emphasized the importance of cautious evaluation before deploying AI for welfare fraud detection, citing historic failures such as Indiana's abandoned $1.3 billion IBM contract, which the state terminated and litigated after errors caused wrongful benefit denials. Similarly, Australia's Robodebt system, which used flawed algorithms to demand repayments from welfare recipients, was ruled illegal and linked to serious harm, including several suicides.
Emory University's Ifeoma Ajunwa recommended establishing advisory councils that include technologists, social scientists, and affected communities to guide AI integration in government settings. Ajunwa stressed the need for safeguards that protect citizens while pursuing the efficiency and cost reductions AI technologies offer.
As concerns about fraud and waste persist, Checkr remains focused on potential collaboration with government agencies to improve verification processes, while acknowledging the technical, ethical, and legal complexities involved in automating welfare eligibility decisions.