Anthropic, the developer of the AI assistant Claude, has introduced an identity verification process for a limited number of users. This measure is being implemented in response to instances of potentially fraudulent or abusive activity. Users flagged by the system will be asked to submit government-issued identification and a live selfie to confirm their identity.
The verification process is managed by Persona Identities, a third-party startup specializing in identity verification services. Persona collects and securely stores the submitted identification data. Anthropic retains control over how this personal information is handled, limiting its use strictly to supporting verification and fraud prevention.
An Anthropic spokesperson explained that verification requests are issued only when the system detects behavior that violates the company’s usage policies. Triggers include repeated policy breaches, account creation from unsupported geographic locations, terms-of-service violations, and suspected underage use.
Accounts found to be in violation through this process may be banned. However, Anthropic provides an appeals option for users who believe their accounts were unfairly restricted.
The rollout has prompted mixed reactions among Claude users. Some have shared screenshots of the verification request, which describes the process as a “quick identity check” expected to take approximately two minutes, requiring users to submit an ID and grant mobile camera access. Upon completion, users receive confirmation of their verified status.
Some users have voiced concerns on social media, criticizing the introduction of ID verification and raising privacy objections. In response, Anthropic has clarified several points in its help documentation, emphasizing that it does not train its AI models on verification data and does not share ID information beyond Persona Identities, except when legally compelled. The data collected is kept to the minimum necessary for verification.
Anthropic’s move to introduce identity verification reflects increasing efforts among AI service providers to combat misuse and ensure compliance with ethical and legal standards. The company continues to monitor and refine its policies to balance user security and privacy with accessibility.