Websites now rely heavily on age verification to keep users safe and to comply with regulations. These checks aim to prevent minors from accessing inappropriate content or engaging in risky online activities. While the intention is noble, the methods employed often create a significant security risk of their own: a treasure trove of personal information ripe for exploitation by hackers.
Age verification can take several forms: AI analysis of uploaded photos or selfies to estimate age, submission of photo IDs like driver’s licenses or passports, and checks against verified credit card details. While these methods may seem stringent, each requires users to hand over sensitive personal data, exposing them to substantial privacy risks.
This risk has become painfully evident in recent high-profile cases. In October 2025, Discord, a popular platform among gamers, suffered a security breach that exposed the personal data of 70,000 users globally. The hackers gained access through a third-party service provider used for age verification – though the precise method remains unclear.
Similarly, in July 2025, Tea, an app designed to help women share dating safety information anonymously, was also hacked. This breach exposed not only user selfies and photo IDs but also private messages and shared content. These incidents illustrate a troubling trend: the sensitive data collected for age verification, often handled by outsourced third parties, is an increasingly attractive target for cyberattacks.
The repercussions of these breaches can be devastating. Leaked selfies and photo IDs can fuel identity theft, fraud, and even more insidious crimes facilitated by deepfake technology and advanced AI tools. The very data collected for safety becomes the weapon used against users.
While regulations like the UK’s Online Safety Act aim to enhance user protection by mandating robust age verification, they leave a critical gap: enforcement of data deletion practices. Discord’s own website previously stated that it would not permanently store identity documents or selfie videos after age confirmation, but such assurances ring hollow in light of recent events.
The UK Department for Science, Innovation and Technology has issued guidance emphasizing the need for platforms to minimize the data they collect during age verification, consistent with GDPR data-minimization principles. However, incidents like those involving Tea and Discord demonstrate that guidance alone is insufficient.
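To make “data minimization” concrete, here is a rough sketch of what a privacy-preserving check could look like in practice. It is written in Python purely for illustration; the estimate_age function and the signing key are hypothetical placeholders, not any platform’s real API. The idea is that the service keeps only a signed yes/no answer and a timestamp, and the uploaded image is never written to disk or retained after the check.

```python
import hashlib
import hmac
import json
import time


# Hypothetical stand-in for whatever age-estimation or ID-checking service a
# platform actually uses; assumed here purely for illustration.
def estimate_age(image_bytes: bytes) -> int:
    return 21  # stubbed result


SIGNING_KEY = b"server-side-secret"  # hypothetical key, held server-side only


def verify_age(image_bytes: bytes, minimum_age: int = 18) -> dict:
    """Return a signed pass/fail claim; never persist the image itself."""
    is_over_minimum = estimate_age(image_bytes) >= minimum_age

    # The only record retained: a boolean outcome plus a timestamp, signed so
    # the platform can rely on it later without re-examining the document.
    claim = {"over_minimum_age": is_over_minimum, "checked_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

    # Drop the reference to the raw image so nothing sensitive lingers in
    # application state once the check is complete.
    del image_bytes

    return {"claim": claim, "signature": signature}


if __name__ == "__main__":
    result = verify_age(b"fake image bytes for illustration")
    print(result["claim"])  # e.g. {'over_minimum_age': True, 'checked_at': 1730000000}
```

The point of the sketch is not the specific code but the design choice it illustrates: if the only thing stored is a signed boolean, a breach of the verification database yields nothing an attacker can reuse, which is exactly the outcome that data-minimization guidance is pushing toward.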
This issue demands a more proactive approach. Regulators must strengthen enforcement mechanisms, ensuring third-party providers adhere to strict data security and deletion protocols – especially when these providers operate outside the UK’s jurisdiction. Only through stricter oversight and demonstrable action can age verification truly protect users instead of inadvertently exposing them to greater harm.