The rise of artificial intelligence is rapidly reshaping social media, but a new CNET survey reveals that most Americans struggle to distinguish authentic content from AI-generated fakes. Despite widespread awareness of AI’s presence online, only 44% of US adults who use social media are confident in their ability to identify AI-created images and videos. This gap between how often people encounter AI content and how reliably they can detect it highlights a growing challenge for anyone navigating an increasingly synthetic digital landscape.
The Ubiquity of AI-Generated Content
Nearly all US adults (94%) who use social media believe they encounter content created or altered by AI. This includes hyperrealistic images, bizarre videos, and text that mimics human writing with unsettling accuracy. Tools like OpenAI’s Sora and Google’s Nano Banana are making it easier than ever to produce convincing deepfakes, eroding trust in visual and textual information.
The problem isn’t simply that AI content exists; it’s that most people can’t reliably detect it. A quarter (25%) of US adults admit they lack the confidence to differentiate between real and fake media, and older generations are the most likely to say so: 40% of Boomers and 28% of Gen Xers report low certainty.
How Users React to AI-Generated Media
The survey also explored how people are responding to the proliferation of AI-generated content:
- Verification Attempts: 72% of US adults take action to verify content when suspicious, with Gen Z (84%) leading the way. The most common method is visually inspecting for artifacts (60%), though newer AI models are becoming more sophisticated at avoiding obvious errors.
- Labeling Concerns: Half of respondents (51%) believe better labeling of AI-generated content is crucial, particularly among Millennials and Gen Z (56% and 55% respectively).
- Calls for Restriction: 21% of US adults advocate for a complete ban on AI-generated content, with Gen Z being the most vocal at 25%. A further 36% support strict regulation.
- Perceived Value: Only 11% find AI content useful, informative, or entertaining, while 28% view it as having little to no value.
The Limits of Current Solutions
While some platforms are introducing tools to filter AI-generated content (Pinterest being one example), the survey suggests a broader systemic problem. The increasing realism of AI makes simple visual checks less effective. Alternative methods, such as checking for labels (30%) or searching for content elsewhere (25%), are becoming more important, but even these have limitations.
A concerning 25% of US adults take no action at all to verify content, a figure that rises among Boomers (36%) and Gen Xers (29%). This inaction is dangerous given the potential for AI to be weaponized for fraud, manipulation, or disinformation.
What Can Be Done?
Combating AI-generated misinformation requires a multi-faceted approach:
- Improved Detection Tools: More sophisticated AI detection technologies are needed, but they must stay ahead of the rapidly evolving capabilities of generative AI.
- Enhanced Labeling Standards: Clear and consistent labeling of AI-generated content is essential, though enforcement remains a challenge.
- Media Literacy Education: Raising public awareness about the risks of deepfakes and AI manipulation is critical.
- Platform Responsibility: Social media platforms must take greater responsibility for identifying and mitigating the spread of synthetic content.
The reality is that AI-generated content isn’t going away. Until effective countermeasures are implemented, individuals must remain vigilant, skeptical, and proactive in verifying the information they encounter online. The survey underscores that trust in what we see and read is eroding, and restoring it will require a collective effort from technology companies, policymakers, and the public alike.