AOA Focus

‘Living an episode of Black Mirror:’ AI deepfakes target optometrist

September 22, 2025

Your eyes deceive you. The rise of AI-generated deepfake and “slop” video content is emerging as a significant threat to public health as scammers impersonate trusted medical professionals.

Tag(s): Clinical Eye Care, Public Health


Key Takeaways

  • Deepfakes and AI slop videos are increasingly convincing, impersonating trusted medical professionals to spread dangerous health misinformation.
  • Social media platforms struggle to identify and remove spoofed content, making users’ vigilance the first line of defense.  
  • Help patients learn to be skeptical of the too-good-to-be-true health claims and products they encounter online. 
  • Find helpful tips for identifying deepfake media content below. 

Chances are you—or your patients—know Joseph Allen, O.D.  

Whether on the conference lecture circuit or any of his social channels, @DoctorEyeHealth, Dr. Allen is a recognizable face not just in optometry but also, importantly, a trusted voice for sound eye health and vision care on social media. 

His hit YouTube channel has amassed millions of views on content ranging from contact lenses for beginners to explanations of chalazia and styes. He has educated viewers on everything from floaters to dry eye, and he’s a trusted source of information on comprehensive eye care. 

So, imagine Dr. Allen’s confusion when he encountered an Instagram direct message: “Hey, is this really you?” Curious, he clicked the post. 

“It looked like me.” 

“It sounded like me.”  

Dr. Allen was incredulous; it was one of those “for three easy payments of $19.99, you, too, can reverse your nearsightedness with just a few eye drops a day” videos. But what Dr. Allen discovered was worse—it wasn’t just one video. And it was not only on Instagram. 

“There were all these videos, of me, or what looks and sounds like me, with a slightly different handle,” he says. “And they’re making outrageous claims.” 


Dr. Allen adds: “In some of these, they’re telling you not to see your eye care provider and that our entire profession is a scam. It wasn’t just some impersonator on Instagram or TikTok; these were deepfakes. It’s scary.”

The rise of deepfakes and AI ‘slop’ videos 

To the heart of it: What is artificial intelligence (AI) and how is it being used? Depending on one’s disposition, AI is either the next great socio-economic shift, akin to the Industrial Revolution, or the beginning of the end.  

Ask the large-language model (LLM) ChatGPT and it says: “I’m here to help you—whether that’s answering questions, solving problems, brainstorming ideas, or just having a conversation. You get to decide what that means.” 

Well, inevitably, some have decided this means using the high-tech tool for an age-old profession: scamming. 

But first, the terminology. AI is no longer bound to text-based responses, chatbots or digital assistants; it now creates realistic-looking images and videos. Known as generative AI, this synthetic content creation has a dark side, including: 

  • Deepfakes. AI-manipulated or fabricated content that impersonates a real person for the purposes of spreading malicious or false information.  
  • AI slop video. AI-produced video content at scale that exploits social media’s engagement algorithms and floods platforms like YouTube, TikTok or Instagram, edging out original content creators. 

Coming full circle, AI-generated deepfake and slop videos increasingly target real-world doctors, impersonating them to create low-effort, algorithm-driven content that can spread dangerous health misinformation. This surge in synthetic yet authentic-looking content makes it ever harder for patients and the public to distinguish credible medical advice from deceptive content. 

In August, a CBS News investigation reported finding dozens of social accounts and over 100 videos depicting fictitious doctors, “some using the identities of real physicians,” giving advice or selling products. Some were viewed millions of times. 

“Deep fakes pose an especially significant threat in the medical field because it deals with human lives, and any mistake or error could lead to a chain of terrible events,” note authors of a 2023 article in Tech Science Press, “Deep Fakes in Healthcare: How Deep Learning Can Help to Detect Forgeries.” 

So, what can be done about this manipulated content? 

Uncharted territory 

Unfortunately, not much at present. 

Major social media platforms are aware of the rise in deepfakes and AI slop video content, but enforcement often starts with a user-generated report. Some platforms label AI-generated content, while others have tweaked their use policies to mitigate “inauthentic” content, as is the case on YouTube.  

But again, it often comes down to a suspicious user flagging the content first. That’s how Dr. Allen was able to challenge the deepfakes targeting his channel. He even went as far as to secure legal counsel to draft a cease-and-desist letter. However, reporting the content is just the beginning of a very long process. 

“Sure, we can get the account deplatformed, but it took us a full month just to get a hold of TikTok, then when we did, it took us three weeks to prove we are who we say we are,” Dr. Allen says. “All this effort, then within two or three hours, these scammers have made a new account and reuploaded all the previous content.” 

“It’s futile,” he adds. 

All the while, the supplement itself, tracked back to a Chinese company, is moving millions of dollars in product on Amazon, Dr. Allen says. Whether it’s the company or some unaffiliated actor marketing the product, Dr. Allen and his counsel aren’t sure. But it certainly raises the question: What’s actually in these supplements that Dr. Allen’s likeness has been stolen to market? 

Further, what’s the legal recourse for someone harmed by a product that a fake Dr. Allen purportedly recommends? 

“That’s why they’re targeting eye care,” he thinks. “The people who are most vulnerable are people who have poor eyesight; they’re not looking at the fine details of these videos.” 

“It’s like we’re living an episode of ‘Black Mirror’ right now,” Dr. Allen adds, referencing the streaming series about near-future dystopias spurred by technology run amok.  

AI-generated deepfakes have caught Congress’s attention, resulting most recently in a law banning nonconsensual intimate imagery, but advocates argue more can and should be done to curb their misuse. Some have proposed using AI to catch AI, i.e., using AI models to detect deepfake discrepancies. 

How to spot AI-generated videos 

For the time being, Dr. Allen recommends watching for these tell-tale signs of AI-generated videos. The caveat: AI is ever-progressing, and some of these signs are likely to be addressed in the months, even weeks, ahead: 

  1. Look closely at the video content for odd editing, e.g., do the subject’s lips match up with the vocals and audio, or do they blink too much or too little? 
  2. If it’s selling a product that seems too good to be true, it probably is; view such content with a discerning eye.
  3. Does it feel natural? Deepfakes still struggle to portray lighting accurately, and mismatched lighting conditions or reflections can make the subject look off. 

Looking for more information about spotting AI-generated images or deepfakes? The Massachusetts Institute of Technology Media Lab offers this helpful resource. 

Unfortunately, Dr. Allen’s tale still has no resolution. Nearly 10 months into fighting deepfakes, he says he feels little further along than when he started. He has posted to warn his viewers of the spoofs, and one follower was brave enough to call out a blatantly fake post. Yet he still feels ambivalent about the future. 

"It’s difficult to understand where all this is coming from, and now nobody knows who to trust,” he says. “I don’t know where this is going.” 

The antidote isn’t a simple one. As AI blurs the line between reality and fiction, the responsibility falls on individual critical thinking and the very human act of asking: Is this real?