Fake Doctors Promote False Treatments on TikTok Using Deepfake Technology

Deepfake technology can create highly realistic videos, images, and audio, and it is increasingly used not only for celebrity impersonation and influencing public opinion but also for identity theft and a variety of scams. On social media platforms such as TikTok and Instagram, the spread of deepfakes, along with their potential to cause harm, is particularly concerning.

Researchers from ESET in Latin America recently identified a campaign on these platforms in which AI-generated avatars posed as gynecologists, nutritionists, and other healthcare professionals promoting supplements and wellness products. These videos, often exceptionally polished and convincing, were presented as medical advice, misleading unsuspecting users into making questionable, and potentially dangerous, purchases.

Each video follows a similar pattern: a talking avatar, usually placed in one corner of the screen, offers health or beauty advice with an air of scientific authority, subtly steering viewers toward specific products for sale. By impersonating specialists, these deepfake avatars exploit public trust in the medical profession to boost sales, a tactic both unethical and effective.

In one case, a ‘doctor’ promotes a ‘natural remedy’ as a superior alternative to Ozempic, a well-known weight-loss drug. The video promises dramatic results and directs viewers to an Amazon page where the product is described simply as ‘relaxation drops’ or ‘edema aids,’ with no connection to the exaggerated benefits advertised. Other videos promote unapproved drugs or false treatments for serious illnesses, sometimes using forged images or videos of real doctors.

These videos are created with legitimate AI tools that let anyone upload short footage and turn it into a customized avatar. While this technology offers opportunities for influencers looking to expand their online presence, it can also be used to spread misleading claims and deceive the public. Tools intended for marketing can easily become vehicles for misinformation.

‘We have identified over 20 accounts on TikTok and Instagram using fake doctors to promote their products,’ say Martina López and Tomáš Foltýn from ESET’s global security software team. ‘In one case, an account pretended to be a gynecologist with 13 years of experience, while in reality the avatar was traced directly to the app’s library. Although such misuse violates the terms of use of most AI tools, it highlights how easily they can be turned into channels for misinformation.’

The consequences can be severe: these deepfakes may erode trust in online health advice, promote harmful ‘treatments,’ and delay proper medical care. As AI becomes more accessible, spotting impersonations such as deepfake videos grows harder, even for tech-savvy individuals.

There are, however, signs that can help identify them: poor lip-syncing or unnatural facial expressions; visual glitches such as blurry edges or sudden lighting changes; robotic or overly smooth voices; and new profiles with few followers or no posting history. Be wary of exaggerated claims like ‘miracle cures,’ ‘guaranteed results,’ or ‘doctors hate this trick’ offered without credible sources. Always verify claims through trusted medical or scientific sources, avoid sharing suspicious content, and report misleading videos to the platform where you find them.

As AI tools evolve, distinguishing authentic content from fakes will only become harder.
This threat underscores the need for both technological safety mechanisms and improved digital literacy to protect us from misinformation and scams that could negatively impact our health and financial well-being.