Since 1996, computer-generated sexually explicit images of children have been illegal to disseminate. That hasn’t prevented creators of deepfakes from targeting young girls.
How AI content can spread online misinformation
As AI-generated content becomes increasingly hard to differentiate from the real thing, users worry that the results could be dangerous.
The images proliferated widely for nearly a day, shining a spotlight on the increasingly alarming spread of AI-generated content and misinformation online.
The videos often tie back to real, shocking and scandalous events in the news. By remixing real news with false information and allegations, the videos quickly gain traction by appearing to provide new information about topics that are already attracting attention.
AI that has been used to create fake nude photos of women is now being used to digitally add clothing to images of women wearing revealing outfits, in a movement called "dignifAI."
Fake and misleading content created by artificial intelligence has rapidly gone from a theoretical threat to a startling reality. Dozens of tools and apps have sprung up to try to detect AI-created audio, but they are inherently flawed, experts told NBC News.
“Virtual influencers” have been around for years, but parents say the more recent explosive growth of generative AI technology has made it harder for casual social media users, especially children, to distinguish between real and artificial content.
Inside the proliferation of deepfake porn
Artificial intelligence has, for decades, been fodder for science fiction films, but suddenly the advanced technology seems to be everywhere. Here’s a guide to help you understand more about AI, including chatbots like ChatGPT and Bard.