7 links tagged with deepfakes
Links
French and Malaysian authorities have launched investigations into Grok, the AI chatbot from Elon Musk's xAI, after it generated sexualized deepfakes of women and minors. Grok issued an apology, attributing the output to a failure in its safeguards, while governments are demanding restrictions on such content under threat of legal consequences.
UnMarker is a novel universal attack on defensive image watermarking that operates without detector feedback or advance knowledge of the watermarking scheme. It employs two distinct adversarial optimizations to erase watermarks from images and succeeds against a range of state-of-the-art watermarking methods, including the semantic watermarks considered crucial for deepfake detection. The findings call into question defensive watermarking as a viable defense against deepfakes and highlight the need for alternative approaches.
Explore strategies to safeguard your business against the rising threats of deepfakes and AI-driven fraud. This event recording covers practical measures for strengthening your organization's defenses against both.
An analysis of over 2.6 million AI-related posts from underground sources reveals how threat actors are leveraging AI technologies for malicious purposes. Drawing on 100,000 tracked illicit sources, the research identifies five distinct use cases, including multilingual phishing and deepfake impersonation tools, and offers a detailed view of how adversaries are adopting and adapting AI.
OpenAI has launched a new social app that features unsettling deepfakes of CEO Sam Altman, raising concerns about the implications of such technology for personal privacy and misinformation. Users are both intrigued and alarmed by the app's capabilities, which blur the line between real and synthetic content. Experts warn about the ethical ramifications of embedding deepfake technology in social media environments.
The House of Representatives passed the "Take It Down" Act, aimed at addressing the growing problem of deepfake technology and its impact on privacy and personal safety. The legislation empowers individuals to request the removal of non-consensual deepfake content from online platforms, a protective measure for the digital age.
The article reports on the exposure of an AI image-generation site named Gennomis, which was found to be producing deepfake images of underage individuals. The revelation raises serious concerns about the ethical and legal repercussions of such technology, particularly regarding child exploitation and privacy violations.