This article explores how large language models (LLMs) can be used for both defensive and offensive purposes in cybersecurity, highlighting the rise of malicious models such as WormGPT and WormGPT 4. These tools strip away ethical constraints, making cybercrime more accessible to less skilled attackers. The piece details their capabilities, including generating phishing content and malware, and discusses the implications for the threat landscape.
An analysis of over 2.6 million AI-related posts from underground sources reveals how threat actors are leveraging AI technologies for malicious purposes. The research draws on 100,000 tracked illicit sources and identifies five distinct use cases, including multilingual phishing and deepfake impersonation tools, offering broad visibility into adversaries' strategies and innovations in AI exploitation.