A study has uncovered more than 3,000 dark web posts discussing the illicit use of ChatGPT and other large language models (LLMs) for various illegal activities.
These threat actors engage in schemes ranging from creating malicious versions of the chatbot to jailbreaking the original, according to a report from Kaspersky’s Digital Footprint Intelligence service.
The study revealed that stolen ChatGPT accounts and services offering their automated creation are being traded on the dark web, further escalating cybersecurity concerns.
Throughout 2023, Kaspersky’s Digital Footprint Intelligence detected a surge in dark web discussions related to the unlawful use of ChatGPT and other AI tools. The discussions peaked in March but have persisted, highlighting an ongoing interest in exploiting AI technologies for malicious purposes, according to Alisa Kulishenko, a digital footprint analyst at Kaspersky.
Threat actors, the Kaspersky study reveals, are actively exploring various schemes to implement ChatGPT and AI. Frequent topics include the development of malware and other illicit uses of language models, such as processing stolen user data, parsing files from infected devices, and beyond.
The popularity of AI tools has led some cybercriminal forums to integrate automated responses from ChatGPT or its equivalents. In addition, threat actors share jailbreaks – special sets of prompts that can unlock additional functionality – via various dark web channels, and devise ways to repurpose legitimate tools, such as pentesting utilities built on these models, for malicious ends.
Beyond ChatGPT, attention is also being drawn to alternative projects such as XXXGPT and FraudGPT, which are marketed as advanced alternatives boasting enhanced functionality and none of ChatGPT's original limitations.
Another significant threat identified in Kaspersky's report is the market for accounts for the paid version of ChatGPT. In 2023, an additional 3,000 dark web posts were discovered advertising ChatGPT accounts for sale across the dark web and some shadow Telegram channels. These posts either distributed stolen accounts or promoted auto-registration services that mass-create accounts on request.
Kaspersky's Kulishenko says that while AI tools themselves are not inherently dangerous, cybercriminals are trying to find efficient ways to use language models, lowering the barrier to entry into cybercrime and, in some cases, potentially increasing the number of cyberattacks.
However, it’s unlikely that generative AI and chatbots will revolutionise the attack landscape – at least in 2024. The automated nature of cyberattacks often means automated defenses. Nonetheless, staying informed about attackers’ activities is crucial to being ahead of adversaries in terms of corporate cybersecurity.
Kaspersky’s report shows the importance of continuous vigilance in the face of evolving cyber threats, as cybercriminals adapt and experiment with technologies in order to exploit vulnerabilities and conduct illicit activities on the dark web.