Generative AI Hacking Tools and What They Mean for Defenders

Among all the talk and hype about AI in cybersecurity, it’s generative AI that perhaps has the most potential to impact the industry. These large language models may prove to be a double-edged sword in cybersecurity, with benefits for defenders but also the emergence of generative AI hacking tools for threat actors to use in attacks. This blog delves into GenAI hacking tools and highlights some things to bear in mind about their impact on your cyber defenses. 

The Emergence and Threat of GenAI Hacking Tools

Generative AI describes a class of artificial intelligence that uses large language models to generate human-like text, images, music, or other outputs by learning from vast amounts of existing data. Advances in the capabilities of these models in 2022 saw them skyrocket into public awareness, with ChatGPT gaining a million users within 5 days of launching. Salesforce reports that 45% of surveyed Americans now use generative AI. 

While the companies that develop generative AI tools try to use guardrails and controls to protect against potential malicious use, opportunistic threat actors and cyber crime operators are creating their own custom-crafted tools. Here are just a few examples circulating in dark web marketplaces and being discussed on cyber crime forums.

WormGPT 

Based on the open-source GPT-J language model, WormGPT helps threat actors launch business email compromise (BEC) scams and other phishing attacks. The success of these attacks often depends on creating highly convincing messages, which take time and require proficient native language skills. WormGPT automates the process and lowers the barriers to entry for novices who might not be fluent in the target recipient’s language. The tool works similarly to ChatGPT but without the restrictions that would normally see requests to write phishing emails or BEC scams being declined. 

FraudGPT

FraudGPT emerged in mid-2023, around the same time as WormGPT. However, this tool is a more overtly malicious one that not only writes phishing emails but also creates fake landing pages, writes malicious code, and finds vulnerabilities in systems. The tool’s wider range of malicious capabilities may well appeal to a broader range of hackers and scammers. Like WormGPT, users can subscribe monthly or annually, although FraudGPT is pricier. 

XXXGPT

XXXGPT is another adversarial AI tool that came on the black hat scene shortly after WormGPT and FraudGPT. This tool focuses more on facilitating technical, code-based hacks by writing remote access trojans (RATs), keyloggers, infostealers, cryptostealers, and more. 

How Might GenAI Affect Cybersecurity Defense?

Robust cybersecurity defense is about constant adaptation. Honing and refining your strategies and tactics in the face of emerging and changing threats is essential for risk reduction. With the rise of generative AI hacking tools, here are some things to consider. 

Preparation to face higher volumes of advanced hacking techniques

Adversarial generative AI tools make sophisticated hacking techniques more accessible to less skilled people. Tools like XXXGPT provide capabilities that previously required deep technical knowledge, such as writing complex malware code. FraudGPT analyzes large datasets to craft advanced social engineering lures, with little need for social engineering prowess on the attacker's part. This democratization is sure to increase the volume of advanced threats you face. The attacks may not be particularly novel, but higher volumes of more advanced threats may prove hard for security teams to fend off.

The need to bolster email verification and account security measures

The ability of these tools to generate convincing messages tailored to individual victims based on their personal and professional data makes it trickier for staff outside of cybersecurity teams to spot phishing and BEC scams. In light of this, there’s a more pressing need to bolster email verification and account security measures with things like:

  • Multi-factor authentication, especially the more adaptive forms of this technology that ask users for extra evidence to prove their identities during high-risk scenarios. 

  • Anomaly detection systems that use AI in your favor to monitor for unusual activity within an account, such as accessing emails from a new location or attempting to transfer large sums of money. 

  • Internal policies that require direct verification through alternative communication channels (like phone calls or in-person confirmation for unusual financial requests), reducing the risk of falling victim to BEC scams. This human element ensures a higher level of verification for requests involving sensitive actions or sums of money.
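The anomaly detection idea above can be sketched in a few lines of rule-based logic. This is a minimal, hypothetical illustration, not a production detector or any vendor's API: the `AccountProfile` fields, the location check, and the transfer threshold are all assumptions chosen for the example. Real systems would combine many more signals (device fingerprints, login times, velocity checks) and often a trained model rather than fixed rules.

```python
# Minimal sketch of rule-based account anomaly detection.
# All names and thresholds here are hypothetical, for illustration only.
from dataclasses import dataclass, field


@dataclass
class AccountProfile:
    """Baseline behavior we expect for an account (assumed fields)."""
    known_locations: set = field(default_factory=set)
    typical_transfer_limit: float = 1_000.0  # assumed per-account baseline


def flag_anomalies(profile: AccountProfile, event: dict) -> list:
    """Return a list of reasons an account event looks suspicious."""
    reasons = []
    # Login or email access from a location never seen for this account.
    if event["location"] not in profile.known_locations:
        reasons.append("login from new location")
    # Transfer request well above the account's usual amounts.
    if event.get("transfer_amount", 0) > profile.typical_transfer_limit:
        reasons.append("unusually large transfer")
    return reasons


profile = AccountProfile(known_locations={"London", "Manchester"})
suspicious = flag_anomalies(
    profile, {"location": "Lagos", "transfer_amount": 50_000}
)
print(suspicious)
```

A flagged event like this would typically trigger a step-up action, such as the adaptive MFA challenge described above, rather than an outright block.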

The growing value of dark web threat intel

While often seen as a shady corner of the Internet, the dark web is an increasingly valuable source of threat intel in a world of GenAI hacks. Gaining preemptive insights from the dark web—where many of these generative AI tools and jailbreaks are discussed, developed, and traded—becomes crucial. Dark web monitoring allows your security teams to gain a deeper understanding of attacker behavior, including what types of tools are in demand, how they are being developed, and the specific vulnerabilities they target. These insights can help in anticipating possible attack vectors and understanding the strategic intent behind the development of future GenAI hacking tools.

More practice in controlled, simulated environments

With AI being used to enhance the speed, stealth, and sophistication of attacks, your incident response and security operations teams must similarly adapt their skills. Security pros need to practice in environments where they can experience these complexities first-hand, ideally developing the agility and strategic thinking they need to tackle AI-modified threats. Simulated environments also enable responders to practice rapid detection, containment, and mitigation strategies to keep up with the speed of AI-driven attacks.

Cloud Range’s live-fire cyber ranges prepare and equip your defenders to tackle threats from generative AI hacking tools. You get controlled, simulated real-world environments for cybersecurity professionals to train in and practice responding to dynamic and complex cyber attacks, and a large and ever-growing catalog of simulated exercises to choose from. Our cyber ranges facilitate team-based training simulations essential for effective cyber defense, especially when dealing with coordinated AI-driven attacks. Your teams can practice communication and collaborative problem-solving to manage complex and fast-evolving threats. 

Request a demo here. 
