The Evolution of Cyberthreats: Preparing for AI-Powered Attacks
Jessica Slendak

The newest tools in the cybercriminal arsenal are increasingly powered by artificial intelligence (AI), and the transition to AI-driven cybercrime is already underway. Traditionally, cybercriminals have relied on an array of tools acquired from the dark web to penetrate their targets. These off-the-shelf tools let attackers gather the information they need with little effort: rather than developing code for a specific target, they purchase kits through underground networks and dark web marketplaces, then tailor malware or assemble exploits from the pre-packaged components. As long as profits keep rising, the underground market for cybercrime services thrives, fueled by strong demand for illicit tools and a steady supply from clandestine developers.

This underground ecosystem has operated for years and shows no signs of diminishing. Yet with the advent of highly sophisticated large language models such as ChatGPT, the tools available to wrongdoers are changing. GPT stands for “Generative Pre-trained Transformer”: an artificial intelligence model, developed by OpenAI, that generates text by predicting the next word in a sentence from the words that precede it, using deep learning and vast amounts of training data to produce human-like text. We are now witnessing the emergence of unregulated AI models such as FraudGPT and WormGPT, which operate without the ethical constraints imposed on publicly available services like ChatGPT. Offered via subscription in the more obscure corners of the internet, they harness stolen or open-source training data to power private GPT systems that assist cybercriminals indiscriminately, with no regard for the legality or ethics of the requests they fulfill.
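
To make the next-word-prediction idea concrete, the short Python sketch below uses the openly available GPT-2 model from the Hugging Face transformers library as a stand-in (ChatGPT itself is not downloadable, so the model, prompt, and parameters here are purely illustrative):

    # Illustrative only: GPT-2 stands in for larger GPT-style models.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Artificial intelligence is"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Score every vocabulary token as a possible continuation of the prompt.
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    next_token_id = int(torch.argmax(next_token_logits))
    print("Most likely next token:", tokenizer.decode([next_token_id]))

    # Repeating that next-token prediction step yields whole passages of text.
    output_ids = model.generate(**inputs, max_new_tokens=15, do_sample=False,
                                pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The same prediction loop that completes a harmless sentence can, in an unconstrained model, complete a phishing email or a script, which is what makes the uncensored variants attractive to criminals.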

Reports indicate that some of these emerging systems can generate malware designed to evade detection. They can orchestrate end-to-end phishing operations, crafting both the deceptive messages and the code for the malicious landing pages that harvest credentials. With such technology at their disposal, the scope of advanced criminal activity is limited only by the perpetrator's creativity, and perhaps not even that, if the AI itself suggests ways to refine the attack.

Facing these sophisticated threats, institutions and individuals must remain vigilant and proactive in safeguarding their operations and personal information. While these advancements raise real concerns about data security in the era of AI, there are measures your institution can take to protect itself. Almost all of these attacks depend on human error to give bad actors access to your environment, so regular training and awareness programs are the first line of defense. Employees should be kept up to date on the latest cyber threats and on the security best practices that thwart phishing and social engineering attacks.

Beyond improved training programs, hardening your processes and technology establishes a more secure infrastructure. Essential steps include implementing multi-factor authentication (MFA) and biometric identity verification, and performing routine internal security audits to confirm that systems are up to date and free of known vulnerabilities. Choosing the right security technology, such as Managed Detection and Response (MDR), can automate threat identification and significantly reduce the likelihood of a breach.
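
As a concrete illustration of the MFA step, the sketch below shows what server-side verification of a time-based one-time password (TOTP) might look like, using the open-source pyotp library; the function names and in-memory handling are hypothetical placeholders rather than a prescription for any particular product:

    # Minimal TOTP-based second-factor check (illustrative; a real deployment
    # would keep secrets in a hardened user store, not in local variables).
    import pyotp

    def enroll_user() -> str:
        # Generate a per-user secret; store it server-side and share it once
        # with the user's authenticator app (typically via a QR code).
        return pyotp.random_base32()

    def verify_second_factor(user_secret: str, submitted_code: str) -> bool:
        # Check the six-digit code the user entered after their password.
        # valid_window=1 tolerates one 30-second step of clock drift.
        return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)

    if __name__ == "__main__":
        secret = enroll_user()
        code_from_app = pyotp.TOTP(secret).now()  # what the authenticator app displays
        print("MFA check passed:", verify_second_factor(secret, code_from_app))

Even this simple second factor blunts credential-harvesting campaigns, because a stolen password alone is no longer enough to log in.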

By focusing on people, process, and technology, institutions can construct a formidable defense against the complex realm of AI-enabled cybercrime, thereby securing their assets and maintaining the confidence of their clientele. Moving forward, institutions will need to focus on evolving their security solutions to stay ahead of the quickly changing threat landscape.

By: Brett Gilsinger, Endeavor IT CTO