David Warshavski, VP of Enterprise Security at Sygnia

Opinion
ChatGPT and its perilous use as a "Force Multiplier" for cyberattacks

The new AI tool can significantly lower the bar for threat actors to carry out sophisticated attacks, potentially setting the industry back several years, writes David Warshavski, VP of Enterprise Security at Sygnia

Built on OpenAI's technology, ChatGPT can mimic natural language and human interaction with remarkable fluency. From a cybersecurity perspective, however, this also means it can be used in a variety of ways to lower the bar for threat actors.
One key risk is ChatGPT's ability to draft cunning phishing emails en masse. Fed only minimal information, it can generate content and entire emails that lure unsuspecting victims into handing over their passwords. With the right API setup, thousands of unique, tailored, and sophisticated phishing emails can be sent almost simultaneously.

David Warshavski, VP of Enterprise Security at Sygnia (Photo: Guy Lahav)

Another notable capability is writing malicious code. While OpenAI has put some controls in place to prevent ChatGPT from creating malware, it is possible to coax it into producing ransomware and other forms of malware as code that can be copied into an integrated development environment (IDE) and compiled into a working payload. ChatGPT can also be used to identify vulnerabilities in code segments and to reverse engineer applications.
ChatGPT will accelerate a trend that is already wreaking havoc across sectors: lowering the bar for less sophisticated threat actors, enabling them to conduct attacks while evading security controls and bypassing advanced detection mechanisms. For now, there is not much that organizations can do about it. ChatGPT is a technological marvel that will usher in a new era, in cybersecurity and well beyond it.
It will take time for cybersecurity vendors to adapt technologies like ChatGPT and leverage them to automate many of the day-to-day tasks carried out by an already overwhelmed workforce charged with securing organizations' networks. One question worth considering: if ChatGPT can so easily create a phishing email, can it not just as easily detect one? A sketch of that idea follows.
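As a purely illustrative sketch of that defensive direction, the snippet below asks a model to classify a suspicious email. It assumes the openai Python package (v1 or later) and an OPENAI_API_KEY environment variable; the model name, prompt, and one-word verdict scheme are placeholder choices for illustration, not a vetted detection pipeline.

```python
# Illustrative sketch: using a large language model as a phishing-email
# classifier. Assumes the `openai` Python package (v1+) and an
# OPENAI_API_KEY set in the environment; the model name and prompt are
# placeholders, not a production-grade configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def looks_like_phishing(email_text: str) -> bool:
    """Ask the model for a one-word verdict on a suspicious email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an email security analyst. Answer with exactly "
                    "one word, PHISHING or LEGITIMATE, for the email below."
                ),
            },
            {"role": "user", "content": email_text},
        ],
        temperature=0,  # deterministic output suits classification
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("PHISHING")


if __name__ == "__main__":
    sample = (
        "Dear user, your mailbox quota is full. "
        "Click hxxp://example.invalid/login to verify your password."
    )
    print("Phishing?", looks_like_phishing(sample))
```

A real deployment would of course need calibrated thresholds, testing against known phishing corpora, and human review of the model's verdicts, but the symmetry holds: the same fluency that makes these models useful to attackers can be pointed at defense.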
Overall, there are more opportunities here than risks, but as is usually the case, the risks will manifest before we reap the rewards. AI can act as a "force multiplier" for cyberattacks, significantly lowering the bar for threat actors to conduct sophisticated attacks and potentially setting the industry back years, endangering even mature, resilient organizations across the public and private sectors.

David Warshavski is the VP of Enterprise Security at Sygnia, a cyber technology and services company