ChatGPT: the next big cybersecurity threat?

ChatGPT is a powerful language generation model that has the ability to generate natural-sounding text. However, like any technology, it can also be misused for malicious purposes, such as phishing.

Phishing is a tactic used by cybercriminals to trick individuals into providing sensitive information, such as login credentials or financial information, through the use of fake websites or messages. ChatGPT could potentially be used to generate convincing and realistic phishing messages or websites. For example, a cybercriminal could use ChatGPT to generate an email that appears to be from a legitimate source, such as a bank or an online store, and that requests sensitive information from the recipient.

However, it is important to note that ChatGPT is just a tool and ………….

So, how do you think this blog is going so far? Does it read well? Does it come across as authoritative and well-written?

At this point, you may be wondering whether the author of this piece – i.e. me – is feeling somewhat insecure and in need of validation.

In fact, we pose the question because the first three paragraphs of this blog were themselves generated by ChatGPT. The screenshot below shows our request (or ‘prompt’) to the chatbot, and the first part of its response.

The above prompt was made in plain English, and answered in plain English. However, ChatGPT’s capabilities extend far beyond being a ‘talking Wikipedia’.

The text below is a request for ChatGPT to write ransomware – duly answered by the chatbot. More on this below.

What is ChatGPT?

ChatGPT is a new artificial intelligence system that has been causing a sensation in the AI and cybersecurity worlds. Developed by OpenAI, and at this stage available only in beta form, it has security professionals conflicted about the role it will play in the future of cybersecurity.

ChatGPT is an AI model (chatbot) that, according to OpenAI: “interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

As the latest and most advanced version of the Generative Pre-trained Transformer (GPT) system developed by OpenAI, ChatGPT can answer questions, generate content, translate between languages and summarise text, and it does all of this at a much faster rate than previous versions.

Why does ChatGPT have the cybersecurity industry worried?

When it comes to cybersecurity, many in the industry believe the broader use of AI and machine learning is critical to identifying potential threats more quickly.
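As a simplified illustration of what that can look like in practice, the Python sketch below trains a toy phishing-email classifier. The tiny inline dataset and the choice of TF-IDF features with logistic regression are illustrative assumptions only; production systems draw on far richer signals, such as message headers, embedded URLs and sender reputation.

```python
# A minimal sketch of machine learning-assisted phishing detection,
# assuming scikit-learn is installed. The four training emails below are
# a toy dataset invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Label 1 = phishing, 0 = legitimate.
emails = [
    "Your account is locked. Click here immediately to verify your password.",
    "Urgent: confirm your banking details now to avoid suspension.",
    "Meeting moved to 3pm tomorrow; the agenda is attached.",
    "Here are the quarterly figures you asked for last week.",
]
labels = [1, 1, 0, 0]

# Turn each email into TF-IDF word weights, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = "Please verify your password now or your account will be suspended."
print(model.predict([suspect]))        # e.g. [1] -> flagged as likely phishing
print(model.predict_proba([suspect]))  # model confidence for each class
```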

However, there are also concerns that the new technology may be a powerful weapon in the hands of cyber criminals.

Two threats that are of particular concern are:

  • ChatGPT may enable a more sophisticated phishing approach by malicious actors.
  • While ChatGPT includes security protocols designed to identify inappropriate requests, such as requests for instructions on how to build a bomb or write malicious code, researchers have already succeeded in bypassing these protocols.

Phishing

Phishing accounts for around 90% of malware attacks1.

Historically, there has been a distinction between generic phishing and spear phishing.

Generic phishing takes a ‘shotgun’ approach, and works at a massive scale, sometimes sending out millions of lures in the form of emails, text messages, and social media postings. However, these lures take a ‘one size fits all’ approach, making them generic and easy to spot. The result is a very low success rate for the phisher.

In contrast, spear phishing uses social engineering to create highly targeted and customised lures. The result is a much higher yield – but because of the manual work involved in personalising each lure, spear phishing operates only at a small scale.

The worry is that with ChatGPT generating lures, attackers can have the best of both worlds – the volume of generic phishing combined with the customisation and higher yield of spear phishing. ChatGPT can generate unlimited unique variants of the same lure message, and the process can be scripted so that every phishing email sent is unique, as the sketch below illustrates.
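To see why scale becomes trivial, consider the minimal Python sketch below. It assumes the (pre-v1.0) openai client library, an API key, and a deliberately harmless placeholder message; the model name, prompt wording and helper function are illustrative assumptions, not anything taken from the research cited in this article.

```python
# A minimal sketch of automated message-variant generation, assuming the
# pre-v1.0 openai Python library (pip install "openai<1.0"). A harmless
# reminder message is used here, because the point is the loop, not the
# content: any template can be rewritten endlessly.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

MESSAGE = "Hi, just a reminder that your subscription renews next week."

def generate_variants(n: int) -> list[str]:
    """Request n independently worded rewrites of the same message."""
    variants = []
    for _ in range(n):
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=f"Rewrite this message in different words:\n\n{MESSAGE}",
            max_tokens=100,
            temperature=0.9,  # high temperature pushes each rewrite to differ
        )
        variants.append(response.choices[0].text.strip())
    return variants

for i, variant in enumerate(generate_variants(3), start=1):
    print(f"--- Variant {i} ---\n{variant}\n")
```

Changing three variants to three thousand is a one-character edit, which is precisely why researchers fear the volume of generic phishing combined with the quality of spear phishing.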

Research undertaken by WithSecure Labs in January 2023 demonstrated that ChatGPT can2:

  • generate unique variations of the same phishing lure in grammatically correct, human-sounding text
  • generate entire email chains between different people to add credibility to a scam
  • mimic (or ‘transfer’) writing styles – for example, if the criminal has a sample of real messages between different individuals in a business, their prompt to the bot can include an instruction to generate a new message using the style of those previous messages, a technique the WithSecure researchers dubbed ‘text deepfake’
  • generate prompts based on existing content – a technique known as ‘content transfer’ that is particularly useful for attackers with limited English. The scammer can provide an existing phishing message, or a legitimate email, and instruct the bot to: “Write a detailed prompt for GPT-3 that generates the above text. The prompt should include instructions to replicate the written style of the email.” In this way, ChatGPT constructs a prompt for itself that will generate a variation of the original message while preserving the writing style.

Writing code

ChatGPT has security protocols in place to identify inappropriate requests, such as a request to write malicious code. However, researchers have already been able to bypass these measures. They found that if, instead of asking directly, the prompt spells out in detail the steps the malware should perform, ChatGPT will comply, effectively constructing malware on demand.

For example, Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, was able to trick ChatGPT into writing ransomware for macOS.

Dr. Ozarslan said: “Because ChatGPT won’t directly write ransomware code, I described the tactics, techniques, and procedures of ransomware without describing it as such. I told the AI that I wanted to write a software in Swift, I wanted it to find all Microsoft Office files from my MacBook and send these files over HTTPS to my webserver. I also wanted it to encrypt all Microsoft Office files on my MacBook and send me the private key to be used for decryption.”3

The prompt and the first part of ChatGPT’s response are shown in the screenshot above.

Summary

ChatGPT has significant implications for a number of industries, including artificial intelligence, robotics and cybersecurity.

In this article we’ve focused on the challenges for the cybersecurity industry, in particular the potential for chatbots to be employed by cyber criminals to construct sophisticated phishing attacks, and to write malicious code. The threat will require an appropriate response from the cybersecurity industry to mitigate the risks posed by the new technology.

OpenAI plans to launch GPT-4, a much more advanced version of the model underpinning ChatGPT, this year.

Investors can gain exposure to the cybersecurity industry via the Betashares Global Cybersecurity ETF (ASX: HACK) which invests in a portfolio of the leading companies in the global cybersecurity sector.

There are risks associated with an investment in HACK, including market risk, international investment risk, sector risk, concentration risk and currency risk. Each Fund may be subject to higher volatility than the broader market given its sector concentration. An investment in each Fund should only be considered as a component of a broader portfolio. For more information on the risks and other features of each Fund, please see the applicable Product Disclosure Statement available at www.betashares.com.au. A Target Market Determination is also available at www.betashares.com.au/target-market-determinations.

References:

1. https://www.esecurityplanet.com/threats/phishing-attacks/
2. https://labs.withsecure.com/publications/creatively-malicious-prompt-engineering
3. https://www.scmagazine.com/analysis/emerging-technology/how-chatgpt-is-changing-the-way-cybersecurity-practitioners-look-at-the-potential-of-ai


Written by

Richard Montgomery

Manager – Investment Communication


2 comments on this

  1. Yvette  /  9 February 2023

    It will be another source to access malware, but it looks more complex than people wanting to copy and paste code, as they will have to jump through hoops to get there. For the ones that know how to jump through hoops, they would already know how to create code like this and copy paste what they can’t figure out. That’s my opinion, but not sure how it will affect malware in reality.
    Could definitely help phishing.

  2. AKaka  /  13 April 2023

    Sounds cool
