ChatGPT raises cyber security concerns

ChatGPT has gained the world's attention by providing many uses and seemingly "making life easier". However, the AI chatbot isn't all sunshine and rainbows, with cyber security experts raising their concerns about what this means for the future of security.

Over the last few months, ChatGPT has gained worldwide recognition as its extensive knowledge provides a range of uses that make previously manual tasks a lot quicker, including writing emails, essays and more. It would probably do a better job of writing this update...

Earlier this year, we highlighted the rise of ChatGPT and how its capabilities, in the wrong hands, could be used to conduct a series of cyber threats. As daily usage grows and millions of people continue to utilise the chatbot, worries about the future of cybersecurity persist, with cyber experts, including the NCSC, highlighting their concerns. So, how are chatbots changing the security landscape, and how can we avoid potential threats?

What is ChatGPT?


If you haven't heard of it by now, then firstly you should check it out, it's mindblowing.

In all seriousness, ChatGPT is an incredibly smart chatbot.

Whilst many of us use chatbots on a daily basis, whether that's for help, to learn more or to raise a concern with a supplier, it's unlikely you'd use these chatbots to help you professionally.

ChatGPT is the next evolution of AI-based chatbots, utilising long-standing LLM technology in a more sophisticated way. Created by OpenAI and launched in November 2022, it is available publicly online and allows you to interact with the AI using natural language, generating responses in real-time to any questions you may have.

Whatever industry you work in, ChatGPT has the ability to provide quick, intelligent responses that have proved beneficial in telecoms, customer service, sales and marketing.

What are LLMs?

An LLM is an algorithm that has been trained on a large amount of text-based data, typically sourced from the internet, drawing on a range of sources including web pages, scientific research, books and social media posts. Because this covers such a large volume of data, it's not possible to filter out all offensive or inaccurate content at the point of ingestion, so 'controversial' responses are likely to be generated.
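To give a flavour of the idea of "learning from text", here is a deliberately toy sketch: a bigram model that counts which word follows which in a tiny corpus and then generates text from those counts. Real LLMs use neural networks trained on vastly more data, so this is an illustration of the training-on-text concept only, and the corpus and function names are our own invention.

```python
import random
from collections import defaultdict

# Toy corpus standing in for "a large amount of text-based data".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word follows which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 6, seed: int = 0) -> str:
    """Generate text by repeatedly picking a word seen after the current one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The key point the toy model shares with a real LLM: it can only reproduce patterns present in its training data, which is why unfiltered or inaccurate source material leads to unfiltered or inaccurate output.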

The security concerns surrounding ChatGPT and other LLMs

Cyber threats have been growing year-on-year for a long time, with criminals becoming more sophisticated in their attacks and utilising new resources to obtain sensitive information.

In recent years, phishing and ransomware attacks have become the most prevalent, with attackers posing as well-known users within their targeted organisation to encourage employees to provide sensitive information.

Whilst ChatGPT has proved its excellence, we must be mindful that any information we provide to the tool can be absorbed and used again elsewhere, which could lead to severe security risks.

By utilising LLMs, cybercriminals can extend their toolset and create content with malicious intent, such as malware. At present, these tools are better suited to simple tasks than complex ones: for example, helping an expert save time, with the expert then removing any inaccurate or unhelpful information from the generated text.

A skilled hacker can currently write accurate malware faster than an LLM can, although attackers can also coax the tool into producing more capable malware by asking carefully worded questions that steer it towards a more useful answer. Refining prompts until the response suits their needs is a time-consuming process, so for now they are often better off writing the code themselves. However, as LLM technology advances, attackers will gain access to faster, more capable responses for creating malicious content.

There is also a risk that criminals could use LLMs to help with cyber attacks beyond their current capabilities, for instance, once an attacker has accessed a network. If an attacker cannot escalate their access privileges or find data, an LLM may provide answers that give them more context than their original Google search. These answers might help criminals with attacks they failed to execute previously, or suggest actions that help them avoid detection.

LLMs can also help attackers with strong technical capabilities but weak linguistic skills to write convincing phishing emails in the native language of their targets.

In short, LLMs are advanced tools that can assist with manual tasks, but be cautious about the information you provide to these chatbots, as it could end up in the wrong hands.

A real-life example

As mentioned, millions of people access ChatGPT daily, and as the news spread, hackers caught sight of this and used it to their advantage.

Recently, attackers made the news by creating a fake ChatGPT Chrome extension, promoted through malicious Facebook ads as "Quick Access to ChatGPT", that was used to gain access to Facebook accounts. Downloaded thousands of times, the plug-in poses as a quick way to get started with ChatGPT directly from your browser. Although the extension does provide that (by simply connecting to the official ChatGPT API), it also harvests any information it can take from your browser, steals cookies from authorised active sessions, and uses them to take over your Facebook account.

Once the attacker takes ownership of the stolen data, it is likely to be sold to the highest bidder. Guardio's investigation into this revealed that the attackers focused on high-profile Facebook business accounts. With this approach, the campaign can continue promoting itself through thousands of hijacked Facebook bot accounts, publish more sponsored posts and other social activity on behalf of its victims' profiles, and spend their business account credits.

This is a reminder that we must be more vigilant when working online, whether during casual day-to-day browsing or working hours.

Our security recommendations

  • Educate users: One of the best ways to protect against phishing attacks is to educate users about what they are and how to recognise them. Provide regular training sessions to help users identify suspicious emails, messages or websites.
  • Use anti-phishing tools: Anti-phishing tools such as browser extensions, email filters and security software can help protect users against phishing attacks. Make sure that these tools are up-to-date and configured correctly.
  • Pen-testing: By carrying out regular pen testing you can ensure that your security systems are securing your infrastructure correctly. If anything is working incorrectly, you’ll know about it. For instance, tests may reveal that someone has unauthorised access to your network or company device. Or that a hacker can quickly get into your business email inbox.
  • Implement multi-factor authentication (MFA): MFA is a security measure that requires users to provide at least two forms of identification, such as a password and a code sent to their phone, to access their accounts. This can help prevent unauthorised access to user accounts.
  • Keep software up to date: Phishing attacks can exploit vulnerabilities in software. Keep all software, including operating systems, web browsers and plugins, up to date with the latest security patches.
  • Be cautious with personal information: Encourage users to be cautious about sharing personal information online, especially in response to unsolicited messages or emails.
  • Establish reporting procedures: Establish procedures for users to report suspected phishing attacks. This will allow IT teams to quickly respond to any potential threats and take action to mitigate them.



ChatGPT and LLMs: what's the risk - NCSC.GOV.UK



Want to ensure your organisation is secure?

Your transformation starts here.