A few short months ago AI chatbots and ChatGPT were all the tech and mainstream news seemed to be talking about! But of course, the threat actors were reading and listening to all that news and have been developing attacks that will exploit the victims new love of AI.
UK cybersecurity agency warns of chatbot ‘prompt injection’ attacks | The Guardian
A “prompt injection” attack is one where a chatbot is manipulated by a threat actor, so the chatbot responds to a user’s request in an unforeseen, malicious manner.
The National Cyber Security Centre (NCSC) has reported there are growing cyber security risks for ordinary chatbot users through such manipulations and attacks. These chatbot systems are increasingly being used to pass user information onto third party systems, so increasing the security risk.
Just another thing to think about when it comes to ChatGPT and other LLM AI systems. What gets put in, can come back out again…
Which brings me to my point. What are your team putting into these AI systems that is your intellectual property?
Clive Catton MSc (Cyber Security) – by-line and other articles