Be careful what you give to ChatGPT – a new type of intellectual property leak.

The artificial intelligence (AI) system ChatGPT has been in the news recently – a lot – for producing essays, academic papers, poetry (I am sure someone used it for Valentine’s Day), homework, even hacking.

But here is a new cyber security risk and data leak that I am sure few have really considered.

Before we start, you need to know that AI systems can use input and responses from previous interactions to formulate future responses to new problems. OK, so what, you say.

Well, if the AI is used to refine code, and the user inputs proprietary code into the AI system, that code may then be available to the AI when someone else asks, “suggest code to me that will do this”. Hence Amazon company secrets turning up in unexpected places thanks to ChatGPT.
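If you do allow staff to paste code into an external AI service, one partial mitigation is a pre-submission check that flags obvious secrets before anything leaves your network. The sketch below is a minimal, hypothetical example – the patterns and the `flag_secrets` helper are my own illustration, not part of any real tool, and a production filter would need a far richer pattern list.

```python
import re

# Hypothetical patterns suggesting proprietary or secret material.
# Extend this list to suit your own codebase and naming conventions.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[=:]\s*\S+"),        # hard-coded API keys
    re.compile(r"(?i)password\s*[=:]\s*\S+"),           # embedded passwords
    re.compile(r"\b[\w.-]+\.internal\.example\.com\b"), # internal hostnames (example domain)
]

def flag_secrets(code: str) -> list[str]:
    """Return the lines of `code` that match any secret pattern."""
    flagged = []
    for line in code.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            flagged.append(line)
    return flagged

snippet = 'api_key = "sk-12345"\nprint("hello")'
print(flag_secrets(snippet))  # the api_key line is flagged
```

A check like this catches only the obvious cases – business logic and algorithms are just as much intellectual property as credentials, and no regex will spot those, which is why a policy decision (below) still matters more than a filter.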

Amazon Begs Employees Not to Leak Corporate Secrets to ChatGPT (futurism.com)

Is it time for a new policy and procedure to be added to your Cyber Security Master Document prohibiting the use of AI systems that are outside of your control, before your intellectual property turns up in the public domain?

Clive Catton MSc (Cyber Security) – by-line and other articles

Further Reading

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it (theconversation.com)