ChatGPT has been used for homework, writing radio programmes, academic papers, articles, coding, hacking and much more. Here are some of the most recent articles examining privacy and cyber crime using artificial intelligence. Of these, adaptive, convincing email scams are probably the issue organisations most need to think about, as their filtering systems struggle to deal with human-looking emails.
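To illustrate why filtering systems struggle, here is a minimal, hypothetical sketch of a naive keyword-based filter. The phrases, function name and sample emails are all invented for illustration; real filters are far more sophisticated, but the underlying problem is the same: fluent, context-aware AI-generated text contains none of the crude tells that rules key on.

```python
# Hypothetical sketch: a naive keyword-based email filter.
# Illustrative only - phrases and samples are invented.

SUSPICIOUS_PHRASES = {"winner!!!", "claim your prize", "urgent wire transfer"}

def naive_filter(email_text: str) -> bool:
    """Return True if the email is flagged as suspicious."""
    text = email_text.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

# A crude, traditional scam trips the keyword rules.
crude_scam = "URGENT WIRE TRANSFER needed - you are a WINNER!!!"

# A polished, AI-style scam reads like routine business email.
polished_scam = (
    "Hi Sarah, following up on yesterday's call - finance asked me to "
    "send you the updated account details before the payment run today."
)

print(naive_filter(crude_scam))     # True  - flagged
print(naive_filter(polished_scam))  # False - slips through
```

The second message carries the same fraudulent intent but no obvious markers, which is why human awareness training matters alongside technical controls.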
Brace Yourself for a Tidal Wave of ChatGPT Email Scams | WIRED
Of course there is going to be a Chinese version:
AI: China tech giant Alibaba to roll out ChatGPT rival – BBC News
If governments hate TikTok, they will really hate Tongyi Qianwen.
Then there is your privacy:
But it all comes back to cyber crime in the end. Threat actors are always looking for an advantage or weakness that gives them an opportunity to exploit someone or something, and if ChatGPT can assist with this, then they will use it…
Cybercrime: be careful what you tell your chatbot helper… | Chatbots | The Guardian
…and you need to keep up.
Training your team to recognise social engineering attacks, whether human or machine generated, is a good place to start.
Clive Catton MSc (Cyber Security) – by-line and other articles