We all love to make fun of politicians and other people in the public eye, don’t we? Personally, I am very partial to a dig via Spitting Image puppets or a clever impersonator, but things tipped over for me when shows featured lookalikes up to nefarious deeds, filmed by a “hidden” camera. That was only one step away from actually spying on people, and since then, of course, we have had the police scanning people’s faces as they walk down the street, as well as the infamous Hancock hug.
Online, of course, things have gone much further. Easily available software can be used to study somebody’s voice patterns so that existing videos can be doctored, with new words synced to the speaker’s filmed mouth movements. Or their face can be superimposed on somebody else’s body. The technology is now so advanced that mere humans cannot tell the difference, and artificial intelligence (AI) combined with machine learning (ML) has to be used to detect the fake. So we have the unedifying scenario of AI and ML being used both to deploy the deep fake and to detect it! What chance do we have?
As always, be on your guard. It is only a matter of time before deep fake technology is used as commonly as whale phishing to target your accounts department, creating a convincing voice or video message from your CEO instructing them to pay an urgent invoice to a crook.
In the United States, a federal bill, the Malicious Deep Fake Prohibition Act, was introduced in 2018, and California and Texas have passed legislation prohibiting deep fakes during elections, although it is difficult to see how these laws can be policed.
So, all you can do is use your common sense to judge whether somebody is acting in character. Make sure that staff are trained, especially new staff, who are the most likely victims of any kind of social engineering because they are unfamiliar with the company, its employees and its procedures.
Diana Catton MBA