“CAN you comment about AI-generated deepfakes and fraudulent content?”
This is a common question I am asked by the media.
This is followed up with “How does the layman smartphone user spot them?”
Generally speaking, deepfakes are synthetically created digital reproductions that have been manipulated to deceptively and convincingly replace one person’s or an object’s likeness with that of another. They are powerful tools available to criminals to perpetrate disinformation, scams, cyberbullying, election rigging, revolutions, political interference and other online harms and threats to information security.
Today, very convincing deepfakes can be created using artificial intelligence. Generative AI trained on big data sets can produce deepfakes that most of the population cannot distinguish from the real thing, and even the best experts are unable to detect them without some form of digital forensics, which takes time.
The case of a Hong Kong company scammed of US$25 million, after scammers used a deepfake video conference call in which they posed as the company’s CFO to convince a finance worker to transfer the money, shows just how convincing and effective deepfakes are in the wrong hands. There have been many cases of voice deepfakes used in kidnapping scams, and email deepfake scams have shown how sophisticated this technology is at fooling nearly anyone.
Unregulated, this technology can, in evil hands, interfere in elections, instigate riots and threaten the peace, stability and harmony of any nation, without exception.
There are several types of deepfakes, involving video, pictures, text and voice. These range from shoddy amateur jobs, with shadow errors, visible joins, unnatural looks, broken voices, poor sentences, bad grammar and spelling and shoddy letterheads, to the highly sophisticated. Suffice to say that with AI widely available, the standard of deepfakes will evolve to the point where it is impossible for the average person to tell, from the video, picture or voice alone, that it is a deepfake.
This then brings us to the issue of how people can protect themselves. As with combating scams, a multi-pronged approach is required. This approach requires government intervention to ensure that legal responsibility is fairly allocated between those who own the technology and profit the most from it, and those who use it.
Technology alone is insufficient to effectively combat deepfakes. It must be matched with robust risk, liability and loss adjustment policies.
The following matters should be considered to protect the public and governments against criminal activity using deepfakes.
1. Raise awareness among the population that anything digital can be intercepted and manipulated. In the digital space, you can no longer entirely believe what you see, hear or read; you need to process that information with critical thinking and analysis (easier said than done) and, most importantly, independent verification. The same SOP you apply to online scams should be adopted, and these have been written about previously, including an appreciation of the four essential elements of scams: (i) anonymity; (ii) access to a telecommunications network; (iii) access to a payment account or payment system; and (iv) targeting information.
2. All companies or persons that provide AI services capable of generating deepfakes must be required by law to register and identify those who use the service. In addition, the provider must ensure that all AI outputs generated by its system are watermarked, and it must also provide the necessary tools to detect any output generated from its AI system. One AI services company has publicly confirmed that it has built a text watermarking method to detect its own AI output, at least for written content. The tool adds a pattern to how the large language model (LLM) writes its output; the pattern is unnoticeable to humans and therefore does not affect the quality of the text, yet it can be detected. This will enable any AI-generated text, and ultimately pictures, video and voice, to be identified by the system owner as a likely deepfake generated from its system. Failure to do so must result in substantial financial penalties.
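For readers who want a concrete sense of how such detection can work, the following is a minimal Python sketch of one well-known published approach, a statistical “green-list” watermark. The vendor’s actual method has not been made public; the hashing scheme, vocabulary split and score threshold below are illustrative assumptions only.

```python
import hashlib
import math

# Toy sketch of a statistical "green-list" text watermark, one published
# approach to the kind of detection described above. The vendor's actual
# scheme is not public; the hashing, vocabulary split and thresholds here
# are illustrative assumptions only.

GREEN_FRACTION = 0.5  # fraction of the vocabulary treated as "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Deterministically place roughly half of all tokens on the green list,
    keyed on the previous token (a stand-in for the detector's secret key)."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def z_score(tokens: list[str]) -> float:
    """How many standard deviations the observed green-token count sits above
    what unwatermarked (human) text would show by chance."""
    n = max(len(tokens) - 1, 1)
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    p = GREEN_FRACTION
    return (hits / n - p) / math.sqrt(p * (1 - p) / n)


# A watermarking generator quietly biases its word choices toward green tokens,
# so its output scores well above roughly 3, while ordinary human text hovers near 0.
sample = "the quick brown fox jumps over the lazy dog".split()
print(round(z_score(sample), 2))
```

The point of the design is that detection needs only the text and the system owner’s secret key, which is why the obligation to provide detection tools can sit with the AI service provider rather than the public.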
3. All platforms that host AI-generated material must ensure that all deepfakes or AI-generated output are clearly marked with a notification that the output is AI-generated. The companies that provide AI services must give the platforms the tools to detect their own watermarks, or a contractual or legal means to compel the users of their services to do so. This will require a legal obligation in the contractual terms and conditions of use of an AI service to disclose that the output was facilitated through AI services, as well as other terms and conditions to prevent misuse.
4. AI tools that have been developed to detect the probability of deepfakes must be made available to all governments, and ultimately to the public, so that their smartphones are genuinely “smart” enough to warn a user of the probability that a video or text is a deepfake. Such detection tools are not available to the public now, but the recent admission by an AI services company that its tool can detect another AI services company’s output suggests it can probably be done.
5. AI use must be legally regulated like over-the-top (OTT) platforms and be subject to the same licensing controls in so far as cybersecurity is concerned.
6. The public must adopt the same measures as for dealing with scams, including a mandatory 24- or 48-hour cooling-off period, and the imposition of digital insurance (which the service provider must bear) to protect against scams using deepfakes.
7. Secure communications technology should be used as a protocol for independent verification. For example, a finance worker, despite getting an audio and visual instruction to transfer millions, will not do so unless he receives a further call or code on a secure communications platform.
8. Always verify independently, preferably on a different device, using official contact information. For businesses or corporations, internal financial payment policy should require that where payment instructions above certain amounts are given digitally, an independent code or keyword must be communicated through an established secure communications platform; such platforms and secure communications applications are available. For the individual member of the public, simply calling back the person you thought you were talking to or video conferencing with, on a number you know is theirs or have independently verified as theirs, preferably using a different device, will help reduce identity-spoofing scams that may use deepfake voice. The same applies to email or video deepfakes. This alone may not be enough, so asking questions that you expect the other person to know will go a long way in determining whether you are dealing with a deepfake.
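As a concrete illustration of the out-of-band verification described in points 7 and 8, the following Python sketch shows a payment approval check that accepts a high-value instruction only when a one-time code is confirmed over a separate, pre-agreed secure channel. The threshold, names and amounts are hypothetical, not any specific product or policy.

```python
import hmac
import secrets

# Illustrative sketch of an out-of-band check: a high-value payment instruction
# received on one channel (email, call or video conference) is acted on only
# after a one-time code is echoed back on a separate, pre-agreed secure channel.
# The threshold and function names are assumptions for illustration only.

HIGH_VALUE_THRESHOLD = 50_000  # instructions above this need second-channel confirmation


def issue_verification_code() -> str:
    """Generate a one-time code to be sent to the requester on the secure channel."""
    return secrets.token_hex(3)  # e.g. 'a3f91c'


def approve_payment(amount: float, issued_code: str, code_from_secure_channel: str) -> bool:
    """Approve low-value payments; otherwise require the out-of-band code to match."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True
    # constant-time comparison avoids leaking the code through timing differences
    return hmac.compare_digest(issued_code, code_from_secure_channel)


code = issue_verification_code()
# The finance worker sends `code` over the pre-agreed secure app and transfers
# only if the person giving the instruction reads the same code back there.
print(approve_payment(2_000_000, code, code))      # True: confirmed out of band
print(approve_payment(2_000_000, code, "ffffff"))  # False: a deepfake caller cannot echo it
```

The value of the second channel is that a scammer who can fake a voice or a video call still cannot see the code sent over the independent, verified channel, so the deepfake alone is not enough to move the money.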
9. Enhance copyright and data protection laws to ensure that personal data belongs to its owner. Its misuse or unauthorised use should carry substantial penalties and give rise to civil legal claims against those responsible for misusing it, as well as against the platforms that carry it where they knowingly or recklessly allow it, or do not take adequate steps to address the mischief after complaints have been made. This would include, once a deepfake has been detected following a complaint, promptly removing the deepfake and taking measures to notify the public of it. This is extremely important in cases involving public order or elections.
10. Like OTT regulation, AI regulation and the control of deepfakes must be made more effective through regional cooperation and a common stand in Asean. – August 20, 2024
Cybersecurity law expert Derek Fernandez is also a commissioner of the Malaysian Communications and Multimedia Commission (MCMC).