KUALA LUMPUR – Tackling the risks posed by artificial intelligence (AI) technologies will be far more difficult if industry players and regulators fail to develop an understanding of the technology itself.
BlackBerry Cybersecurity’s senior director of strategic technical solutions Jonathan Jackson said this was because the threats arising from the wrongful use of AI cannot be comprehensively addressed without deploying the technology itself.
“Humans are too slow. We cannot rely on humans alone to mitigate the risks (presented by AI). We need to use AI to fight AI,” Jackson said when speaking as a panellist at the 2024 International Regulatory Conference (IRC), here today.
“If you haven’t started to use AI to mitigate cyber threats, you’re at least three to five years behind the rest of the world. There’s no way you’ll be able to keep up with the risks, so you effectively need to use AI to fight (its own threats).”
Jackson, who specialises in studying the impacts of advanced adversarial technology, pointed out that the dangers presented by unscrupulous use of AI include advanced phishing and deepfake attacks.
Deepfake is the term for media that has been digitally manipulated to believably replace one subject’s likeness with that of another; it can also refer to computer-generated images of individuals in situations that never occurred in real life.
“By using AI, the barrier to entry for cyber crime has never been lower. Now, you don’t need to be particularly good at coding to create highly sophisticated attacks of a kind we have never seen before,” Jackson said.
He also said that while the general public might assume AI first emerged in 2022 with the launch of the AI chatbot ChatGPT, cyber-related industries had in fact been using the technology long before that.
Jackson was part of a session on shaping the future of AI, which was moderated by Malaysian Communications and Multimedia Commission (MCMC) chief technology and innovation officer Shamsul Izhan Abdul Majid.
Meanwhile, fellow panellist Vanessa Wilfred, assistant director of AI governance at Singapore’s Infocomm Media Development Authority (IMDA), stressed that an AI regulation ecosystem based on trust is vital, as it will facilitate efforts to promote innovation.
“If people are suspicious of technology and think that it will not benefit them, then they’re less likely to adopt it,” she said, adding that IMDA views it as a priority to go beyond just guidelines and frameworks when it comes to developing AI regulations.
She added that while industry figures can agree in principle that it is important to be transparent, accountable and fair, how to implement such principles remains “a bit of a question mark”.
Also on the panel were GSM Association (GSMA) Asia Pacific head Julian Gorman, YTL Communications chief executive officer Wing K. Lee and Telekom Malaysia chief network officer Mohamed Tajul Mohamed Sultan.
Commenting on public awareness of AI, Tajul said there is a pressing need to make clear to the public that while AI can be misused to perpetrate fraudulent schemes against unsuspecting individuals, it can also be used to better society through digital advancement.
The role of imparting such information to the public should be shouldered by all players in the digital industry, so that no one can claim ignorance of the ethics of AI usage, he added.
Organised by MCMC, the two-day IRC, held at the St Regis Hotel here, featured international speakers who shared insights and engaged in dialogue on regulatory frameworks and emerging technologies. – May 7, 2024