KUALA LUMPUR – Authorities in Putrajaya have been urged to go beyond mere licensing and actively encourage social media platforms to harness artificial intelligence (AI) in the fight against online fraud, scams, cyberbullying, and sexual crimes perpetrated by individual users against consumers.
Speaking to Scoop, Federation of Malaysian Consumers Associations vice president Datuk Indrani Thuraisingham emphasised that this approach would also help restore consumer trust in social media platforms in the years to come.
Indrani’s comments followed Putrajaya’s recent move through the Malaysian Communications and Multimedia Commission (MCMC) to require licensing for social media platforms and instant messaging services, effective January 1.
On August 1, MCMC announced it was in the process of holding stakeholder sessions to finalise a regulatory framework for the required annual licences – aimed at curbing cyberbullying, scams, and other illegal online activities.
Communications Minister Fahmi Fadzil has clarified that the regulation will apply only to service providers, not individual users.
“AI can automatically detect and remove harmful content such as hate speech, cyberbullying, and sexually explicit material involving minors. These systems can learn to identify patterns and quickly flag inappropriate behaviour.
“In addition, AI can analyse user behaviour and transaction patterns to detect and prevent fraudulent activities. For example, unusual account activity or rapid changes in account behaviour can trigger alerts.
“Natural Language Processing can be used to monitor and analyse text-based communications for signs of cyberbullying, grooming, or other harmful behaviours.
“This technology can help identify harmful exchanges and enable intervention in real time. By analysing the sentiment of messages, platforms can detect negative or threatening language and take appropriate action to protect users.
“Blockchain technology can also be used to create transparent and immutable records of transactions and interactions. This can help verify the authenticity of accounts and reduce the risk of fraud.
“Moreover, blockchain-based identity solutions can provide users with more control over their personal information and help prevent identity theft and online scams,” Indrani said when contacted.
Indrani, who is also the National Consumer Complaints Centre (NCCC) chief executive officer, further highlighted the importance of safety campaigns run by social media platforms to educate users – particularly those venturing into online business – in order to create a safer environment for consumers.
“Implementing biometric authentication – such as facial recognition or fingerprint scanning – can add an extra layer of security, making it harder for unauthorised users to access accounts.
“End-to-end encryption ensures that user data is protected from unauthorised access, which is crucial for safeguarding sensitive information and preventing data breaches.
“Platforms can offer robust parental control tools that allow parents to monitor their children’s online activities, set usage limits, and block inappropriate content.
“Social media platforms can run educational campaigns to teach users about online safety, how to recognise scams, and how to report inappropriate behaviour through easy-to-use reporting channels.
“Both relevant agencies and platforms should provide interactive and engaging training modules on topics like cybersecurity, online etiquette, and personal safety, empowering users to protect themselves and others.
“Platforms should also employ behavioural analysis systems to detect unusual user behaviour that may indicate fraudulent activity or potential grooming behaviour.
“Anomalies, such as sudden changes in messaging patterns or friend requests, should trigger further investigation.
“By integrating these technologies and solutions, social media platforms can create a safer online environment, protecting users from fraud, cyberbullying, and sexual crimes against children while maintaining user trust and engagement,” Indrani added.
In 2023, 34,495 online fraud cases were reported, according to Bukit Aman’s commercial crime investigation department (CCID).
In the last four years, the CCID also reported a staggering RM1.4 billion in losses due to online fraud cases.
Besides online scams, MCMC also received 9,483 reports of cyberbullying from January 2022 until June this year.
From now until December 31, MCMC will be holding engagement sessions with stakeholders – including social media and messaging system providers – to establish a code of conduct and outline actions for non-compliance with the new licensing requirements.
Failure to comply with the licensing directive could result in charges under Section 126 of the Communications and Multimedia Act 1998, which carries a maximum penalty of a RM500,000 fine, five years in prison, or both.
Providers could also be fined RM1,000 for each day the offence continues. – August 15, 2024