KUALA LUMPUR – In a major move to protect election integrity, the European Union (EU) wielded its newly enacted digital law to address potential threats posed by AI, including deepfakes, on popular platforms such as TikTok and Facebook.
Under the EU’s Digital Services Act (DSA), the European Commission sent requests for information to TikTok, Facebook, Instagram, X, Google, YouTube, Snapchat, and Bing about the measures they are taking to mitigate the risks associated with AI, particularly its potential to manipulate elections through techniques such as deepfakes, AFP reported.
Simultaneously, the commission announced the commencement of a formal investigation into AliExpress, a Chinese internet retailer, for suspected violations of the DSA, including the sale of illegal products and failure to prevent minors from accessing explicit content.
Brussels also directed inquiries to LinkedIn, Microsoft’s professional network, seeking clarification on how users’ personal data is used for targeted advertising.
The focus on generative AI and its potential impact on electoral processes stems from concerns regarding its ability to disseminate false information, create deepfakes, and manipulate services to influence voters, as highlighted in a statement by the commission.
The information requests sent to the platforms and to LinkedIn are preliminary and do not imply that enforcement action is imminent. The formal probe into AliExpress, however, gives the EU the authority to examine the platform’s internal operations and processes, and could result in significant penalties, including fines of up to six per cent of global turnover or, in severe cases, suspension of the service.
EU officials emphasised the urgency of addressing AI risks ahead of the upcoming European Parliament elections, stressing that both regulatory measures and platform readiness are needed to counter potential threats.
This move follows the European Parliament’s recent adoption of the Artificial Intelligence Act, which aims to ensure that AI technologies are safe, respect fundamental rights, and support innovation.
The regulation, which received overwhelming support from MEPs, prohibits certain AI applications that pose threats to citizens’ rights, such as biometric categorisation systems based on sensitive characteristics and the untargeted collection of facial images for recognition databases.
Moreover, the legislation establishes stringent obligations for high-risk AI systems across various sectors, including critical infrastructure, education, healthcare, and law enforcement, to mitigate potential harms and ensure transparency and accountability.
Key provisions include the prohibition of AI systems that manipulate human behaviour or exploit vulnerabilities, clear labelling requirements for manipulated media content (deepfakes), and the establishment of regulatory sandboxes for testing innovative AI solutions.
With the AI Act set to be formally adopted pending final checks and endorsement by EU member states, its implementation marks a significant step towards safeguarding democratic processes, protecting fundamental rights, and promoting responsible AI innovation within the EU. – March 14, 2024