Chinese social media companies have begun requiring users to label AI-generated content uploaded to their services in order to comply with new government legislation. By law, the sites and services now need to apply a watermark or explicit AI indicator for users, as well as embed metadata that lets web crawlers distinguish human-made content from AI-generated content, according to SCMP.
Countries and companies the world over have been grappling with how to handle AI-generated content since the explosive growth of popular AI tools like ChatGPT, Midjourney, and DALL-E. After drafting the rules in March, China has now brought them into force, taking the lead in AI oversight with a labeling law that makes social media companies more responsible for the content on their platforms.
Chinese officials claim the law is designed to help combat AI misinformation and fraud, and it applies to all the major social media firms. That includes Tencent Holdings’ WeChat – a Chinese WhatsApp equivalent – which has over 1.4 billion users, and ByteDance’s TikTok alternative, Douyin, which has around a billion users of its own. Social media platform Weibo, with its 500 million-plus monthly active users, is also affected, as is social media and e-commerce platform Rednote.
Each platform has posted a notice in recent days reminding users that anyone uploading AI-generated content is required by law to label it as such. The platforms also offer options for users to flag AI-generated content that is not correctly labeled, and reserve the right to delete anything uploaded without appropriate labeling.
The Cyberspace Administration of China (CAC), the governing body, has also announced undisclosed “penalties” for those found using AI to disseminate misinformation or manipulate public opinion, with particular scrutiny said to be placed on paid online commentators.
Although China is the first major country to implement an AI content labeling system through legislation, similar conventions are being considered elsewhere, too. Just last week, the Internet Engineering Task Force proposed a new AI header field that would use metadata to disclose whether content was AI generated. That wouldn’t necessarily make it easier for humans to tell the difference, but it would give algorithms a heads-up that what they’re crawling may not be human-crafted.
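The idea behind such a header is simple: a server attaches a machine-readable flag to its HTTP response, and a crawler reads it without having to analyze the content itself. As a rough sketch (the header name `AI-Disclosure` and its values here are illustrative assumptions, not the actual syntax of the IETF draft):

```python
# Hypothetical illustration of an AI-disclosure HTTP header.
# The "AI-Disclosure" field name and its values are assumptions
# for demonstration, not the IETF draft's real syntax.

def build_response_headers(ai_generated: bool) -> dict:
    """Server side: attach a machine-readable AI-content flag."""
    headers = {"Content-Type": "text/html; charset=utf-8"}
    if ai_generated:
        headers["AI-Disclosure"] = "ai-generated"
    return headers

def crawler_sees_ai_content(headers: dict) -> bool:
    """Crawler side: check the flag (header names are case-insensitive)."""
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("ai-disclosure") == "ai-generated"

print(crawler_sees_ai_content(build_response_headers(True)))   # True
print(crawler_sees_ai_content(build_response_headers(False)))  # False
```

The point of putting the flag in a header rather than in the page body is that a crawler can classify the response from metadata alone, which is exactly the kind of machine-readable disclosure China's law also requires platforms to embed.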
Google’s new Pixel 10 phones also implement C2PA content credentials for the camera, which can help users tell whether an image was edited with AI. Although there are already numerous reports of users circumventing them, such safeguards are becoming more common.
With China now implementing stricter AI controls, it may not be long before we see something similar in Western countries.
Follow Tom’s Hardware on Google News, or add us as a preferred source, to get our up-to-date news, analysis, and reviews in your feeds. Make sure to click the Follow button!