Meta to Label AI-Generated Content as “Imagined with AI”
Meta’s president of global affairs, Nick Clegg, announced that the company will soon label AI-generated content on Facebook, Instagram, and Threads, distinguishing it from human-made content.
Is it AI? The line between AI and human-made content can be blurry, but labeling it shouldn’t be. We’re rolling out industry-leading practices that will identify AI-generated images across @facebook, @instagram and @threadsapp.
— Meta Newsroom (@MetaNewsroom) February 6, 2024
The move is intended to give users more transparency about the content they engage with. Additionally, Meta is collaborating with industry partners to establish technical standards for identifying AI-generated or deepfake content, potentially involving metadata or invisible watermarks.
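One existing standard in this space is IPTC’s Digital Source Type property, whose value `trainedAlgorithmicMedia` marks fully AI-generated media. The sketch below shows the metadata idea in miniature: scanning a file’s raw bytes for that IPTC marker. The function name and simplified byte-search are illustrative assumptions, not Meta’s actual detection pipeline, which also relies on invisible watermarks and cross-platform signals.

```python
# Illustrative sketch: flag a file whose embedded metadata carries the
# IPTC Digital Source Type value for AI-generated media. Real pipelines
# parse XMP/C2PA structures properly; a raw byte search is a toy stand-in.

AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(file_bytes: bytes) -> bool:
    """Return True if the raw file bytes contain the IPTC AI marker."""
    return AI_MARKER in file_bytes

# Synthetic example: a fake file with the marker embedded in an XMP block.
fake_xmp = b"<Iptc4xmpExt:DigitalSourceType>" + AI_MARKER + b"</Iptc4xmpExt:DigitalSourceType>"
fake_file = b"...image header..." + fake_xmp

print(looks_ai_generated(fake_file))        # True
print(looks_ai_generated(b"plain photo"))   # False
```

Metadata like this is easy to strip, which is why the article notes that invisible watermarks are also under discussion as a more tamper-resistant complement.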
These measures may prompt other social media platforms to develop similar tools to combat deepfakes and ensure content authenticity.
The Environmental Impact of AI’s Rising Demand
As the demand for AI surges, concerns about its environmental sustainability grow. Experts warn that meeting AI’s electricity demands could prolong reliance on coal power, conflicting with global efforts to reach net zero. Despite investments in green energy, the electricity demands of AI, particularly generative AI serving millions of users worldwide, pose a challenge.
With AI becoming ubiquitous, energy consumption has skyrocketed, as evidenced by the substantial energy requirements for training models like GPT-3 and BLOOM.
For instance, GPT-3 alone is estimated to consume 564 MWh per day computing answers to user prompts, highlighting the significant energy footprint of AI technologies.
This consumption equates to powering tens of thousands of households or millions of kilometres driven by electric vehicles, underscoring the urgent need for energy-efficient AI solutions.
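The household and vehicle comparisons can be sanity-checked with rough arithmetic. The 564 MWh/day figure comes from the article; the per-household and per-kilometre consumption values below are assumptions chosen for illustration, not figures from the source.

```python
# Back-of-envelope check of the article's energy comparisons.
# Assumed reference values (not from the article):
#   - an average household uses ~10.5 MWh of electricity per year
#   - an electric vehicle uses ~0.18 kWh per kilometre

GPT3_DAILY_MWH = 564                       # estimate quoted in the article
annual_mwh = GPT3_DAILY_MWH * 365          # 205,860 MWh per year

HOUSEHOLD_ANNUAL_MWH = 10.5
households = annual_mwh / HOUSEHOLD_ANNUAL_MWH

EV_KWH_PER_KM = 0.18
ev_km = (annual_mwh * 1000) / EV_KWH_PER_KM  # MWh -> kWh, then km

print(f"{annual_mwh:,.0f} MWh/year")       # 205,860 MWh/year
print(f"~{households:,.0f} households")    # roughly 20,000 households
print(f"~{ev_km:,.0f} km of EV driving")   # on the order of a billion km
```

Under these assumptions the numbers land in the “tens of thousands of households” range the article cites, and well past the “millions of kilometres” mark for electric vehicles.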
UK Government’s Response to AI Regulation Consultation
The UK government has released its response to consultations on AI innovation and regulation, following a 12-week consultation involving international stakeholders, including OpenAI.
Michelle Donelan, Secretary of State for Science, Innovation, and Technology, emphasises the potential of AI to revolutionise public services. The UK’s approach focuses on context-based regulations tailored to specific AI applications, contrasting with the EU’s fixed risk-based framework. Regulatory bodies are already implementing principles outlined in the white paper, such as the CMA’s review of foundation models and the ICO’s updated guidance on fairness in data protection laws for AI systems.
Additionally, the government is investing in AI skills and talent initiatives and addressing concerns about copyright protections and trust in online AI-generated content.