Tools & Platforms
Microsoft to sign EU AI code while Meta refuses compliance

Microsoft president Brad Smith said on July 18 that the technology company will likely sign the European Union’s voluntary code of practice for general-purpose artificial intelligence models, while Meta Platforms announced it would refuse to participate in the compliance framework.
According to Smith’s remarks to Reuters, Microsoft sees the voluntary guidelines as an opportunity for industry engagement with regulators. “I think it’s likely we will sign. We need to read the documents,” Smith stated during an interview on July 18. The Microsoft president emphasized his company’s goal of finding “a way to be supportive” while welcoming “the direct engagement by the AI Office with industry.”
The divergent responses highlight the technology sector’s fractured approach to European artificial intelligence regulation. The code of practice, published on July 10 by the European Commission, aims to provide legal certainty for companies developing general-purpose AI models ahead of mandatory enforcement beginning August 2, 2025.
The comprehensive framework addresses three primary areas: transparency obligations, copyright compliance, and safety measures for AI systems. Model providers must implement detailed documentation requirements throughout the development lifecycle, establish copyright compliance policies, and conduct systemic risk assessments for models exceeding computational thresholds.
According to the code’s technical specifications, general-purpose AI models are defined as systems capable of performing various tasks across multiple domains. The framework establishes specific computational thresholds measured in floating-point operations to determine which models fall under regulatory scope. Models requiring more than 10²³ FLOPs during training must comply with basic obligations, while systems exceeding 10²⁵ FLOPs face enhanced systemic risk requirements.
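The two compute thresholds above can be expressed as a simple classification rule. The following is an illustrative sketch only, using the threshold values cited in this article; the function and tier names are hypothetical, not official Commission tooling.

```python
# Illustrative sketch: classify a general-purpose AI model against the
# training-compute thresholds described in the EU code of practice.
# Threshold values are those cited in the article; names are hypothetical.

BASIC_OBLIGATIONS_FLOPS = 1e23   # above this: basic GPAI obligations
SYSTEMIC_RISK_FLOPS = 1e25       # above this: enhanced systemic-risk obligations

def classify_model(training_flops: float) -> str:
    """Return the regulatory tier implied by a model's training compute."""
    if training_flops > SYSTEMIC_RISK_FLOPS:
        return "systemic-risk obligations"
    if training_flops > BASIC_OBLIGATIONS_FLOPS:
        return "basic obligations"
    return "below GPAI compute threshold"

# Example: a model trained with ~3 x 10^24 FLOPs would fall under the
# basic obligations tier but not the systemic-risk tier.
print(classify_model(3e24))
```

In practice, classification under the AI Act also depends on capability criteria and Commission designation, not compute alone; the sketch captures only the numeric thresholds quoted here.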
Documentation obligations require providers to maintain current technical information for downstream providers and regulatory authorities. The code specifies that providers must “publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law.”
The framework’s Safety and Security chapter establishes protocols for incident reporting and risk mitigation. Providers must implement continuous monitoring systems to identify potential malfunctions or security vulnerabilities. The code defines serious incidents as malfunctions leading to events specified in Article 3(49) of the AI Act, requiring immediate notification to regulatory authorities.
Meta’s refusal to participate signals broader industry concerns about regulatory overreach. Chief global affairs officer Joel Kaplan criticized the code in a LinkedIn post on July 18, stating that Meta “won’t be signing it” due to “legal uncertainties for model developers” and measures that “go far beyond the scope of the AI Act.”
Kaplan cited support from 45 European companies sharing similar concerns about regulatory impact on AI development. “We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them,” he wrote.
The statement extends warnings Kaplan made earlier in 2025, when he indicated Meta would seek Trump administration intervention if Brussels imposed unfair penalties. These tensions reflect broader transatlantic technology policy disputes as companies navigate competing regulatory frameworks.
OpenAI and Mistral have already signed the voluntary code, demonstrating varied industry approaches to European compliance. The disparate responses suggest technology companies are adopting different strategies for managing regulatory relationships across global markets.
The AI Act’s implementation schedule creates graduated compliance requirements for different market participants. According to Commission guidance, enforcement becomes applicable “one year later as regards new models and two years later as regards existing models” placed on the market before August 2025.
The framework includes specific exemptions for models released under free and open-source licenses meeting Article 53(2) conditions. However, these exemptions do not apply to general-purpose AI models with systemic risk capabilities, ensuring continued oversight of the most capable systems.
Commission documentation indicates that providers implementing adequate codes of practice demonstrate compliance with Articles 53(1) and 55(1) obligations. The Commission may approve codes through implementing acts, providing general validity within the Union and streamlined compliance pathways for signatories.
Transitional provisions require existing model providers to take “necessary steps” for compliance by August 2, 2027. This extended timeline allows companies to adapt current systems while ensuring new models meet regulatory requirements from the August 2025 effective date.
Marketing implications shape industry adaptation
The regulatory framework creates significant implications for marketing technology providers and agencies utilizing AI-powered tools. Documentation requirements enable better assessment of model capabilities when selecting systems for campaign optimization, content generation, and audience targeting applications.
Compliance pathways developed through the code of practice provide marketing teams with clearer guidelines for evaluating AI tool selection. The transparency obligations require model providers to disclose training data sources and system capabilities, enabling more informed technology adoption decisions.
Copyright compliance measures particularly impact content creation workflows. The framework requires providers to implement policies addressing Union copyright law throughout model lifecycles, potentially affecting AI-generated marketing materials and campaign content.
Risk assessment requirements for high-capability models may influence enterprise software selection. Companies deploying AI systems for customer data analysis, personalization engines, or automated decision-making must consider enhanced compliance obligations when evaluating technology partners.
The enforcement timeline provides marketing organizations with adjustment periods to evaluate current AI tool usage and develop compliant workflows. The graduated implementation schedule allows companies to assess existing technology partnerships and plan transitions if necessary.
International coordination challenges emerge
The code of practice represents one component of broader European efforts to establish global AI governance frameworks. The multi-stakeholder development process involved nearly 1,000 participants, demonstrating the complexity of achieving consensus across diverse industry interests.
Member States and the Commission will assess the code’s adequacy in coming weeks, with potential approval through implementing acts providing “general validity within the Union.” The AI Office retains authority to develop common implementation rules if the code proves inadequate or cannot be finalized by required deadlines.
The framework’s voluntary nature during initial phases provides companies with opportunities to influence regulatory development through participation. However, mandatory enforcement beginning in 2025 ensures eventual compliance regardless of voluntary code adoption.
International coordination efforts extend beyond European initiatives. The framework aligns with broader global AI governance developments, including the G7 Hiroshima AI Process and various national AI strategies, though the Expert Group argues current efforts remain insufficient given the challenge scale.
The divergent company responses to the voluntary code foreshadow potential compliance challenges as mandatory requirements take effect. Technology companies must balance innovation objectives with regulatory obligations across multiple jurisdictions, creating complex strategic considerations for global AI development programs.
Timeline
- July 10, 2025: European Commission officially receives final General-Purpose AI Code of Practice
- July 17, 2025: AI Office invites providers to sign the GPAI Code of Practice
- July 18, 2025: Microsoft president indicates likely signing while Meta announces refusal to participate
- August 1, 2025: Signatories publicly listed ahead of mandatory enforcement
- August 2, 2025: AI Act obligations for general-purpose AI models take effect
- August 2026: Enforcement becomes applicable for new models
- August 2027: Enforcement becomes applicable for existing models placed on market before August 2025
Key Terms Explained
Campaign Optimization
Campaign optimization refers to the systematic process of improving advertising performance through data analysis and strategic adjustments. In the context of AI regulation, campaign optimization tools must now comply with transparency requirements that affect how algorithms process user data and make targeting decisions. The documentation obligations under the EU framework require marketing teams to understand the underlying AI models powering their optimization platforms, enabling more informed decisions about tool selection and compliance strategies.
Content Generation
Content generation encompasses the automated creation of marketing materials using artificial intelligence systems. The EU code of practice directly impacts this area through copyright compliance requirements, as AI models used for content creation must implement policies addressing Union copyright law throughout their operational lifecycle. Marketing teams utilizing AI-generated content must now consider the training data sources and potential copyright implications of their chosen platforms.
Audience Targeting
Audience targeting involves the strategic selection and segmentation of potential customers based on behavioral, demographic, or psychographic data. The regulatory framework affects audience targeting through enhanced transparency obligations that require AI model providers to disclose system capabilities and data processing methods. This transparency enables marketing professionals to better evaluate targeting tool effectiveness while ensuring compliance with evolving privacy and AI governance requirements.
Risk Assessment
Risk assessment in marketing contexts involves evaluating potential negative outcomes from AI-powered marketing activities. The EU framework establishes specific risk assessment requirements for high-capability AI models, particularly those exceeding computational thresholds that trigger systemic risk obligations. Marketing organizations must now consider these enhanced compliance requirements when selecting AI tools for customer data analysis and automated decision-making processes.
Compliance Pathways
Compliance pathways represent the structured approaches organizations can take to meet regulatory requirements while maintaining operational effectiveness. The voluntary code of practice creates multiple compliance pathways for AI model providers, which indirectly affects marketing teams through tool availability and feature sets. Understanding these pathways helps marketing professionals anticipate changes in AI tool capabilities and plan technology adoption strategies accordingly.
Technology Adoption
Technology adoption describes the process by which organizations integrate new technological solutions into their operational workflows. The graduated enforcement timeline for AI regulation affects technology adoption decisions by providing adjustment periods for evaluating current AI tool usage and developing compliant workflows. Marketing teams must balance innovation opportunities with compliance obligations when adopting new AI-powered marketing technologies.
Model Capabilities
Model capabilities refer to the specific functions and performance characteristics of AI systems used in marketing applications. The documentation requirements under the EU framework mandate disclosure of model capabilities, enabling marketing professionals to make more informed decisions about tool selection for specific use cases. Understanding model capabilities becomes crucial for assessing whether AI tools meet both performance requirements and compliance obligations.
Regulatory Frameworks
Regulatory frameworks encompass the comprehensive set of rules, guidelines, and enforcement mechanisms governing AI development and deployment. The EU AI Act represents a landmark regulatory framework that affects marketing technology through compliance requirements for general-purpose AI models. Marketing organizations must navigate these frameworks to ensure their AI-powered activities remain compliant while maximizing business effectiveness.
Data Processing
Data processing involves the collection, analysis, and utilization of information for marketing purposes using AI-powered systems. The EU code of practice affects data processing through enhanced transparency and documentation requirements that govern how AI models handle training data and user information. Marketing teams must understand these requirements to ensure their data processing activities align with regulatory expectations.
System Integration
System integration refers to the process of combining different AI tools and platforms to create cohesive marketing technology stacks. The regulatory framework affects system integration through requirements for downstream AI system compliance and provider accountability. Marketing organizations must consider how integration decisions impact overall compliance posture and ensure that combined systems meet regulatory obligations across the entire technology stack.
Summary
Who: Microsoft president Brad Smith signals likely company participation in EU voluntary AI compliance framework while Meta Platforms chief global affairs officer Joel Kaplan announces refusal to sign code of practice.
What: The European Commission’s voluntary General-Purpose AI Code of Practice establishes transparency, copyright, and safety obligations for AI model providers ahead of mandatory AI Act enforcement.
When: Smith’s comments came July 18, 2025, one day after the AI Office invited providers to sign the code published July 10, with mandatory compliance beginning August 2, 2025.
Where: The framework applies to providers placing general-purpose AI models on the European Union market regardless of provider location, with particular focus on high-capability systems.
Why: The code provides voluntary compliance pathways ahead of mandatory enforcement while addressing transparency needs, copyright protection, and systemic risk management for increasingly capable AI systems.
Tools & Platforms
White House AI Task Force Positions AI as Top Education Priority

When Trump administration officials met with ed-tech leaders at the White House last week to discuss the nation’s vision for artificial intelligence in American life, they repeatedly underscored one central message: Education must be at the heart of the nation’s AI strategy.
Established by President Trump’s April 2025 executive order, the White House Task Force on AI Education is chaired by director of science and technology policy Michael Kratsios, and is tasked with promoting AI literacy and proficiency among America’s youth and educators, organizing a nationwide AI challenge and forging public-private partnerships to provide AI education resources to K-12 students.
“The robots are here. Our future is no longer science fiction,” First Lady Melania Trump said in opening remarks. “But, as leaders and parents, we must manage AI’s growth responsibly. During this primitive stage, it is our duty to treat AI as we would our own children: empowering but with watchful guidance.”
MAINTAINING U.S. COMPETITIVENESS
In a recording of the meeting Sept. 4, multiple speakers, including Department of Agriculture Secretary Brooke Rollins and Special Advisor for AI and Crypto David Sacks, stressed that AI will define the future of U.S. work and international competitiveness, with explicit framing against rivals like China.
“The United States will lead the world in artificial intelligence, period, full stop, not China, not any of our other foreign adversaries, but America,” Rollins said in the recording. “We are making sure that our young people are ready to win that race.”
In order to do so, though, Sacks noted that K-12 and higher education systems must adapt quickly.
“AI is going to be the ultimate boost for our workers,” Sacks said. “And it is important that they learn from an early age how to use AI.”
The Department of Education signaled that federal funding will also shift to incentivize schools’ adoption of AI. Secretary Linda McMahon said applications that include AI-based solutions will be “more strongly considered” and could receive “bonus points” in the review process.
EMBRACING CHANGE MANAGEMENT
Several officials at the meeting urged schools and communities not to view AI as a threat, but as a tool for growth.
“It’s not one of those things to be afraid of,” McMahon said. “Let’s embrace it. Let’s develop AI-based solutions to real-world problems and cultivate an AI-informed, future-ready workforce.”
Secretary Chris Wright of the Department of Energy linked the success of AI adoption to larger infrastructure challenges.
“We will not win in AI if we don’t massively grow our electricity production,” he said. “Perhaps the killer app, the most important use of AI, is for education and to fix one of the greatest American shortcomings, our K-12 education system.”
WORKFORCE DEVELOPMENT
Workforce training and reskilling emerged as another priority, with Labor Secretary Lori Chavez-DeRemer describing apprenticeships and on-the-job training as essential to preparing workers for an AI-driven economy.
“On-the-job training programs will help build the mortgage-paying jobs that AI will create while also enhancing the unique skills required to succeed in various industries,” Chavez-DeRemer said. She tied these efforts to the president’s goal of 1 million new apprenticeships nationwide.
Alex Kotran, chief executive officer of the education nonprofit aiEDU, told Government Technology that members of the task force spent a notable amount of time discussing rural schools and the importance of reaching underserved students, especially in regard to preparing rural students for the modern workforce.
PRIVATE-SECTOR COMMITMENTS
In addition to White House officials, attendees included high-level technology executives and entrepreneurs committed to expanding U.S. AI education.
During the recorded meeting, IBM CEO Arvind Krishna pledged to train 2 million American workers in AI skills over the next three years, noting that “no organization can do it alone.” Similarly, Google CEO Sundar Pichai highlighted efforts to use AI to personalize learning worldwide, envisioning a future “where every student, regardless of their background or location, can learn anything in the world in a way that works best for them.”
In a recent co-authored blog post on Microsoft’s website, the company’s Vice Chair and President Brad Smith and LinkedIn CEO Ryan Roslansky said that empowering teachers and students with modern-day AI tools, continuously developing AI skills and creating economic opportunity by connecting new skills to jobs are the top priorities in U.S. AI education.
“We believe delivering on the real promise of AI depends on how broadly it’s diffused,” they wrote. “This requires investment and innovation in AI education, training, and job certification.”
In its efforts to increase exposure to educational AI tools, Microsoft committed to providing a year’s subscription to Copilot for college students free of charge, expanding access to Microsoft AI tools in schools, $1.25 million in educator grants for teachers pioneering AI-powered learning, free LinkedIn Learning AI courses, and AI training for job seekers and certifications for community colleges.
LOOKING AHEAD
In a phone call with Government Technology last week, Kotran, who was invited to the task force meeting, expressed excitement about it, saying he was heartened that education appears to be taking center stage in the nation’s capital.
“The White House Task Force meeting today, I think, represents an opening to actually harness the power of the White House,” he said. “But also the federal government to just motivate all the other actors that are part of the education system to make the change that’s going to be required.”
But, he emphasized, the private sector must support educators and school leaders in their adoption of AI, considering recent cuts to education funding. Whether the task force succeeds, according to Kotran, will depend on whether the private sector supports states with AI tools and implementation.
“It’s not going to be enough for a school to have one elective class called ‘introduction to AI,’” Kotran said. “The only chance we have to make progress on AI readiness is for companies, the private sector, philanthropies, to put resources on the table.”
Tools & Platforms
AI Horizons summit: Pa. must invest in energy production, embrace AI

This week’s second annual AI Horizons summit brought together academics, politicians, and leaders from storied Pittsburgh institutions and upstart startups.
“We have to combine the forces and the resources of our old and new leaders in energy industry and AI to all row in the same direction,” said Joanna Doven, executive director of AI Strike Team, which hosted the gathering. “Now is the time to radically merge.”
The two-day event unfolded at Bakery Square, the anchor for a stretch of Penn Avenue that is home to more than a dozen technology and AI companies including Google and the Pittsburgh-based Duolingo. Developers and AI enthusiasts have termed the mile-long corridor “AI Avenue.”
It was the second AI-related summit held in Pittsburgh in as many months; U.S. Sen. Dave McCormick’s inaugural Pennsylvania Energy and Innovation Summit debuted at Carnegie Mellon University in July with high-profile guests including President Donald Trump. That event touted roughly $90 billion worth of energy- and AI-related investment in the state (though a sizable chunk of that spending was already in place).
This week’s AI Horizons summit seemingly sought to forge more immediate connections between the companies, venture capitalists, and researchers in attendance, albeit at a smaller scale. On Thursday, BNY and CMU jointly announced the financial services company will invest $10 million in an AI lab at the university over the next five years.
The investment aims to “make sure that we are going to be at the very forefront of the research of how AI can apply to our firm and our industry,” said BNY CEO Robin Vince.
“Artificial Intelligence has emerged as one of the single most important intellectual developments of our time, and it is rapidly expanding into every sector of our economy,” CMU president Farnam Jahanian said in a statement. “Carnegie Mellon University is thrilled to collaborate with BNY — a global financial services powerhouse — to responsibly develop and scale emerging AI technologies and democratize their impact for the benefit of industry and society at large.”
And speakers said western Pennsylvania is well positioned to facilitate the AI boom, thanks to the expertise and skilled labor force being created by local universities including CMU and the University of Pittsburgh. The region’s industrial history and proximity to open land and natural resources needed for massive AI data centers could also help.
“Power and the ability to consume it is going to be one of the biggest challenges we face” when expanding the use of AI, said Toby Rice, president and CEO of EQT Corporation, the largest natural gas producer in the U.S.
Data centers have, as one speaker put it, an “insatiable appetite for energy,” and need vast amounts of power both to run the computers and to keep them cool.
A recent analysis from the federal Energy Information Administration predicts electricity used for commercial computing will increase faster than any other use in buildings over the next 25 years, sparking fears that the added stress on the power grid could spike rates for everyday Pennsylvanians.
“The only way to keep those prices down,” said Republican state Sen. Devlin Robinson, “is to open up the gas fields, make sure that the nuclear power plants are online, make sure that we’re cultivating the renewable energy so that we have a full grid that is able to sustain all of the energy needs of the Commonwealth and especially the region where these data centers are gonna go up.”
Democratic state Senate leader Jay Costa too encouraged an “all-of-the-above approach” to energy generation, but cautioned that “the costs cannot be solely borne by the ratepayers.
“We have to have some balance and some guardrails in place” to protect consumers, he said.
But the focus on natural gas at the summit drew criticism from local environmental groups who decried the absence of robust discussion about renewable energy like solar and wind.
The Clean Air Council’s Larisa Mednis said reliance on fossil fuels contributes to worsening climate change.
“If this technology and AI is a sign of progress or a sign of innovation, why are we relying on antiquated forms of energy use … that we know are not serving people and are not going to help sustain the planet?” Mednis asked.
Critics say investments in gas-powered data centers rarely generate long-term economic or job growth.
“Data centers are very automated, highly capital-intensive projects that can soak up billion-dollar investments like a sponge and leave next to nothing for surrounding communities,” said Joanne Kilgour, executive director of the Appalachian think tank Ohio River Valley Institute.
The environmental costs of AI drew little discussion at the summit.
Many of the conversations mirrored those that took place at the July event. Indeed, the two events shared a number of speakers, including Sen. McCormick and Pennsylvania Governor Josh Shapiro, and similar talking points.
Shapiro once again touted the state as a leader in the “AI revolution,” likening it to previous technological upheavals like the agricultural and industrial revolutions.
“We were the epicenter of growth and development and revolution because of the coal under our ground, because of the steel that we’ve made here, because of the ingenuity of our farmers,” he said. “This is the next chapter in our innovative growth as a commonwealth, which is gonna fuel growth in this country, fuel growth around the globe. And it happens because of AI.”
During a panel discussion with BNY CEO Vince and Westinghouse interim CEO Dan Sumner moderated by CMU president Jahanian, Shapiro said his administration will expand a generative AI pilot program for state employees launched in 2024.
“I view AI not as a job replacer, but a job enhancer,” he said. “We can streamline our processes and make things work more effectively.”
“We can do big things with these tools, and we are showing how to deploy them in a responsible way,” he added.
Still, leaders in government and business need to take bigger swings to get ahead in the AI arms race, said U.S. Senator Dave McCormick. Pennsylvania isn’t just competing with neighboring states to attract data centers and AI companies, he said, but also with countries that are AI powerhouses in their own right, like China.
“We’re not gonna win with incrementalism,” he said. “This has to be disruptive. We’re gonna get disrupted one way or the other. The question is whether we’re gonna be the disruptor or the disruptee.”
AI is already changing the ways people do business, McCormick added, and the amount of money being poured into the industry far exceeds that which was spent on past innovations.
“This is not something that we’re gonna be able to slow down. It is something we can guide,” McCormick said.
He urged officials in both parties to accelerate AI and energy production, and establish Pennsylvania as a leader in the industries.
“I think we have a good hand to play, but we don’t have a royal flush,” he said. “So we gotta … make the effort to improve the things that are lacking.”