The ICO’s role in balancing AI development

The Innovation Platform spoke with Sophia Ignatidou, Group Manager, AI Policy at the Information Commissioner’s Office, about the office’s role in regulating the UK’s AI sector and in balancing innovation and economic growth with robust data protection measures.
Technology is evolving rapidly, and as artificial intelligence (AI) becomes more integrated into various aspects of our lives and industries, the role of regulatory bodies like the Information Commissioner’s Office (ICO) becomes crucial.
To explore the ICO’s role in the AI regulatory landscape, Ignatidou elaborates on the office’s approach to managing AI development in the UK, covering the opportunities AI presents for economic growth, the inherent risks associated with its deployment, and the ethical considerations organisations must address.
What is the role of the Information Commissioner’s Office (ICO) in the UK’s AI landscape, and how does it enforce and raise awareness of AI legislation?
The ICO is the UK’s independent data protection authority and a horizontal regulator, meaning our remit spans both the public and private sectors, including government. We regulate the processing of personal data across the AI value chain: from data collection to model training and deployment. Since personal data underpins most AI systems that interact with people, our work is wide-ranging, covering everything from fraud detection in the public sector to targeted advertising on social media.
Our approach combines proactive engagement and regulatory enforcement. On the engagement side, we work closely with industry through our Enterprise and Innovation teams, and with the public sector via our Public Affairs colleagues. We also provide innovation services to support responsible AI development, reserving enforcement for serious breaches. Finally, we focus on public awareness, including commissioning research into public attitudes and engaging with civil society.
What opportunities for innovation and economic growth does AI present, and how can these be balanced with robust data protection?
AI offers significant potential to drive efficiency, reduce administrative burdens, and accelerate decision-making by identifying patterns and automating processes. However, these benefits will only be realised if AI addresses real-world problems rather than being a “solution in search of a problem.”
The UK is home to world-class AI talent and continues to attract leading minds. We believe that a multidisciplinary approach, combining technical expertise with insights from social sciences and economics, is essential to ensure AI development reflects the complexity of human experience.
Crucially, we do not see data protection as a barrier to innovation. On the contrary, strong data protection is fundamental to sustainable innovation and economic growth. Just as seatbelts enabled the safe expansion of the automotive industry, robust data protection builds trust and confidence in AI.
What are the potential risks associated with AI, and how does the ICO assess and mitigate them?
AI is not a single technology but an umbrella term for a range of statistical models with varying complexity, accuracy, and data requirements. The risks depend on the context and purpose of deployment.
When we identify a high-risk AI use case, we typically require the organisation, whether developer or deployer, to conduct a Data Protection Impact Assessment (DPIA). This document should outline the risks and the measures in place to mitigate them. The ICO assesses the adequacy of these DPIAs, focusing on the severity and likelihood of harm. Failure to provide an adequate DPIA can lead to regulatory action, as seen in our preliminary enforcement notice against Snap in 2023.
On a similar note, how could emerging technologies like blockchain or federated learning help resolve data protection issues?
Emerging technologies such as federated learning can help address data protection challenges by reducing the amount of personal information processed and improving security. Federated learning allows models to be trained without centralising raw data, which lowers the risk of large-scale breaches and limits exposure of personal information. When combined with other privacy-enhancing technologies, it further mitigates the risk of attackers inferring sensitive data.
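To make the idea concrete, here is a minimal federated-averaging sketch in Python; it is an illustration of the general technique under simplified assumptions (a linear model, synthetic data), not ICO guidance or any particular product. Each simulated client trains on its own data and shares only the resulting model weights.

```python
# Minimal federated-averaging sketch (illustrative only, not ICO guidance).
# Each simulated client trains a simple linear model on its own data and
# returns only the updated weights; raw records never leave the client.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training pass (plain gradient descent on MSE)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Average the locally trained weights; only parameters are shared."""
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(local_weights, axis=0)

# Synthetic per-client datasets stand in for data that stays on-site.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, clients)
print("Aggregated model weights:", w)
```

In practice, such an aggregation step is typically combined with other privacy-enhancing techniques, since model weights alone can still leak information about the underlying data.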
Blockchain, when implemented carefully, can strengthen integrity and accountability through tamper-evident records, though it must be designed to avoid unnecessary on-chain disclosure. Our detailed guidance on blockchain will be published soon and can be tracked via the ICO’s technology guidance pipeline.
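As a simple illustration of tamper-evident record-keeping (a hypothetical sketch, not the forthcoming ICO guidance), the hash chain below links each log entry to the hash of the one before it, so any later edit to an earlier record is detectable. Only event references are stored, never personal data, reflecting the point about avoiding unnecessary on-chain disclosure.

```python
# Minimal hash-chain sketch of a tamper-evident audit log (hypothetical
# example, not the ICO's forthcoming guidance). Each entry commits to the
# previous entry's hash, so altering any earlier record breaks verification.
import hashlib
import json

def append_entry(chain, record):
    """Append a record that commits to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    chain.append({
        "prev": prev_hash,
        "record": record,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(chain):
    """Recompute every hash; any edited or reordered entry is detected."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"prev": prev_hash, "record": entry["record"]},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"event": "model_training_started"})  # references only, no personal data
append_entry(log, {"event": "dpia_reviewed"})
print(verify(log))  # True; editing an earlier record would make this False
```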
What ethical concerns are associated with AI, and how should organisations address them? What is the ICO’s strategic approach?
Data protection law embeds ethical principles through its seven core principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; security; and accountability. Under the UK GDPR’s “data protection by design and by default” requirement, organisations must integrate these principles into AI systems from the outset.
Our recently announced AI and Biometrics Strategy sets out four priority areas: scrutiny of automated decision-making in government and recruitment, oversight of generative AI foundation model training, regulation of facial recognition technology in law enforcement, and development of a statutory code of practice on AI and automated decision-making. This strategy builds on our existing guidance and aims to protect individuals’ rights while providing clarity for innovators.
How can the UK keep pace with emerging AI technologies and their implications for data protection?
The UK government’s AI Opportunities Plan rightly emphasises the need to strengthen regulators’ capacity to supervise AI. Building expertise and resources across the regulatory landscape is essential to keep pace with rapid technological change.
How does the ICO engage internationally on AI regulation, and how influential are other countries’ policies on the UK’s approach?
AI supply chains are global, so international collaboration is vital. We maintain active relationships with counterparts through forums such as the G7, OECD, Global Privacy Assembly, and the European Commission. We closely monitor developments like the EU AI Act, while remaining confident in the UK’s approach of empowering sector regulators rather than creating a single AI regulator.
What is the Data (Use and Access) Act, and what impact will it have on AI policy?
The Data (Use and Access) Act requires the ICO to develop a statutory Code of Practice on AI and automated decision-making. This will build on our existing non-statutory guidance and incorporate recent positions, such as our expectations for generative AI and joint guidance on AI procurement. The code will provide greater clarity on issues such as research provisions and accountability in complex supply chains.
How can the UK position itself as a global leader in AI, and what challenges does the ICO anticipate?
The UK already plays a leading role in global AI regulation discussions. For example, the Digital Regulation Cooperation Forum, bringing together the ICO, Ofcom, CMA and FCA, has been replicated internationally. The ICO was also the first data protection authority to provide clarity on generative AI.
Looking ahead, our main challenges include recruiting and retaining AI specialists, providing regulatory clarity amid rapid technical and legislative change, and ensuring our capacity matches the scale of AI adoption.
Please note, this article will also appear in the 23rd edition of our quarterly publication.
White House AI Task Force Positions AI as Top Education Priority

When Trump administration officials met with ed-tech leaders at the White House last week to discuss the nation’s vision for artificial intelligence in American life, they repeatedly underscored one central message: Education must be at the heart of the nation’s AI strategy.
Established by President Trump’s April 2025 executive order, the White House Task Force on AI Education is chaired by Office of Science and Technology Policy Director Michael Kratsios. It is tasked with promoting AI literacy and proficiency among America’s youth and educators, organizing a nationwide AI challenge, and forging public-private partnerships to provide AI education resources to K-12 students.
“The robots are here. Our future is no longer science fiction,” First Lady Melania Trump said in opening remarks. “But, as leaders and parents, we must manage AI’s growth responsibly. During this primitive stage, it is our duty to treat AI as we would our own children: empowering but with watchful guidance.”
MAINTAINING U.S. COMPETITIVENESS
In a recording of the meeting Sept. 4, multiple speakers, including Department of Agriculture Secretary Brooke Rollins and Special Advisor for AI and Crypto David Sacks, stressed that AI will define the future of U.S. work and international competitiveness, with explicit framing against rivals like China.
“The United States will lead the world in artificial intelligence, period, full stop, not China, not any of our other foreign adversaries, but America,” Rollins said in the recording. “We are making sure that our young people are ready to win that race.”
To do so, though, Sacks noted that K-12 and higher education systems must adapt quickly.
“AI is going to be the ultimate boost for our workers,” Sacks said. “And it is important that they learn from an early age how to use AI.”
The Department of Education signaled that federal funding will also shift to incentivize schools’ adoption of AI. Secretary Linda McMahon said applications that include AI-based solutions will be “more strongly considered” and could receive “bonus points” in the review process.
EMBRACING CHANGE MANAGEMENT
Several officials at the meeting urged schools and communities not to view AI as a threat, but as a tool for growth.
“It’s not one of those things to be afraid of,” McMahon said. “Let’s embrace it. Let’s develop AI-based solutions to real-world problems and cultivate an AI-informed, future-ready workforce.”
Secretary Chris Wright of the Department of Energy linked the success of AI adoption to larger infrastructure challenges.
“We will not win in AI if we don’t massively grow our electricity production,” he said. “Perhaps the killer app, the most important use of AI, is for education and to fix one of the greatest American shortcomings, our K-12 education system.”
WORKFORCE DEVELOPMENT
Workforce training and reskilling emerged as another priority, with Labor Secretary Lori Chavez-DeRemer describing apprenticeships and on-the-job training as essential to preparing workers for an AI-driven economy.
“On-the-job training programs will help build the mortgage-paying jobs that AI will create while also enhancing the unique skills required to succeed in various industries,” Chavez-DeRemer said. She tied these efforts to the president’s goal of 1 million new apprenticeships nationwide.
Alex Kotran, chief executive officer of the education nonprofit aiEDU, told Government Technology that members of the task force spent a notable amount of time discussing rural schools and the importance of reaching underserved students, especially in regard to preparing rural students for the modern workforce.
PRIVATE-SECTOR COMMITMENTS
In addition to White House officials, attendees included high-level technology executives and entrepreneurs committed to expanding U.S. AI education.
During the recorded meeting, IBM CEO Arvind Krishna pledged to train 2 million American workers in AI skills over the next three years, noting that “no organization can do it alone.” Similarly, Google CEO Sundar Pichai highlighted efforts to use AI to personalize learning worldwide, envisioning a future “where every student, regardless of their background or location, can learn anything in the world in a way that works best for them.”
In a recent co-authored blog post on Microsoft’s website, the company’s Vice Chair and President Brad Smith and LinkedIn CEO Ryan Roslansky said that empowering teachers and students with modern-day AI tools, continuously developing AI skills and creating economic opportunity by connecting new skills to jobs are the top priorities in U.S. AI education.
“We believe delivering on the real promise of AI depends on how broadly it’s diffused,” they wrote. “This requires investment and innovation in AI education, training, and job certification.”
In its efforts to increase exposure to educational AI tools, Microsoft committed to providing college students a free one-year Copilot subscription, expanding access to Microsoft AI tools in schools, awarding $1.25 million in educator grants for teachers pioneering AI-powered learning, offering free LinkedIn Learning AI courses, and providing AI training for job seekers and certifications for community colleges.
LOOKING AHEAD
In a phone call with Government Technology last week, Kotran, who was invited to the task force meeting, expressed excitement about it, saying he was heartened that education appears to be taking center stage in the nation’s capital.
“The White House Task Force meeting today, I think, represents an opening to actually harness the power of the White House,” he said. “But also the federal government to just motivate all the other actors that are part of the education system to make the change that’s going to be required.”
But, he emphasized, the private sector must support educators and school leaders in their adoption of AI, given recent cuts to education funding. The task force’s success, according to Kotran, will depend on whether the private sector supports states with AI tools and their implementation.
“It’s not going to be enough for a school to have one elective class called ‘introduction to AI,’” Kotran said. “The only chance we have to make progress on AI readiness is for companies, the private sector, philanthropies, to put resources on the table.”
AI Horizons summit: Pa. must invest in energy production, embrace AI

This week’s second annual AI Horizons summit brought together academics, politicians, and leaders from storied Pittsburgh institutions and upstart startups.
“We have to combine the forces and the resources of our old and new leaders in energy industry and AI to all row in the same direction,” said Joanna Doven, executive director of AI Strike Team, which hosted the gathering. “Now is the time to radically merge.”
The two-day event unfolded at Bakery Square, the anchor for a stretch of Penn Avenue that is home to more than a dozen technology and AI companies including Google and the Pittsburgh-based Duolingo. Developers and AI enthusiasts have termed the mile-long corridor “AI Avenue.”
It was the second AI-related summit held in Pittsburgh in as many months; U.S. Sen. Dave McCormick’s inaugural Pennsylvania Energy and Innovation Summit debuted at Carnegie Mellon University in July with high-profile guests including President Donald Trump. That event touted roughly $90 billion worth of energy- and AI-related investment in the state (though a sizable chunk of that spending was already in place).
This week’s AI Horizons summit seemingly sought to forge more immediate connections between the companies, venture capitalists, and researchers in attendance, albeit at a smaller scale. On Thursday, BNY and CMU jointly announced the financial services company will invest $10 million in an AI lab at the university over the next five years.
The investment aims to “make sure that we are going to be at the very forefront of the research of how AI can apply to our firm and our industry,” said BNY CEO Robin Vince.
“Artificial Intelligence has emerged as one of the single most important intellectual developments of our time, and it is rapidly expanding into every sector of our economy,” CMU president Farnam Jahanian said in a statement. “Carnegie Mellon University is thrilled to collaborate with BNY — a global financial services powerhouse — to responsibly develop and scale emerging AI technologies and democratize their impact for the benefit of industry and society at large.”
And speakers said western Pennsylvania is well positioned to facilitate the AI boom, thanks to the expertise and skilled labor force being created by local universities including CMU and the University of Pittsburgh. The region’s industrial history and proximity to open land and natural resources needed for massive AI data centers could also help.
“Power and the ability to consume it is going to be one of the biggest challenges we face” when expanding the use of AI, said Toby Rice, president and CEO of EQT Corporation, the largest natural gas producer in the U.S.
Data centers have, as one speaker put it, an “insatiable appetite for energy,” and need vast amounts of power both to run the computers and to keep them cool.
A recent analysis from the federal Energy Information Administration predicts electricity used for commercial computing will increase faster than any other use in buildings over the next 25 years, sparking fears that the added stress on the power grid could spike rates for everyday Pennsylvanians.
“The only way to keep those prices down,” said Republican state Sen. Devlin Robinson, “is to open up the gas fields, make sure that the nuclear power plants are online, make sure that we’re cultivating the renewable energy so that we have a full grid that is able to sustain all of the energy needs of the Commonwealth and especially the region where these data centers are gonna go up.”
Democratic state Senate leader Jay Costa also encouraged an “all-of-the-above approach” to energy generation, but cautioned that “the costs cannot be solely borne by the ratepayers.
“We have to have some balance and some guardrails in place” to protect consumers, he said.
But the focus on natural gas at the summit drew criticism from local environmental groups who decried the absence of robust discussion about renewable energy like solar and wind.
The Clean Air Council’s Larisa Mednis said reliance on fossil fuels contributes to worsening climate change.
“If this technology and AI is a sign of progress or a sign of innovation, why are we relying on antiquated forms of energy use … that we know are not serving people and are not going to help sustain the planet?” Mednis asked.
Critics say investments in gas-powered data centers rarely generate long-term economic or job growth.
“Data centers are very automated, highly capital-intensive projects that can soak up billion-dollar investments like a sponge and leave next to nothing for surrounding communities,” said Joanne Kilgour, executive director of the Appalachian think tank Ohio River Valley Institute.
The environmental costs of AI drew little discussion at the summit.
Many of the conversations mirrored those that took place at the July event. Indeed, the two events shared a number of speakers, including Sen. McCormick and Pennsylvania Governor Josh Shapiro, and similar talking points.
Shapiro once again touted the state as a leader in the “AI revolution,” likening it to previous technological upheavals like the agricultural and industrial revolutions.
“We were the epicenter of growth and development and revolution because of the coal under our ground, because of the steel that we’ve made here, because of the ingenuity of our farmers,” he said. “This is the next chapter in our innovative growth as a commonwealth, which is gonna fuel growth in this country, fuel growth around the globe. And it happens because of AI.”
During a panel discussion with BNY CEO Vince and Westinghouse interim CEO Dan Sumner moderated by CMU president Jahanian, Shapiro said his administration will expand a generative AI pilot program for state employees launched in 2024.
“I view AI not as a job replacer, but a job enhancer,” he said. “We can streamline our processes and make things work more effectively.”
“We can do big things with these tools, and we are showing how to deploy them in a responsible way,” he added.
Still, leaders in government and business need to take bigger swings to get ahead in the AI arms race, said U.S. Senator Dave McCormick. Pennsylvania isn’t just competing with neighboring states to attract data centers and AI companies, he said, but also with countries that are AI powerhouses in their own right, like China.
“We’re not gonna win with incrementalism,” he said. “This has to be disruptive. We’re gonna get disrupted one way or the other. The question is whether we’re gonna be the disruptor or the disruptee.”
AI is already changing the ways people do business, McCormick added, and the amount of money being poured into the industry far exceeds that which was spent on past innovations.
“This is not something that we’re gonna be able to slow down. It is something we can guide,” McCormick said.
He urged officials in both parties to accelerate AI and energy production, and establish Pennsylvania as a leader in the industries.
“I think we have a good hand to play, but we don’t have a royal flush,” he said. “So we gotta … make the effort to improve the things that are lacking.”