AI Research
Bending it like Beckham: Soccer’s New Upstart Artificial Intelligence Tool Could Change Sports Forever

Sports analytics have revolutionized the way games are understood, evolving from simple box score statistics in baseball to detailed pass attempt charts in football. In continuous invasion sports like soccer and hockey, where play is fast and decisions are made in real time, much of the most valuable data remains limited to proprietary systems within professional organizations.
Researchers at the University of Waterloo have developed an AI-based method to make complex tracking data more accessible to scientists, teams, and fans.
Simulating the Game
The project was led by Dr. David Radke, a recent Waterloo PhD graduate and current senior research scientist with the NHL’s Chicago Blackhawks, and PhD student Kyle Tilbury. The team used Google Research Football, a platform that lets AI systems play simulated soccer matches, to train virtual players to move, pass, and adjust tactics across thousands of simulated matches, generating a dataset that reflects the flow of real games for analysis and strategy development.
To demonstrate this method, the researchers generated tracking data from 3,000 simulated games, recording information such as passes, goals, and player movements. In this context, tracking data refers to precise records of each player’s position and actions at every moment in the simulation. Although the AI players do not perform at the level of professional athletes, the datasets are detailed enough to support research in sports analytics.
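For readers curious what such a pipeline looks like in practice, here is a minimal sketch using the open-source Google Research Football package (`gfootball`). The article does not describe the team’s actual agents, match configuration, or logging schema, so the random-action policy and the record format below are illustrative assumptions only.

```python
# Sketch: harvesting tracking data from simulated matches with the
# open-source `gfootball` package. Random actions stand in for trained
# agents, and the frame schema is invented for illustration.
import gfootball.env as football_env

def collect_tracking_data(num_games=3, max_steps=3000):
    """Play simulated matches and log per-frame player and ball positions."""
    env = football_env.create_environment(
        env_name="11_vs_11_stochastic",  # a full 11-vs-11 match scenario
        representation="raw",            # raw observations expose coordinates
        render=False,
    )
    frames = []
    for game in range(num_games):
        obs = env.reset()
        for step in range(max_steps):
            obs, reward, done, info = env.step(env.action_space.sample())
            state = obs[0]  # raw observations: one dict per controlled player
            frames.append({
                "game": game,
                "step": step,
                "left_team_xy": state["left_team"].tolist(),   # 11 (x, y) pairs
                "right_team_xy": state["right_team"].tolist(),
                "ball_xyz": state["ball"].tolist(),
                "score": list(state["score"]),
            })
            if done:
                break
    env.close()
    return frames
```

In a full pipeline of this kind, the random policy would be replaced by trained agents and the frames written to disk, yielding the kind of position-per-moment records the researchers describe.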
“While researchers have access to a lot of data about episodic sports like baseball, continuous invasion-game sports like soccer and hockey are much more difficult to analyze,” Radke said. “While the AI-generated players might not exactly play like Lionel Messi, the simulated datasets they generate are still useful for developing sports analysis tools.”
Equal Opportunities
Professional teams invest heavily in advanced analytics systems that track every movement and decision players make on the field. Because of their analytical complexity, these systems are often prohibitively expensive for smaller organizations, universities, or independent analysts.
By simulating matches with AI, Radke and Tilbury’s method produces open-access sports data, enabling broader participation in analytics research beyond professional organizations.
“Enabling researchers to have this data will open up all kinds of opportunities,” Tilbury said. “It’s a democratization of access to this kind of sports analytics data.”
Beyond the Box Score
Invasion sports are those in which teams compete to gain possession and move into the opponent’s territory to score. Soccer, hockey, and other invasion sports are difficult to analyze because play is continuous and athletes are always in motion. Unlike baseball, where each pitch is a separate event, soccer involves a constant flow of decisions such as where to pass, when to shoot, and how to defend. Without advanced tracking systems, it has been hard to capture and study the complexity of these movements.
These datasets are important because they allow researchers to develop and test new tools for player evaluation, strategy modeling, and outcome prediction. By increasing the availability of this technology to universities and smaller organizations, the project may encourage broader innovation in the analytical study of invasion sports.
Implications for Future AI Research
Beyond sports, the work contributes to the development of AI itself. In Radke’s words:
“At its core, invasion-game sports analytics is about understanding complex multiagent systems. The better we are at modeling the complexity of human behaviour in a sporting situation, the more useful that is for AI research. In turn, more advanced multiagent systems will help us better understand invasion-game sports.”
By using simulated soccer matches as platforms to study cooperation, competition, and decision-making, this research strengthens AI’s ability to model dynamic multiagent systems. Techniques developed for tracking and modeling player movement can also inform AI work in areas such as self-driving cars, robotics, and collaborative problem-solving in complex environments.
Opening the Playbook
The research team emphasizes that widespread access to tracking data is essential for advancing the field of sports analytics. Their simulated datasets are designed to support further research and provide students and scientists with opportunities to develop models without relying on restricted data provided by professional leagues.
The study was presented at the 24th International Conference on Autonomous Agents and Multiagent Systems. For Radke and Tilbury, it’s not just about soccer; it’s about giving more people the tools to innovate.
Open access to sports data will not only accelerate advances in analytics and AI research but also empower a wider community to tackle complex challenges in modeling human behavior and decision-making, shaping the future of both sports and technology.
Austin Burgess is a writer and researcher with a background in sales, marketing, and data analytics. He holds a Master of Business Administration and a Bachelor of Science in Business Administration, along with a certification in Data Analytics. His work combines analytical training with a focus on emerging science, aerospace, and astronomical research.
AI Research
What is artificial intelligence’s greatest risk? – Opinion

Risk dominates current discussions on AI governance. This July, Geoffrey Hinton, a Nobel and Turing laureate, addressed the World Artificial Intelligence Conference in Shanghai. His speech bore the title he has used almost exclusively since leaving Google in 2023: “Will Digital Intelligence Replace Biological Intelligence?” He stressed, once again, that AI might soon surpass humanity and threaten our survival.
Scientists and policymakers from China, the United States, European countries and elsewhere nodded gravely in response. Yet this apparent consensus masks a profound paradox in AI governance. Conference after conference, the world’s brightest minds have identified shared risks. They call for cooperation and sign declarations, then watch the world return to fierce competition the moment the panels end.
This paradox troubled me for years. I trust science, but if the threat is truly existential, why can’t even survival unite humanity? Only recently did I grasp a disturbing possibility: these risk warnings fail to foster international cooperation because defining AI risk has itself become a new arena for international competition.
Traditionally, technology governance follows a clear causal chain: identify specific risks, then develop governance solutions. Nuclear weapons pose stark, objective dangers: blast yield, radiation, fallout. Climate change offers measurable indicators and an increasingly solid scientific consensus. AI, by contrast, is a blank canvas. No one can definitively establish whether the greatest risk is mass unemployment, algorithmic discrimination, superintelligent takeover, or something else entirely that we have not yet imagined.
This uncertainty transforms AI risk assessment from scientific inquiry into strategic gamesmanship. The US emphasizes “existential risks” from “frontier models”, terminology that spotlights Silicon Valley’s advanced systems.
This framework positions American tech giants as both sources of danger and essential partners in control. Europe focuses on “ethics” and “trustworthy AI”, extending its regulatory expertise from data protection into artificial intelligence. China advocates that “AI safety is a global public good”, arguing that risk governance should not be monopolized by a few nations but serve humanity’s common interests, a narrative that challenges Western dominance while calling for multipolar governance.
Corporate actors prove equally adept at shaping risk narratives. OpenAI’s emphasis on “alignment with human goals” highlights both genuine technical challenges and the company’s particular research strengths. Anthropic promotes “constitutional AI” in domains where it claims special expertise. Other firms excel at selecting safety benchmarks that favor their approaches, while suggesting the real risks lie with competitors who fail to meet these standards. Computer scientists, philosophers, economists: each professional community shapes its own value through narrative, warning of technical catastrophe, revealing moral hazards, or predicting labor market upheaval.
The causal chain of AI safety has thus been inverted: we construct risk narratives first, then deduce technical threats; we design governance frameworks first, then define the problems requiring governance. Defining the problem creates causality. This is not epistemological failure but a new form of power, namely making your risk definition the unquestioned “scientific consensus”. How we define “artificial general intelligence”, which applications constitute “unacceptable risk”, what counts as “responsible AI”: the answers to these questions will directly shape future technological trajectories, industrial competitive advantages, international market structures, and even the world order itself.
Does this mean AI safety cooperation is doomed to empty talk? Quite the opposite. Understanding the rules of the game enables better participation.
AI risk is constructed. For policymakers, this means advancing your agenda in international negotiations while understanding the genuine concerns and legitimate interests behind others’.
Acknowledging construction doesn’t mean denying reality: regardless of how risks are defined, solid technical research, robust contingency mechanisms, and practical safeguards remain essential. For businesses, this means considering multiple stakeholders when shaping technical standards and avoiding winner-takes-all thinking.
True competitive advantage stems from unique strengths rooted in local innovation ecosystems, not opportunistic positioning. For the public, this means developing “risk immunity”, learning to discern the interest structures and power relations behind different AI risk narratives, neither paralyzed by doomsday prophecies nor seduced by technological utopias.
International cooperation remains indispensable, but we must rethink its nature and possibilities. Rather than pursuing a unified AI risk governance framework, a consensus that is neither achievable nor necessary, we should acknowledge and manage the plurality of risk perceptions. The international community needs not one comprehensive global agreement superseding all others, but “competitive governance laboratories” where different governance models prove their worth in practice. This polycentric governance may appear loose but can achieve higher-order coordination through mutual learning and checks and balances.
We habitually view AI as another technology requiring governance, without realizing it is changing the meaning of “governance” itself. The competition to define AI risk isn’t global governance’s failure but its necessary evolution: a collective learning process for confronting the uncertainties of transformative technology.
The author is an associate professor at the Center for International Security and Strategy, Tsinghua University.
The views don’t necessarily represent those of China Daily.
AI Research
Albania’s prime minister appoints an AI-generated ‘minister’ to tackle corruption

TIRANA, Albania — Albania’s prime minister on Friday tapped an artificial intelligence-generated “minister” to tackle corruption and promote transparency and innovation in his new Cabinet.
Officially named Diella — the female form of the word for sun in the Albanian language — the new AI minister is a virtual entity.
Diella will be a “member of the Cabinet who is not present physically but has been created virtually,” Prime Minister Edi Rama said in a post on Facebook.
Rama said the AI-generated bot would help ensure that “public tenders will be 100% free of corruption” and will help the government work faster and with full transparency.
Diella uses up-to-date AI models and techniques to carry out its assigned duties accurately, according to the website of Albania’s National Agency for Information Society.
Diella, depicted as a figure in a traditional Albanian folk costume, was created earlier this year, in cooperation with Microsoft, as a virtual assistant on the e-Albania public service platform, where she has helped users navigate the site and get access to about 1 million digital inquiries and documents.
Rama’s Socialist Party secured a fourth consecutive term after winning 83 of the 140 Assembly seats in the May 11 parliamentary elections. The party can govern alone and pass most legislation, but it needs a two-thirds majority, or 93 seats, to change the Constitution.
The Socialists have said they can deliver European Union membership for Albania within five years, with negotiations concluding by 2027. The pledge has been met with skepticism by the Democrats, who contend Albania is far from prepared.
The Western Balkan country opened full negotiations to join the EU a year ago. The new government also faces the challenges of fighting organized crime and corruption, which has remained a top issue in Albania since the fall of the communist regime in 1990.
Diella will also help local authorities speed up their work and adapt to the bloc’s working practices.
Albanian President Bajram Begaj has tasked Rama with forming the new government. Analysts say that gives the prime minister authority “for the creation and functioning” of the AI-generated Diella.
Asked by journalists whether that violates the constitution, Begaj stopped short on Friday of describing Diella’s role as a ministerial post.
The conservative opposition Democratic Party-led coalition, headed by former prime minister and president Sali Berisha, won 50 seats. The party has not accepted the official election results, claiming irregularities, but its members participated in the new parliament’s inaugural session. The remaining seats went to four smaller parties.
Lawmakers will vote on the new Cabinet, but it is unclear whether Rama will ask for a vote on Diella’s virtual post. Legal experts say more work may be needed to establish Diella’s official status.
The Democrats’ parliamentary group leader Gazmend Bardhi said he considered Diella’s ministerial status unconstitutional.
“Prime minister’s buffoonery cannot be turned into legal acts of the Albanian state,” Bardhi posted on Facebook.
Parliament began the process on Friday to swear in the new lawmakers, who will later elect a new speaker and deputies and formally present Rama’s new Cabinet.
AI Research
AI fuels false claims after Charlie Kirk’s death, CBS News analysis reveals

False claims, conspiracy theories and posts naming people with no connection to the incident spread rapidly across social media in the aftermath of conservative activist Charlie Kirk’s killing on Wednesday, some amplified and fueled by AI tools.
CBS News identified 10 posts by Grok, X’s AI chatbot, that misidentified the suspect before his identity was released; the suspect is now known to be southern Utah resident Tyler Robinson. Grok eventually generated a response saying it had incorrectly identified the suspect, but by then, posts featuring the wrong person’s face and name were already circulating across X.
The chatbot also generated altered “enhancements” of photos released by the FBI. One such photo was reposted by the Washington County Sheriff’s Office in Utah, which later posted an update saying, “this appears to be an AI enhanced photo” that distorted the clothing and facial features.
One AI-enhanced image portrayed a man appearing much older than Robinson, who is 22. An AI-generated video that smoothed out the suspect’s features and jumbled his shirt design was posted by an X user with more than 2 million followers and was reposted thousands of times.
On Friday morning, after Utah Gov. Spencer Cox announced that the suspect in custody was Robinson, Grok’s replies to X users’ inquiries about him were contradictory. One Grok post said Robinson was a registered Republican, while others reported he was a nonpartisan voter. Voter registration records indicate Robinson is not affiliated with a political party.
CBS News also identified a dozen instances where Grok said that Kirk was alive the day following his death. Other Grok responses gave a false assassination date, labeled the FBI’s reward offer a “hoax” and said that reports about Kirk’s death “remain conflicting” even after his death had been confirmed.
Most generative AI tools produce results based on probability, which can make it challenging for them to provide accurate information in real time as events unfold, S. Shyam Sundar, a professor at Penn State University and the director of the university’s Center for Socially Responsible Artificial Intelligence, told CBS News.
“They look at what is the most likely next word or next passage,” Sundar said. “It’s not based on fact checking. It’s not based on any kind of reportage on the scene. It’s more based on the likelihood of this event occurring, and if there’s enough out there that might question his death, it might pick up on some of that.”
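Sundar’s point can be made concrete with a toy example. The sketch below uses invented candidate words and probabilities (no real model) to show how sampling the next word by likelihood, rather than by checking facts, lets online speculation leak into a model’s answers.

```python
# Toy illustration of next-word sampling. The continuations and their
# probabilities are invented for this example; no real model is queried.
import random

# Hypothetical model output for the prompt "Reports about his death are ..."
next_word_probs = {
    "confirmed": 0.40,
    "conflicting": 0.35,  # online speculation inflates this probability mass
    "a hoax": 0.15,
    "exaggerated": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# The model samples by likelihood, not truth: roughly a third of generations
# here would continue with "conflicting", whatever the actual facts are.
samples = random.choices(words, weights=weights, k=5)
print(samples)
```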
X did not respond to a request for comment about the false information Grok was posting.
Meanwhile, the AI-powered search engine Perplexity’s X bot described the shooting as a “hypothetical scenario” in a since-deleted post, and suggested a White House statement on Kirk’s death was fabricated.
Perplexity’s spokesperson told CBS News that “accurate AI is the core technology we are building and central to the experience in all of our products,” but that “Perplexity never claims to be 100% accurate.”
Another spokesperson added the X bot is not up to date with improvements the company has made to its technology, and the company has since removed the bot from X.
Google’s AI Overview, a summary of search results that sometimes appears at the top of searches, also provided inaccurate information. The AI Overview for a search late Thursday evening for Hunter Kozak, the last person to ask Kirk a question before he was killed, incorrectly identified him as the person of interest the FBI was looking for. By Friday morning, the false information no longer appeared for the same search.
“The vast majority of the queries seeking information on this topic return high quality and accurate responses,” a Google spokesperson told CBS News. “Given the rapidly evolving nature of this news, it’s possible that our systems misinterpreted web content or missed some context, as all Search features can do given the scale of the open web.”
Sundar told CBS News that people tend to perceive AI as being less biased or more reliable than someone online who they don’t know.
“We don’t think of machines as being partisan or biased or wanting to sow seeds of dissent,” Sundar said. “If it’s just a social media friend or somebody on the contact list that’s sent something on your feed with unknown pedigree … chances are people trust the machine more than they do the random human.”
Misinformation may also be coming from foreign sources, according to Cox, Utah’s governor, who said in a press briefing on Thursday that foreign adversaries including Russia and China have bots that “are trying to instill disinformation and encourage violence.” Cox urged listeners to spend less time on social media.
“I would encourage you to ignore those and turn off those streams, and to spend a little more time with our families,” he said.