AI Insights
Assassin’s Creed Mirage will get fresh content later this year and it’ll be completely free

The Assassin’s Creed fanbase may be waiting for the first DLC for Assassin’s Creed Shadows, but Ubisoft has instead confirmed new content for its previous title, Assassin’s Creed Mirage. The studio announced on the official Assassin’s Creed X account that there will be a new story chapter and missions for protagonist Basim, who will venture into ninth-century AlUla. More importantly, the DLC will be free.
According to the post, Ubisoft will bring gameplay improvements to both the new content and the base game, which revisits the franchise’s roots of open-world design and stealth combat. The announcement comes after a Les Echos report earlier this year said that new content for Assassin’s Creed Mirage was being created through a partnership between Ubisoft and Savvy Games Group, a gaming and esports company backed by the Saudi Arabian government.
The upcoming DLC sheds more light on what Stephane Boudon, one of the Ubisoft developers on Assassin’s Creed Mirage, teased during a Reddit AMA following the game’s release in October 2023. In the thread, Boudon said the game was designed “as a standalone experience without any DLC plan,” adding only that the team had “ideas of how we could extend the story of Basim.” Ubisoft didn’t specify exactly when the DLC would drop, revealing only that it would arrive “later this year.” In the meantime, Microsoft updated its Xbox Game Pass lineup for August, which includes Assassin’s Creed Mirage.
AI Insights
The human thinking behind artificial intelligence

Artificial intelligence is built on the thinking of intelligent humans, including data labellers who are paid as little as US$1.32 per hour. Zena Assaad, an expert in human-machine relationships, examines the price we’re willing to pay for this technology. This article was originally published in the Cosmos Print Magazine in December 2024.
From Blade Runner to The Matrix, science fiction depicts artificial intelligence as a mirror of human intelligence. It’s portrayed as holding a capacity to evolve and advance with a mind of its own. The reality is very different.
The original conceptions of AI, which hailed from the earliest days of computer science, defined it as the replication of human intelligence in machines. This definition invites debate on the semantics of the notion of intelligence.
Can human intelligence be replicated?
The idea of intelligence is not contained within one neat definition. Some view intelligence as an ability to remember information, others see it as good decision making, and some see it in the nuances of emotions and our treatment of others.
As such, human intelligence is an open and subjective concept. Replicating this amorphous notion in a machine is very difficult.
Software is the foundation of AI, and software is binary in its construct: something made of two things or parts. In software, numbers and values are expressed as 1 or 0, true or false. This dichotomous design does not reflect the many shades of grey in human thinking and decision making.
Not everything is simply yes or no. Part of that nuance comes from intent and reasoning, which are distinctly human qualities.
To have intent is to pursue something with an end or purpose in mind. AI systems can be thought to have goals, in the form of functions within the software, but this is not the same as intent.
The main difference is that goals are specific, measurable objectives, whereas intent is the underlying purpose and motivation behind pursuing them.
You might think of goals as the ‘what’ and intent as the ‘why’.
To have reasoning is to consider something with logic and sensibility, drawing conclusions from old and new information and experiences. It is based on understanding rather than pattern recognition. AI does not have the capacity for intent and reasoning and this challenges the feasibility of replicating human intelligence in a machine.
There is a cornucopia of principles and frameworks that attempt to address how we design and develop ethical machines. But if AI is not truly a replication of human intelligence, how can we hold these machines to human ethical standards?
Can machines be ethical?
Ethics is a study of morality: right and wrong, good and bad. Imparting ethics to a machine, which is distinctly not human, seems misplaced. How can we expect a binary construct, which cannot reason, to behave ethically?
Similar to the semantic debate around intelligence, defining ethics is its own Pandora’s box. Ethics is amorphous, changing across time and place. What is ethical to one person may not be to another. What was ethical 5 years ago may not be considered appropriate today.
These changes are based on many things: culture, religion, economic climates, social demographics, and more. The idea of machines embodying these very human notions is improbable, and so it follows that machines cannot be held to ethical standards. However, what can and should be held to ethical standards are the people who make decisions for AI.
Contrary to popular belief, technology of any form does not develop of its own accord. The reality is that its evolution has been puppeteered by humans. Human beings are the ones designing, developing, manufacturing, deploying and using these systems.
If an AI system produces an incorrect or inappropriate output, it is because of a flaw in the design, not because the machine is unethical.
The concept of ethics is fundamentally human. To apply this term to AI, or any other form of technology, anthropomorphises these systems. Attributing human characteristics and behaviours to a piece of technology creates misleading interpretations of what that technology is and is not capable of.
Decades of messaging about synthetic humans and killer robots have shaped how we conceptualise the advancement of technology, in particular technology which claims to replicate human intelligence.
AI applications have scaled exponentially in recent years, with many AI tools being made freely available to the general public. But freely accessible AI tools come at a cost. In this case, the cost is ironically in the value of human intelligence.
The hidden labour behind AI
At a basic level, artificial intelligence works by finding patterns in data, which involves more human labour than you might think.
ChatGPT is one example of AI, a type of system referred to as a large language model (LLM). It is trained on carefully labelled data, which adds context, in the form of annotations and categories, to what is otherwise a lot of noise.
Using labelled data to train an AI model is referred to as supervised learning. Labelling an apple as “apple”, a spoon as “spoon”, a dog as “dog”, helps to contextualise these pieces of data into useful information.
When you enter a prompt into ChatGPT, it scours the data it has been trained on to find patterns matching those within your prompt. The more detailed the data labels, the more accurate the matches. Labels such as “pet” and “animal” alongside the label “dog” provide more detail, creating more opportunities for patterns to be exposed.
Data is made up of an amalgam of content (images, words, numbers, etc.) and it requires this context to become useful information that can be interpreted and used.
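For readers who want to see the mechanics, here is a minimal, hypothetical sketch of supervised learning in Python using scikit-learn: a handful of invented text snippets are paired with human-supplied labels, and the model learns word patterns that let it label new content. The dataset, the labels and the library choice are illustrative assumptions; this is not how ChatGPT itself is trained, but it shows why the human-applied labels matter.

```python
# A minimal sketch of supervised learning on human-labelled data.
# The tiny dataset and label names here are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each raw piece of content only becomes useful training data once a human attaches a label.
texts = [
    "golden retriever chasing a ball in the park",
    "tabby cat asleep on the windowsill",
    "sliced apple and a bowl of oranges",
    "stainless steel spoon next to a cereal bowl",
]
labels = ["dog", "cat", "fruit", "utensil"]  # the human-supplied context

# Train a simple classifier: it learns which words tend to appear under which label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# A new, unlabelled input is matched against the patterns learned from the labelled data.
print(model.predict(["retriever chasing a stick in the park"]))
# Expected: ['dog'], because 'retriever', 'chasing' and 'park' appear in the dog-labelled example.
```

Richer labels (for example ‘dog’, ‘pet’ and ‘animal’ attached to the same item) simply give a model more patterns to match against, which is exactly why more granular labelling is in such demand.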
As the AI industry continues to grow, there is greater demand for more accurate products. One of the main ways of achieving this is through more detailed and granular labels on training data.
Data labelling is a time-consuming and labour-intensive process. In the absence of this work, data is not usable or understandable by an AI model that operates through supervised learning.
Despite the task being essential to the development of AI models and tools, the work of data labellers often goes entirely unnoticed and unrecognised.
Data labelling is done by human experts and these people are most commonly from the Global South – Kenya, India and the Philippines. This is because data labelling is labour intensive work and labour is cheaper in the Global South.
Data labellers are forced to work under stressful conditions, reviewing content depicting violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest.
Data labellers are pressured to meet high demands within short timeframes. For this, they earn as little as US$1.32 per hour, according to TIME magazine’s 2023 reporting, based on an OpenAI contract with data labelling company Sama.
Countries such as Kenya, India and the Philippines have less legal and regulatory oversight of worker rights and working conditions.
As in the fast fashion industry, cheap labour enables cheaply accessible products; in the case of AI, often a free one.
AI tools are commonly free or cheap to access and use because costs are being cut around the hidden labour that most people are unaware of.
When thinking about the ethics of AI, cracks in the supply chain of development rarely come to the surface of these discussions. People are more focused on the machine itself, rather than how it was created. How a product is developed, be it an item of clothing, a TV, furniture or an AI-enabled capability, has societal and ethical impacts that are far reaching.
A numbers game
In today’s digital world, organisational incentives have shifted beyond revenue and now include metrics around the number of users.
Releasing free tools for the public to use exponentially scales the number of users and opens pathways for alternate revenue streams.
That means we now have a greater level of access to technology tools at a fraction of the cost, or even at no monetary cost at all. This is a recent and rapid change in the way technology reaches consumers.
In 2011, 35% of Americans owned a smartphone; by 2024, 97% owned a mobile phone of some kind. In 1973, a new TV retailed for US$379.95, the equivalent of US$2,694.32 today. Today, a new TV can be purchased for much less than that.
Increased manufacturing has historically been accompanied by cost cutting in both labour and quality. We accept poorer quality products because our expectations around consumption have changed. Instead of buying things to last, we now buy things with the expectation of replacing them.
The fast fashion industry is an example of hidden labour and of how readily consumers accept it. Between 1970 and 2020, the average British household decreased its annual spending on clothing, despite the average consumer buying 60% more pieces of clothing.
The allure of cheap or free products seems to dispel ethical concerns around labour conditions. Similarly, the allure of intelligent machines has created a facade around how these tools are actually developed.
Achieving ethical AI
Artificial intelligence technology cannot embody ethics; however, the manner in which AI is designed, developed and deployed can.
In 2021, UNESCO released a set of recommendations on the ethics of AI, which focus on the impacts of the implementation and use of AI. The recommendations do not address the hidden labour behind the development of AI.
Misinterpretations of AI, particularly those which encourage the idea of AI developing with a mind of its own, isolate the technology from the people designing, building and deploying that technology. These are the people making decisions around what labour conditions are and are not acceptable within their supply chain, and what remuneration is and isn’t appropriate for the skills and expertise required for data labelling.
If we want to achieve ethical AI, we need to embed ethical decision making across the AI supply chain; from the data labellers who carefully and laboriously annotate and categorise an abundance of data through to the consumers who don’t want to pay for a service they have been accustomed to thinking should be free.
Everything comes at a cost, and ethics is about what costs we are and are not willing to pay.
AI Insights
Apple sued by authors over use of books in AI training

Technology giant Apple was accused by authors in a lawsuit on Friday of illegally using their copyrighted books to help train its artificial intelligence systems, part of an expanding legal fight over protections for intellectual property in the AI era.
The proposed class action, filed in federal court in Northern California, said Apple copied protected works without consent and without credit or compensation.
“Apple has not attempted to pay these authors for their contributions to this potentially lucrative venture,” according to the lawsuit, filed by authors Grady Hendrix and Jennifer Roberson.
Apple and lawyers for the plaintiffs did not immediately respond to requests for comment on Friday.
The lawsuit is the latest in a wave of cases from authors, news outlets and others accusing major technology companies of violating legal protections for their works.
Artificial intelligence startup Anthropic on Friday disclosed in a court filing in California that it agreed to pay $1.5 billion to settle a class action from a group of authors who accused the company of using their books to train its AI chatbot Claude without permission.
Anthropic did not admit any liability in the accord, which lawyers for the plaintiffs called the largest publicly reported copyright recovery in history.
In June, Microsoft was hit with a lawsuit by a group of authors who claimed the company used their books without permission to train its Megatron artificial intelligence model. Meta Platforms and Microsoft-backed OpenAI also have faced claims over the alleged misuse of copyrighted material in AI training.
The lawsuit against Apple accused the company of using a known body of pirated books to train its “OpenELM” large language models.
Hendrix, who lives in New York, and Roberson, who lives in Arizona, allege in the lawsuit that their works were part of the pirated dataset.
AI Insights
Artificial Intelligence Legal Holds: Preserving Prompts & Outputs

You are your company’s in-house legal counsel. It’s 3 PM on a Friday (because of course it is), and you’ve just received notice of impending litigation. Your first thought? “Time to issue a legal hold.” Your second thought, as you watch your colleague casually chatting with Claude about contract drafting? “Oh no… what about all the AI stuff?”
Welcome to 2025, where your legal hold obligations just got an AI-powered upgrade you never signed up for. This isn’t just theoretical hand-wringing. Companies are already being held accountable for incomplete AI-related preservation, and the costs are real — both in terms of litigation exposure and the scramble to retrofit compliance systems that never anticipated chatbots.
The Plot Twist Nobody Saw Coming
Remember when legal holds meant telling people not to delete their emails? The foundational duty to preserve electronically stored information (ESI) when litigation is “reasonably anticipated” remains the cornerstone of legal hold obligations. However, generative AI’s emergence has significantly complicated this well-established framework. Courts are increasingly making clear that AI-generated content, including prompts and outputs, constitutes ESI subject to traditional preservation obligations.
Those were simpler times. Now, every prompt your team types into ChatGPT, every piece of AI-generated marketing copy, and yes, even that time someone asked Perplexity for restaurant recommendations during a business trip — it’s all potentially discoverable ESI.
Or so say several recent court decisions:
- In the In re OpenAI, Inc. Copyright Infringement Litigation MDL (SDNY), Magistrate Judge Ona T. Wang ordered OpenAI to preserve and segregate all output log data that would otherwise be deleted (whether deletion would occur by user choice or to satisfy privacy laws). Judge Sidney H. Stein later denied OpenAI’s objection and left the order standing (now on appeal to the Second Circuit). This is the clearest signal yet that courts will prioritize litigation preservation over default deletion settings.
- In Tremblay v. OpenAI (N.D. Cal.), the district court issued a sweeping order requiring OpenAI “to preserve and segregate all output log data that would otherwise be deleted on a going forward basis.” The Tremblay court dropped a truth bomb on us: AI inputs — prompts — can be discoverable.
- And although not AI-specific, recent chat-spoliation rulings (e.g., Google’s chat auto-delete practices) show that judges expect parties to suspend auto-delete once litigation is reasonably anticipated. These cases serve as analogs for AI chat tools.
Your New Reality Check: What Actually Needs Preserving?
Let’s break down what’s now on your preservation radar (a rough capture sketch follows these lists):
The Obvious Stuff:
- Every prompt typed into AI tools (yes, even the embarrassing ones)
- All AI-generated outputs used for business purposes
- The metadata showing who, what, when, and which AI model
The Not-So-Obvious Stuff:
- Failed queries and abandoned outputs (they still count!)
- Conversations in AI-powered Slack bots and Teams integrations
- That “quick question” someone asked Claude about a competitor
The “Are You Kidding Me?” Stuff:
- Deleted conversations (spoiler alert: they’re often not really deleted)
- Personal AI accounts used for work purposes
- AI-assisted research that never made it into final documents
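To make those categories concrete, here is a rough, hypothetical Python sketch of what capturing prompts, outputs and the who/what/when/which-model metadata might look like. Every field name, path and format here is an illustrative assumption, not a compliance recipe or legal advice.

```python
# A hypothetical sketch of capturing AI prompts, outputs and metadata under a legal hold.
# Field names, paths and the JSONL format are illustrative assumptions only.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class AIInteractionRecord:
    user: str       # who
    tool: str       # which AI tool or model
    prompt: str     # the input, including failed or abandoned queries
    output: str     # the generated response, even if never used downstream
    timestamp: str  # when, in UTC
    source: str     # e.g. "web UI", "Slack bot", "Teams integration"

def preserve(record: AIInteractionRecord, hold_dir: Path = Path("legal_hold")) -> None:
    """Append the record to a dedicated hold log, outside normal auto-delete workflows."""
    hold_dir.mkdir(exist_ok=True)
    with (hold_dir / "ai_interactions.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

preserve(AIInteractionRecord(
    user="j.doe",
    tool="example-llm (hypothetical deployment)",
    prompt="Draft an indemnification clause for the Acme supply agreement",
    output="[model response text]",
    timestamp=datetime.now(timezone.utc).isoformat(),
    source="Slack bot",
))
```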
Of course, knowing what to preserve is only half the battle. The real challenge? Actually implementing AI-aware legal holds when your IT department is still figuring out how to monitor these tools, your employees are using personal accounts for work-related AI, and new AI integrations appear in your tech stack on a weekly basis.
Next week, we’ll dive into the practical playbook for AI preservation — including the compliance frameworks that actually work, the vendor questions you should be asking, and why your current legal hold software might be more helpful than you think (or more useless than you fear).
P.S. – Yes, this blog post was ideated, outlined, and brooded over with the assistance of AI. Yes, we preserved the prompts. Yes, we’re practicing what we preach. No, we’re not perfect at it yet either.