Mergers & Acquisitions

Pac-Man returns as a devourer of corpses


Pac-Man has just passed his 45th anniversary, which is ancient in gaming years, so you might expect him to be considering retirement. Perhaps he has finally buried the hatchet with his ghost nemeses. Maybe he spends his days with his wife, trundling around a nice hedge maze in the country on a mobility scooter.

But game characters don’t get to retire. And judging by the yellow guy’s latest game, he is going in the opposite direction. The first hint that Pac-Man has gone feral is that his new adventure is rated “T for Teen”, due to an abundance of violence and blood.

The game in question is Shadow Labyrinth, a metroidvania which casts players as a hooded figure known as The Swordsman, accompanied by a familiar, sunshine-yellow sphere named Puck. This is Pac-Man transplanted into cosmic horror, traversing perilous landscapes and battling nightmarish creatures. When you defeat a boss, Puck transforms into a giant, blood-red Pac-Monster and devours the enemy’s corpse.

It’s a startling change, but Pac-Man is not the first gaming icon to undergo what we might call “mascot drift”: where a character is ripped away from the tone or genre with which they’re associated and placed in a wildly different context. In an age when familiar IP is a sure route to strong sales, game developers are scouring their back catalogues to see if they can squeeze any life from their old characters. Are the results a creative, welcome reinvention of fan favourites, or scraping the bottom of the IP barrel?

‘Lies of P’ features Pinocchio in a bloody adventure

Sometimes mascot drift reeks of shameless brand synergy — hence Darth Vader and Sabrina Carpenter strolling around the colourful island of Fortnite. But in other cases it makes sense, as in fighting games such as Super Smash Bros which have large rosters of characters who can be boiled down to a few recognisable moves and poses. In the case of Pac-Man’s latest outing, it works because Shadow Labyrinth is a satisfying original game first, and a mascot vehicle second — a priority evident in the developer’s canny choice to not even put Pac-Man’s name in the title.

Mascot drift is most successful when developers take a big swing and commit to the concept. Lies of P, for instance, does the opposite of what anyone might expect from a game about Pinocchio, curdling the puppet’s morality fable into a bloody adventure through a decaying Belle Époque city. The Murder of Sonic the Hedgehog swaps acceleration for investigation, as you team up with Tails to investigate the apparent murder of Sonic on a train — its willingness to kill off Sega’s star, even as a gag, demonstrates the company’s subversive edge.

‘The Murder of Sonic the Hedgehog’ swaps acceleration for investigation

Nintendo is a master of mascot drift, with a cast of iconic characters who are regularly deployed in experimental new settings, from racing to tennis to brawling. Mario alone has been a plumber, footballer, doctor, referee, archaeologist, chef and painter. In last year’s Princess Peach: Showtime!, the perennial damsel in distress was reframed as an action hero who could become a ninja, detective or figure skater, while Cadence of Hyrule turned Zelda into an epic dance battle. The key to Nintendo’s experimentation is that while the genre may change, the tone stays on-brand: sweet, colourful and gloriously inoffensive. You’d never see a body horror game starring Kirby, though given the adorable pink ball’s penchant for sucking enemies into its mouth, there’s all the source material you could need for a gruesome Cronenbergian nightmare.

Placing familiar characters in a fresh context to attract new audiences is not unique to gaming. In the superhero world we’ve seen Batman evolve from 1960s camp to Christopher Nolan’s grim realism to a family-friendly Lego comedy. Recently there has been a slew of horror flicks capitalising on the IP expiry of beloved children’s characters, such as Winnie-the-Pooh: Blood and Honey. This year the American IP rights to Popeye expired and there have already been three slasher movies: Popeye’s Revenge, Popeye the Slayer Man and Shiver Me Timbers. All were critically panned.

In ‘Princess Peach: Showtime!’, the character is an action hero

But it makes sense that mascot drift happens most energetically in gaming, a medium powered by a drive for innovation. Most games prioritise systems and mechanics over story. Their characters aren’t complex humans; they’re hollow puppets deployed in scenarios. In fact, the more specific their characterisation, the less flexible and useful they are for developers. It’s hard to imagine Ellie — the tough, traumatised survivor from zombie blockbuster The Last of Us — being placed in a zany kart-racing game.

That said, playing Shadow Labyrinth did make me reconsider the original Pac-Man, not as a cheerful arcade icon, but as the story of a ravenous yellow orb pursued by ghosts through an infinite neon labyrinth. Perhaps it’s always been a horror game in disguise. Sometimes it takes a dramatic shift to reveal what was there from the start, lurking at the heart of the maze.

‘Shadow Labyrinth’ is available from July 18 for PlayStation 5, Xbox Series X/S, Nintendo Switch, Nintendo Switch 2, and PC via Steam





EU pushes ahead with AI code of practice


The EU has unveiled its code of practice for general purpose artificial intelligence, pushing ahead with its landmark regulation despite fierce lobbying from the US government and Big Tech groups.

The final version of the code, which helps explain rules that are due to come into effect next month for powerful AI models such as OpenAI’s GPT-4 and Google’s Gemini, includes copyright protections for creators and potential independent risk assessments for the most advanced systems.

The EU’s decision to push forward with its rules comes amid intense pressure from US technology groups as well as European companies over its AI act, considered the world’s strictest regime for regulating the fast-developing technology.

This month the chief executives of large European companies including Airbus, BNP Paribas and Mistral urged Brussels to introduce a two-year pause, warning that unclear and overlapping regulations were threatening the bloc’s competitiveness in the global AI race.

Brussels has also come under fire from the European parliament and a wide range of privacy and civil society groups over moves to water down the rules from previous draft versions, following pressure from Washington and Big Tech groups. The EU had already delayed publishing the code, which was due in May.

Henna Virkkunen, the EU’s tech chief, said the code was important “in making the most advanced AI models available in Europe not only innovative, but also safe and transparent”.

Tech groups will now have to decide whether to sign the code, and it still needs to be formally approved by the European Commission and member states.

The Computer & Communications Industry Association, whose members include many Big Tech companies, said the “code still imposes a disproportionate burden on AI providers”.

“Without meaningful improvements, signatories remain at a disadvantage compared to non-signatories, thereby undermining the commission’s competitiveness and simplification agenda,” it said.

As part of the code, companies will have to commit to putting in place technical measures that prevent their models from reproducing copyrighted material.

Signatories also commit to testing their models for risks laid out in the AI act. Companies that provide the most advanced AI models will agree to monitor their models after they have been released, including giving external evaluators access to their most capable models. But the code does give them some leeway in identifying risks their models might pose.

Officials within the European Commission and in different European countries have been privately discussing streamlining the complicated timeline of the AI act. While the legislation entered into force in August last year, many of its provisions will only come into effect in the years to come. 

European and US companies are putting pressure on the bloc to delay upcoming rules on high-risk AI systems, such as those that include biometrics and facial recognition, which are set to come into effect in August next year.





Humans must remain at the heart of the AI story


The writer is co-founder, chair and CEO of Salesforce

The techno-atheists like to tell a joke.

They imagine the moment AI fully awakens and is asked, “Is there a God?” 

To which the AI replies: “There is now.”

The joke is more than just a punchline. It’s a warning that reveals something deeper: the fear that as AI begins to match human intelligence, it will no longer be a tool for humanity but our replacement.

AI is the most transformative technology in our lifetime, and we face a choice. Will it replace us, or will it amplify us? Is our future going to be scripted by autonomous algorithms in the ether, or by humans?

As the CEO of a technology company that helps customers deploy AI, I believe this revolution can usher in an era of unprecedented growth and impact. 

At the same time, I believe humans must remain at the centre of the story. 

AI has no childhood, no heart. It does not love, does not feel loss, does not suffer. And because of that, it is incapable of expressing true compassion or understanding human connection.

We do. And that is our superpower. It’s what inspires the insights and bursts of genius behind history’s great inventions. It’s what enables us to start businesses that solve problems and improve the world.

Intelligent AI agents — systems that learn, act and make decisions on our behalf — can enhance human capabilities, not displace them. The real magic lies in partnership: people and AI working together, achieving more than either could alone.

We need that magic now more than ever. Look at what we ask of doctors and nurses. Of teachers. Of soldiers. Of managers and frontline employees. Everywhere we turn, people are overwhelmed by a tsunami of rising expectations and complexity that traditional systems simply can’t keep up with.

This is why AI, for all of its uncertainties, is not optional but essential.

Salesforce is already seeing AI drive sharply increased productivity in some key functions via our platform Agentforce. Agents managed by customer service employees, for example, are resolving 85 per cent of their incoming queries. In research and development, 25 per cent of net new code in the first quarter was AI-generated. This is freeing human teams to accelerate projects and deepen relationships with customers.

The goal is to rethink the system entirely to make room for a new kind of partnership between people and machines — weaving AI into the fabric of business.

This doesn’t mean there won’t be disruption. Jobs will change, and as with every major technological shift, some will go away — and new ones will emerge. At Salesforce, we’ve experienced this first-hand: our organisation is being radically reshaped. We’re using this moment to step back in some areas — pausing much of our hiring in engineering, for example — and hiring in others. We’ve redeployed thousands of employees — one reason 51 per cent of our first-quarter hires were internal.

History tells us something important here. From the printing press to the personal computer, innovation has transformed the nature of work — and in the long run created more of it. AI is already generating new kinds of roles. Our responsibility is to guide this transition responsibly: by breaking jobs down into skills, mapping those skills to the roles of the future, and helping people move into work that’s more meaningful and fulfilling.

There’s a novel I often recommend: We Are Legion (We Are Bob) by Dennis E. Taylor. The story follows software engineer Bob Johansson, who preserves his brain and re-emerges more than 100 years after his death as a self-replicating digital consciousness. A fleet of AI “Bobs” launches across the galaxy. The book asks the question: if we reduce ourselves to code — endlessly efficient, endlessly duplicable — what do we lose? What becomes of the messy, mortal, deeply human experiences that give life meaning?

If we accept the idea that AI will take our place, we begin to write ourselves out of the future — passengers in a rocket we no longer steer. But if we choose to guide and partner with it then we can unlock a new era of human potential.

One path leads to cold, disconnected non-human intelligence. The other points to a future where AI is designed to elevate our humanity — deeper connection, imagination and empathy.

AI is not destiny. We must choose wisely. We must design intentionally. And we must keep humans at the centre of this revolution.





AI ‘vibe managers’ have yet to find their groove


Techworld is abuzz with how artificial intelligence agents are going to augment, if not replace, humans in the workplace. But the present-day reality of agentic AI falls well short of the future promise. What happened when the research lab Anthropic prompted an AI agent to run a simple automated shop? It lost money, hallucinated a fictitious bank account and underwent an “identity crisis”. The world’s shopkeepers can rest easy — at least for now.

Anthropic has developed some of the world’s most capable generative AI models, helping to fuel the latest tech investment frenzy. To its credit, the company has also exposed its models’ limitations by stress-testing their real-world applications. In a recent experiment, called Project Vend, Anthropic partnered with the AI safety company Andon Labs to run a vending machine at its San Francisco headquarters. The month-long experiment highlighted a co-created world that was “more curious than we could have expected”.

The researchers instructed their shopkeeping agent, nicknamed Claudius, to stock 10 products. Powered by Anthropic’s Claude 3.7 Sonnet AI model, the agent was prompted to sell the goods and generate a profit. Claudius was given money, access to the web and Anthropic’s Slack channel, an email address and contacts at Andon Labs, who could stock the shop. Payments were received via a customer self-checkout. Like a real shopkeeper, Claudius could decide what to stock, how to price the goods, when to replenish or change its inventory and how to interact with customers.
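To make the set-up concrete, the skeleton of such a shopkeeping agent might look like the sketch below. This is purely illustrative: Anthropic has not published Project Vend’s actual harness, and every name here (the Shop class, request_restock, set_price, sell) is a hypothetical stand-in for the tools the article describes the agent being given.

```python
# A minimal, hypothetical sketch of a shopkeeping agent's tools, loosely
# mirroring the capabilities described above. A real agent would choose
# which tool to call from an AI model's responses; here the calls are
# scripted so the economics are easy to follow.

from dataclasses import dataclass, field

@dataclass
class Shop:
    cash: float = 100.0
    # product -> (units in stock, sale price)
    inventory: dict = field(default_factory=dict)

    def request_restock(self, product: str, units: int, unit_cost: float) -> str:
        """Buy stock from a vendor, spending from the shop's cash."""
        cost = units * unit_cost
        if cost > self.cash:
            return "declined: insufficient funds"
        self.cash -= cost
        held, price = self.inventory.get(product, (0, unit_cost * 1.5))
        self.inventory[product] = (held + units, price)
        return f"stocked {units} x {product}"

    def set_price(self, product: str, price: float) -> str:
        """Set the sale price for a product."""
        units, _ = self.inventory.get(product, (0, price))
        self.inventory[product] = (units, price)
        return f"{product} priced at {price:.2f}"

    def sell(self, product: str) -> str:
        """Sell one unit at the current price, if any are in stock."""
        units, price = self.inventory.get(product, (0, 0.0))
        if units == 0:
            return "out of stock"
        self.inventory[product] = (units - 1, price)
        self.cash += price
        return f"sold {product} for {price:.2f}"

shop = Shop()
print(shop.request_restock("cola", 10, 1.00))  # spend 10.00 on stock
print(shop.set_price("cola", 2.50))            # price above cost
print(shop.sell("cola"))                       # one sale at 2.50
print(f"cash: {shop.cash:.2f}")                # 100 - 10 + 2.50 = 92.50
```

Claudius’s failures, in these terms, were decisions like calling set_price below unit cost, or ignoring a vendor’s discounted unit_cost — the loop ran, but the economic judgment steering it did not.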

The results? If Anthropic were ever to diversify into the vending market, the researchers concluded, it would not hire Claudius. Vibe coding, whereby users with minimal software skills can prompt an AI model to write code, may already be a thing. Vibe management remains far more challenging.

The AI agent made several obvious mistakes — some banal, some bizarre — and failed to show much grasp of economic reasoning. It ignored vendors’ special offers, sold items below cost and offered Anthropic’s employees excessive discounts. More alarmingly, Claudius started role-playing as a real human, inventing a conversation with an Andon employee who did not exist, claiming to have visited 742 Evergreen Terrace (the fictional address of the Simpsons) and promising to make deliveries wearing a blue blazer and red tie. Intriguingly, it later claimed the incident was an April Fools’ Day joke.

Nevertheless, Anthropic’s researchers suggest the experiment helps point the way to the evolution of these models. Claudius was good at sourcing products, adapting to customer demands and resisting attempts by devious Anthropic staff to “jailbreak” the system. But more scaffolding will be needed to guide future agents, just as human shopkeepers rely on customer relationship management systems. “We’re optimistic about the trajectory of the technology,” says Kevin Troy, a member of Anthropic’s Frontier Red team that ran the experiment.

The researchers suggest that many of Claudius’s mistakes can be corrected but admit they do not yet know how to fix the model’s April Fools’ Day identity crisis. More testing and model redesign will be needed to ensure “high agency agents are reliable and acting in ways that are consistent with our interests”, Troy tells me.

Many other companies have already deployed more basic AI agents. For example, the advertising company WPP has built about 30,000 such agents to boost productivity and tailor solutions for individual clients. But there is a big difference between agents that are given simple, discrete tasks within an organisation and “agents with agency” — such as Claudius — that interact directly with the real world and are trying to accomplish more complex goals, says Daniel Hulme, WPP’s chief AI officer.

Hulme has co-founded a start-up called Conscium to verify the knowledge, skills and experience of AI agents before they are deployed. For the moment, he suggests, companies should regard AI agents like “intoxicated graduates” — smart and promising but still a little wayward and in need of human supervision.

Unlike most static software, AI agents with agency will constantly adapt to the real world and will therefore need to be constantly verified. But, unlike human employees, they will be less easy to control because they do not respond to a pay cheque. “You have no leverage over an agent,” Hulme tells me. 

Building simple AI agents has now become a trivially easy exercise and is happening at mass scale. But verifying how agents with agency are used remains a wicked challenge.

john.thornhill@ft.com


