AI Research
Meet your descendants – and your future self! A trip to Venice film festival’s extended reality island | Film

In the largest cinema at the Venice film festival, guests gather for the premiere of Frankenstein, Guillermo del Toro’s lavish account of a man who dared to play God and created a monster. When the young scientist reanimates a dead body for his colleagues, some see it as a trick while others are outraged. “It’s an abomination, an obscenity,” shouts one hide-bound old timer, and his alarm is partly justified. Every technological breakthrough opens Pandora’s box. You don’t know what’s going to crawl out or where it will then choose to go.
Behind the main festival venue sits the small ruined island of Lazzaretto Vecchio. Since 2017, it’s been home to Venice Immersive, the event’s groundbreaking section dedicated to showcasing and supporting XR (extended reality) storytelling. Before that it was a storage facility, before that a plague quarantine zone. Eliza McNitt, this year’s jury president, remembers the time when work on the exhibits had to be paused because the builders had uncovered human bones in the ground. “There’s something haunting about the fact that we come to the oldest film festival in the world to present this new form of cinema,” she says. “We’re exploring the medium of the future, but we’re also in conversation with ghosts.”
There are 69 different monsters on the island this year and these range from spacious walk-through installations to intricate virtual worlds that can be toured in a headset. Frankenstein’s monster, of course, wound up turning on its creator, and McNitt acknowledges that there are similar concerns about immersive art, which tends to be lumped alongside AI in the public mind, folded in with the runaway technology that threatens to consume us all.
“Immersive storytelling is a completely different conversation from AI,” she says. “But there is a genuine fear about what AI means for the film industry. And that’s mostly down to a misconception that you can type a prompt – “make me a movie” – and a movie magically appears, which is absolutely not the case. In practice, it’s about using AI tools to create something personal and unique in collaboration with an enormous team of other dedicated artists.” AI won’t replace people, she adds, “because AI doesn’t have taste”.
McNitt is an early adopter of AI tools and used them most recently on her autobiographical 2025 film Ancestra. Other film-makers, she suspects, are not far behind. “I think there are only a couple of experiences here on the island that are just beginning to experiment with the tools,” she says. “But next year you’re going to see it a lot more involved in all aspects of these projects.”
Immersive storytelling’s berth at the Venice film festival conveniently aligns it with cinema itself and encourages visitors to view it as a natural extension, or the heir to the throne. Several mainstream Hollywood directors have already crossed over. Asteroid, for instance, is a high-stakes space thriller about a mining expedition gone horribly wrong, spearheaded by Doug Liman, the director of Swingers and The Bourne Identity. His producing partner, Julina Tatlock, tells me that the interactive short film effectively returns Liman to his early independent roots and has allowed him to devise and produce a project free from studio constraints. Asteroid is a labour of love and part of a larger story that may yet see life as a straight feature film. “Doug’s obsessed with space,” she says.
There’s a similarly cinematic quality to The Clouds Are Two Thousand Metres Up, a rapturous arthouse drama in which a bereaved young widower pursues his wife’s spirit through the pages of her unfinished novel. Its Taiwanese director, Singing Chen, has worked in conventional film as well as VR and feels that each form has its strengths. “Immersive art is to cinema what cinema was to photography,” she says. “When cinema arrived, it didn’t replace photography, because the still image has power and value. It affects us in a different way than the moving image.”
The films on the schedule at Venice are largely known quantities. More often than not, we’ll be familiar with the actors and the director and can largely intuit the plot. Whereas the artworks on the island could be almost anything: an immersive video, an installation, a hyperactive adventure or a virtual world to explore. In the space of an afternoon, the visitor can bounce from the arcade-game interactivity of Samantha Gorman and Danny Cannizzaro’s Face Jumping to Kate Voet and Victor Maes’s wrenching human drama A Long Goodbye, in which a husband and wife battle the onset of dementia, to Chloé Lee’s excellent, ingenious Reflections of Little Red Dot, which repurposes an old analogue slide projector for a vivid whistle-stop tour of Singapore’s cultural history. Each experience demands a leap of faith and depends on a certain willingness to get lost. You might fall on your face, but you may also achieve lift-off.
Three projects stand out from this year’s Venice lineup. Ancestors, by Steye Hallema, is a boisterous ensemble interactive in which visitors are first paired as couples, then organised into extended families and are shown pictures of their descendants on their synchronised smartphones. It’s a rarity among the immersive experiences in that it’s a purely communal affair, joyous and slightly chaotic in the manner of all good happy families. If Ancestors is about the importance of human relationships, then the form and the content are in perfect harmony here.
Craig Quintero and Phoebe Greenberg’s extraordinary Blur (probably the hottest ticket on the island) covers similar ground to Ancestors with its focus on cloning and identity, genesis and extinction, although it takes the form of precision-tooled immersive theatre. It’s head-spinningly strange, provocative and seductive. In the closing moments, the user is approached by an eerie VR version of themselves in old age: an emissary from the future; the shape of things to come. Distressingly, the bald, withered figure that shuffled in my direction looked only a year or two older than I am now.
If there is a real-world equivalent of the Frankenstein scene in which the angry scientists cry “abomination” and “obscenity”, it occurs on the boat ride to the island when a middle-aged Italian man takes issue with the producers of a sensory installation called Dark Rooms. The producers are satanists, he insists. They assure him they are not. “Maybe not,” says the man. “But you have made a work of Satan.” Actually, Dark Rooms is terrific and not satanic at all, even if it does spend the majority of its time underground, in the shadows. Co-directed by Mads Damsbo, Laurits Flensted-Jensen and Anne Sofie Steen Sverdrup, this rites-of-passage tale spirits the user on a jolting, intense trip through the corners of queer subculture, through nightclubs and back rooms and finally out over the sea. It’s brilliant and unnerving and ultimately rather moving. Visitors, I’m told, tend to wander out in a daze.
In the early editions of Venice Immersive, most stories erred on the side of simplicity, as if to reassure newcomers who might be put off by the technology. But the medium has now gained in confidence. It’s broken out of the nursery and reached adolescence. The work has turned more potent, daring and psychologically complex. It’s no accident that many of the best Venice Immersive experiences are about ancestors and descendants and the links between the two. Nor for that matter that so many of them feature scenes that take place aboard moving trains and fragile bridges and inside open elevators. Whether intentionally or not, the medium is telling us where it is: at an interstitial stage, in transit, in progress. It’s travelling between worlds, busily finding its range as it heads into the future.
Arista touts liquid cooling, optical tech to reduce power consumption for AI networking

Both technologies will likely find a role in future AI and optical networks, experts say, as both promise to reduce power consumption and support improved bandwidth density. Both have advantages and disadvantages as well – co-packaged optics (CPOs) are more complex to deploy given the amount of technology included in a CPO package, whereas linear pluggable optics (LPOs) promise more simplicity.
Arista co-founder Andy Bechtolsheim said that LPO can provide an additional 20% power savings over other optical forms. Early tests show good receiver performance even under degraded conditions, though transmit paths remain sensitive to reflections and crosstalk at the connector level, he added.
At the recent Hot Interconnects conference, he said: “The path to energy-efficient optics is constrained by high-volume manufacturing,” stressing that advanced optics packaging remains difficult and risky without proven production scale.
“We are nonreligious about CPO, LPO, whatever it is. But we are religious about one thing, which is the ability to ship very high volumes in a very predictable fashion,” Bechtolsheim said at the investor event. “So, to put this in quantity numbers here, the industry expects to ship something like 50 million OSFP modules next calendar year. The current shipment rate of CPO is zero, okay? So going from zero to 50 million is just not possible. The supply chain doesn’t exist. So, even if the technology works and can be demonstrated in a lab, to get to the volume required to meet the needs of the industry is just an incredible effort.”
“We’re all in on liquid cooling to reduce power, eliminating fan power, supporting the linear pluggable optics to reduce power and cost, increasing rack density, which reduces data center footprint and related costs, and most importantly, optimizing these fabrics for the AI data center use case,” Bechtolsheim added.
“So what we call the ‘purpose-built AI data center fabric’ around Ethernet technology is to really optimize AI application performance, which is the ultimate measure for the customer in both the scale-up and the scale-out domains. Some of this includes full switch customization for customers. Other cases, it includes the power and cost optimization. But we have a large part of our hardware engineering department working on these things,” he said.
Learning by Doing: AI, Knowledge Transfer, and the Future of Skills | American Enterprise Institute

In a recent blog, I discussed Stanford University economist Erik Brynjolfsson’s new study showing that young college graduates are struggling to gain a foothold in a job market shaped by artificial intelligence (AI). His analysis found that, since 2022, early-career workers in AI-exposed roles have seen employment growth lag 13 percent behind peers in less-exposed fields. At the same time, experienced workers in the same jobs have held steady or even gained ground. The conclusion: AI isn’t eliminating work outright, but it is affecting the entry-level rungs that young workers depend on as they begin climbing career ladders.
The potential consequences of these findings, assuming they bear out, become clearer when read alongside Enrique Ide’s recent paper, Automation, AI, and the Intergenerational Transmission of Knowledge. Ide argues that when firms automate entry-level tasks, new workers lose the chance to absorb tacit knowledge: the workplace norms and rhythms of team-based work that aren’t necessarily written down. That knowledge stops being passed on, so productivity gains accrue to seasoned workers while would-be novices lose the hands-on training they need to build the foundation for career progress.
This short-circuiting of early career experiences, Ide says, has macroeconomic consequences. He estimates that automating even five percent of entry-level tasks reduces long-run US output growth by 0.05 percentage points per year; at 30 percent automation, growth slows by more than 0.3 points. Over a hundred-year timeline, this would reduce total output by 20 percent relative to a world without AI automation. In other words: automating the bottom rungs might lift firms’ quarterly performance, but at the cost of generational growth.
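Ide’s headline numbers are compounding effects: a small annual drag on growth snowballs over a century. A back-of-envelope sketch (my own illustration of the compounding arithmetic, not Ide’s actual model, which is richer than a constant growth drag) makes the mechanism concrete:

```python
# Illustration only: how a constant annual drag on growth compounds
# into a large long-run output gap. Not Ide's model, which is richer.

def output_gap(drag_pp_per_year: float, years: int = 100) -> float:
    """Fraction of output lost after `years`, relative to a baseline,
    if the annual growth factor is lowered by the given drag."""
    d = drag_pp_per_year / 100.0
    return 1.0 - (1.0 - d) ** years

# 0.05 pp/year drag (Ide's estimate for 5% task automation): ~5% lower output
print(round(output_gap(0.05), 3))
# 0.3 pp/year drag (30% task automation): roughly a quarter lower
print(round(output_gap(0.3), 3))
```

The simple constant-drag arithmetic lands in the same ballpark as the paper’s figure: a drag of a few tenths of a point per year compounds into a 20-to-26 percent output shortfall over a century.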
This is where we need to pause and take a breath. While Ide’s results sound dramatic, it is critical to remember that the dynamics and consequences of AI adoption are unpredictable, and that a century is a very long time. For instance, who would have said in 2022 that one of the first effects of AI automation would be to benefit less tech-savvy boomer and Gen-X managers and harm freshly minted Gen-Z coders?
Given the history of positive, automation-induced wealth and employment effects, why would this time be different?
Finally, it’s important to remember that in a dynamic market-driven economy, skill requirements are always changing and firms are always searching for ways to improve their efficiency relative to competitors. This is doubly true as we enter the era of cognitive, as opposed to physical, automation. AI-driven automation is part of the pathway to a more prosperous economy and society for ourselves and for future generations. As my AEI colleague Jim Pethokoukis recently said, “A supposedly powerful general-purpose technology that left every firm’s labor demand utterly unchanged wouldn’t be much of a GPT.” Said another way, unless AI disrupts our economy and lives, it cannot deliver its promised benefits.
What then should we do? I believe the most important step we can take right now is to begin “stress-testing” our current workforce development policies and programs and building scenarios for how industry and government will respond should significant AI-related job disruptions occur. Such scenario planning could be shaped into a flexible “playbook” of options, geared to the types and numbers of affected workers, to guide policymakers. No such planning occurred before the automation and trade shocks of the 1990s and 2000s, with lasting consequences for factory workers and American society. We should try to make sure this doesn’t happen again with AI.
Pessimism is easy and cheap. We should resist the lure of social media-monetized AI doomerism and focus on building the future we want to see by preparing for and embracing change.
SBU Researchers Use AI to Advance Alzheimer’s Detection

Alzheimer’s disease is one of the most urgent public health challenges for aging Americans. Nearly seven million Americans over the age of 65 are currently living with the disease, and that number is projected to nearly double by 2060, according to the Alzheimer’s Association.
Early diagnosis and continuous monitoring are crucial to improving care and extending independence, but there isn’t enough high-quality, Alzheimer’s-specific data to train artificial intelligence systems that could help detect and track the disease.
Shan Lin, associate professor of Electrical and Computer Engineering at Stony Brook University, and PhD candidate Heming Fu are working with Guoliang Xing from The Chinese University of Hong Kong to create a network of data on Alzheimer’s patients. Together they developed SHADE-AD (Synthesizing Human Activity Datasets Embedded with AD features), a generative AI framework designed to create synthetic, realistic data that reflects the motor behaviors of Alzheimer’s patients.

Movements like stooped posture, reliance on armrests when standing from sitting, or slowed gait may appear subtle, but can be early indicators of the disease. By identifying and replicating these patterns, SHADE-AD provides researchers and physicians with the data required to improve monitoring and diagnosis.
Unlike existing generative models, which often rely on and output generic datasets drawn from healthy individuals, SHADE-AD was trained to embed Alzheimer’s-specific traits. The system generates three-dimensional “skeleton videos,” simplified figures that preserve details of joint motion. These 3D skeleton datasets were validated against real-world patient data, with the model proving capable of reproducing the subtle changes in speed, angle, and range of motion that distinguish Alzheimer’s behaviors from those of healthy older adults.
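To make the “skeleton video” idea concrete, here is a minimal sketch of how motor features like gait speed and knee range of motion can be computed from per-frame 3D joint coordinates. This is my own illustration, not SHADE-AD’s code, and the joint-index convention (0 = pelvis, 1/2/3 = hip/knee/ankle) is hypothetical:

```python
import math

def knee_angle(hip, knee, ankle):
    """Angle at the knee (degrees) from three (x, y, z) joint positions."""
    a = [h - k for h, k in zip(hip, knee)]      # knee -> hip vector
    b = [p - k for p, k in zip(ankle, knee)]    # knee -> ankle vector
    dot = sum(u * v for u, v in zip(a, b))
    norm = math.sqrt(sum(u * u for u in a)) * math.sqrt(sum(v * v for v in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def gait_features(frames, fps=30.0):
    """frames: list of per-frame joint lists; joint 0 = pelvis,
    joints 1/2/3 = hip/knee/ankle (index convention is hypothetical).
    Returns mean walking speed and knee range of motion."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    steps = [dist(frames[t][0], frames[t + 1][0])
             for t in range(len(frames) - 1)]
    speed = fps * sum(steps) / len(steps)        # distance units per second
    angles = [knee_angle(f[1], f[2], f[3]) for f in frames]
    rom = max(angles) - min(angles)              # degrees
    return speed, rom
```

Features of this kind (slowed gait, reduced joint range) are the subtle signals the article says distinguish Alzheimer’s-related movement, and the same computations can be used to check synthetic skeletons against real patient recordings.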
The findings, published and presented at the 23rd ACM Conference on Embedded Networked Sensor Systems (SenSys 2025), have been significant. Activity recognition systems trained with SHADE-AD’s data achieved higher accuracy across all major tasks compared with systems trained on traditional data augmentation or general open datasets. In particular, SHADE-AD excelled at recognizing actions like walking and standing up, which often reveal the earliest signs of decline for Alzheimer’s patients.

Lin believes this work could have a significant impact on the daily lives of older adults and their families. Technologies built on SHADE-AD could one day allow doctors to detect Alzheimer’s sooner, track disease progression more accurately, and intervene earlier with treatments and support. “If we can provide tools that spot these changes before they become severe, patients will have more options, and families will have more time to plan,” he said.
With September recognized nationally as Healthy Aging Month, Lin sees this research as part of an effort to use technology to support older adults in living longer, healthier, and more independent lives. “Healthy aging isn’t only about treating illness, but also about creating systems that allow people to thrive as they grow older,” he said. “AI can be a powerful ally in that mission.”
— Beth Squire