Qantas data breach to impact 6 million airline customers

BBC News, Sydney

Qantas is contacting customers after a cyber attack targeted its third-party customer service platform.
On 30 June, the Australian airline detected “unusual activity” on a platform used by its contact centre to store the data of six million people, including names, email addresses, phone numbers, birth dates and frequent flyer numbers.
Upon detection of the breach, Qantas took “immediate steps and contained the system”, according to a statement.
The company is still investigating the full extent of the breach, but says it is expecting the proportion of data stolen to be “significant”.
It has assured the public that passport details, credit card details and personal financial information were not held in the breached system, and no frequent flyer accounts, passwords or PINs have been compromised.
Qantas has notified the Australian Federal Police of the breach, as well as the Australian Cyber Security Centre and the Office of the Australian Information Commissioner.
“We sincerely apologise to our customers and we recognise the uncertainty this will cause,” said Qantas Group CEO Vanessa Hudson.
She asked customers to call the dedicated support line if they had concerns, and confirmed that there would be no impact on Qantas’ operations or the safety of the airline.
The attack comes just days after the FBI issued an alert on X warning that the airline sector was a target of cyber criminal group Scattered Spider.
US-based Hawaiian Airlines and Canada’s WestJet have both been impacted by similar cyber attacks in the past two weeks.
The BBC revealed that the group has also been the key focus of an investigation into the wave of cyber attacks on UK retailers, including M&S.
The Qantas breach is the latest in a string of Australian data breaches this year, with AustralianSuper and Nine Media suffering significant leaks in the past few months.
In March 2025, the Office of the Australian Information Commissioner (OAIC) released statistics revealing that 2024 was the worst year for data breaches in Australia since records began in 2018.
“The trends we are observing suggest the threat of data breaches, especially through the efforts of malicious actors, is unlikely to diminish,” said Australian Privacy Commissioner Carly Kind in a statement from the OAIC.
Ms Kind urged businesses and government agencies to step up security measures and data protection, and highlighted that both the private and public sectors are vulnerable to cyber attacks.

Captions rebrands as Mirage, expands beyond creator tools to AI video research

Captions, an AI-powered video creation and editing app for content creators that has secured over $100 million in venture capital to date at a valuation of $500 million, is rebranding to Mirage, the company announced on Thursday.
The new name reflects the company’s broader ambitions to become an AI research lab focused on multimodal foundation models designed specifically for short-form video on platforms like TikTok, Reels, and Shorts. The company believes this approach will distinguish it from traditional AI models and competitors such as D-ID, Synthesia, and Hour One.
The rebranding will also unify the company’s offerings under one umbrella, bringing together the flagship creator-focused AI video platform, Captions, and the recently launched Mirage Studio, which caters to brands and ad production.
“The way we see it, the real race for AI video hasn’t begun. Our new identity, Mirage, reflects our expanded vision and commitment to redefining the video category, starting with short-form video, through frontier AI research and models,” CEO Gaurav Misra told TechCrunch.
The sales pitch behind Mirage Studio, which launched in June, focuses on enabling brands to create short advertisements without relying on human talent or large budgets. By simply submitting an audio file, the AI generates video content from scratch, with an AI-generated background and custom AI avatars. Users can also upload selfies to create an avatar using their likeness.
What sets the platform apart, according to the company, is its ability to produce AI avatars that have natural-looking speech, movements, and facial expressions. Additionally, Mirage says it doesn’t rely on existing stock footage, voice cloning, or lip-syncing.
Mirage Studio is available under the business plan, which costs $399 per month for 8,000 credits. New users receive 50% off the first month.
While these tools will likely benefit brands wanting to streamline video production and cut costs, they also raise concerns about the potential impact on the creative workforce. The growing use of AI in advertisements has prompted backlash, as seen in a recent Guess ad in Vogue’s July print edition that featured an AI-generated model.
Additionally, as the technology becomes more advanced, distinguishing between real and deepfake videos becomes increasingly difficult. That is a hard pill to swallow for many people, especially given how quickly misinformation can spread.
Mirage recently addressed its role in deepfake technology in a blog post. The company acknowledged the genuine risks of misinformation while also expressing optimism about the positive potential of AI video. It mentioned that it has put moderation measures in place to limit misuse, such as preventing impersonation and requiring consent for likeness use.
However, the company emphasized that “design isn’t a catch-all” and that the real solution lies in fostering a “new kind of media literacy” where people approach video content with the same critical eye as they do news headlines.
Head of UK’s Turing AI Institute resigns after funding threat

Graham Fraser, Technology reporter

The chief executive of the UK’s national institute for artificial intelligence (AI) has resigned following staff unrest and a warning the charity was at risk of collapse.
Dr Jean Innes said she was stepping down from the Alan Turing Institute as it “completes the current transformation programme”.
Her position had come under pressure after the government demanded the centre change its focus to defence and threatened to pull its funding if it did not – leading to staff discontent and a whistleblowing complaint submitted to the Charity Commission.
Dr Innes, who was appointed chief executive in July 2023, said the time was right for “new leadership”.
The BBC has approached the government for comment.
The Turing Institute said its board was now looking to appoint a new CEO who will oversee “the next phase” to “step up its work on defence, national security and sovereign capabilities”.
Its work had once focused on AI and data science research in environmental sustainability, health and national security, but moved on to other areas such as responsible AI.
The government, however, wanted the Turing Institute to make defence its main priority, marking a significant pivot for the organisation.
“It has been a great honour to lead the UK’s national institute for data science and artificial intelligence, implementing a new strategy and overseeing significant organisational transformation,” Dr Innes said.
“With that work concluding, and a new chapter starting… now is the right time for new leadership and I am excited about what it will achieve.”
What happened at the Alan Turing Institute?
Founded in 2015 as the UK’s leading centre of AI research, the Turing Institute, which is headquartered at the British Library in London, has been rocked by internal discontent and criticism of its research activities.
A review last year by government funding body UK Research and Innovation found “a clear need for the governance and leadership structure of the Institute to evolve”.
At the end of 2024, 93 members of staff signed a letter expressing a lack of confidence in its leadership team.
In July, Technology Secretary Peter Kyle wrote to the Turing Institute to tell its bosses to focus on defence and security.
He said boosting the UK’s AI capabilities was “critical” to national security and should be at the core of the institute’s activities – and suggested it should overhaul its leadership team to reflect its “renewed purpose”.
He said further government investment would depend on the “delivery of the vision” he had outlined in the letter.
This followed Prime Minister Sir Keir Starmer’s commitment to increasing UK defence spending to 5% of national income by 2035, which would include investing more in military uses of AI.

A month after Kyle’s letter was sent, staff at the Turing Institute warned the charity was at risk of collapse following the threat to withdraw its funding.
Workers raised a series of “serious and escalating concerns” in a whistleblowing complaint submitted to the Charity Commission.
Bosses at the Turing Institute then acknowledged recent months had been “challenging” for staff.

Global Working Group Releases Publication on Responsible Use of Artificial Intelligence in Creating Lay Summaries of Clinical Trial Results

New publication underscores the importance of human oversight, transparency, and patient involvement in AI-assisted lay summaries.
BOSTON, Sept. 4, 2025 /PRNewswire/ — The Center for Information and Study on Clinical Research Participation (CISCRP) today announced the publication of a landmark article, “Considerations for the Use of Artificial Intelligence in the Creation of Lay Summaries of Clinical Trial Results”, in Medical Writing (Volume 34, Issue 2, June 2025). Developed by the working group Patient-focused AI for Lay Summaries (PAILS), this comprehensive document addresses both the opportunities and risks of using artificial intelligence (AI) in the development of plain language communications of clinical trial results.
Lay summaries (LS) are essential tools for translating complex clinical trial results into plain language that is clear, accurate, and accessible to patients, caregivers, and the broader community. As AI technologies evolve, they hold promise for streamlining LS creation, improving efficiency, and expanding access to trial results. However, without thoughtful integration and oversight, AI-generated content can risk inaccuracies, cultural insensitivity, and loss of public trust.
For biopharma sponsors, CROs, and medical writing vendors, this framework offers clear best practices for integrating AI responsibly while maintaining compliance with EU and UK lay summary regulations and improving efficiency at scale.
Key recommendations from the working group include:
- Human oversight is essential – AI should support, not replace, expert review to ensure accuracy, clarity, and cultural sensitivity.
- Prompt engineering is a critical skillset – Thoughtful, specific prompts, including instructions on tone, reading level, terminology, structure, and disclaimers, can make the difference between usable and unusable drafts (see the sketch after this list).
- Full transparency of AI involvement – Disclosing when and how AI was used builds public trust and complies with emerging regulations such as the EU Artificial Intelligence Act.
- Robust governance frameworks – Policies should address bias, privacy, compliance, and ongoing monitoring of AI systems.
- Patient and public involvement – Including patient perspectives in review processes improves relevance and comprehension.
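To make the prompt engineering recommendation concrete, the sketch below shows, in Python, how such instructions might be assembled into a single prompt. It is a hypothetical illustration only: the function name, template wording, and example inputs are assumptions made for this article, not content from the PAILS publication.

```python
# Hypothetical sketch of a structured lay-summary prompt. It assembles the
# elements the working group highlights (tone, reading level, terminology,
# structure, disclaimers); the wording and inputs are illustrative only.

def build_lay_summary_prompt(trial_results: str, reading_level: str = "grade 8") -> str:
    """Return a prompt that constrains tone, reading level, terminology,
    structure, and disclosure for an AI-drafted lay summary."""
    instructions = [
        "You are drafting a plain-language summary of clinical trial results.",
        f"Write at a {reading_level} reading level, in a neutral, non-promotional tone.",
        "Define every medical term in everyday language the first time it appears.",
        "Use this structure: why the trial was run, who took part, what was found,"
        " and what it means for patients.",
        "Do not overstate benefits or omit side effects reported in the results.",
        "End with a disclaimer that this summary is not medical advice and that"
        " a qualified human reviewer must approve it before release.",
    ]
    return "\n".join(instructions) + "\n\nTrial results:\n" + trial_results

# Example with placeholder results text; per the human-oversight
# recommendation, the generated draft would still go to expert review.
print(build_lay_summary_prompt("120 adults received Drug X or placebo for 12 weeks..."))
```

Whatever the exact wording, the output would still be disclosed as AI-assisted and routed through expert and patient review, in line with the group’s other recommendations.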
“This considerations document is the result of thoughtful collaboration among industry, academia, and CISCRP,” said Kimbra Edwards, Senior Director of Health Communication Services at CISCRP. “By combining human expertise with AI innovation, we can ensure that clinical trial information remains transparent, accurate, and truly patient-centered.”