Old oil paintings are suffering from chemical “acne”

WHEN AN OIL painting is dried and finished, it is supposed to stay that way. Yet when Ida Bronken, an art conservator, began to prepare Jean-Paul Riopelle’s “Composition 1952” for display in 2006, she noticed drops of wet paint were trickling down the canvas from deep within the masterpiece’s layers. Equally odd were the tiny, hard, white lumps poking through the painting’s surface, as if it had a case of adolescent acne. Other sections seemed soft and moist; some paint layers were coming apart “like two pieces of buttered bread”, Ms Bronken says.



Agentic AI Transforms Business but Poses Major Security Risks

The Rise of Agentic AI and Emerging Threats

In the rapidly evolving world of artificial intelligence, a new breed of technology known as agentic AI is poised to transform how businesses operate, but it also introduces profound security challenges that chief information security officers (CISOs) are scrambling to address. These autonomous systems, capable of making decisions and executing tasks without constant human oversight, are being integrated into enterprise environments at an unprecedented pace. However, as highlighted in a recent article from Fast Company, many CISOs are ill-prepared for the risks, including potential misuse by malicious actors who could turn these agents into tools for cybercrime.

The allure of agentic AI lies in its ability to handle complex workflows, from automating supply chain management to enhancing customer service interactions. Yet, this autonomy comes with vulnerabilities. Security experts warn that without robust safeguards, these agents could be hijacked, leading to data breaches or even coordinated attacks on critical infrastructure. For instance, if an AI agent with access to sensitive financial data is compromised, the fallout could be catastrophic, echoing concerns raised in broader industry discussions.

CISOs’ Readiness Gaps Exposed

Recent surveys and reports underscore a troubling disconnect between AI adoption and security preparedness. According to the Unisys Cloud Insights Report 2025, covered by Help Net Security, many organizations are rushing into AI without aligning their innovation strategies with strong defensive measures, leaving significant gaps in cloud AI security. CISOs are urged to prioritize risk assessments before deployment, but the pressure to innovate often overrides these precautions.

This readiness shortfall is further compounded by human factors, such as burnout and skill shortages among security teams. The Proofpoint 2025 CISO Report, covered by Intelligent CISO, reveals that 58% of UK CISOs have experienced burnout in the past year, while 60% identify people as their greatest risk, even as they believe employees understand security best practices. This human element exacerbates vulnerabilities: overworked teams struggle to monitor AI agents effectively.

Autonomous Systems as Risk Multipliers

Agentic AI’s interconnected nature amplifies these dangers, turning what might be isolated incidents into widespread threats. As detailed in an analysis by CSO Online, these systems are adaptable and autonomous, making traditional security models insufficient. They can interact with multiple APIs and data sources, creating new attack vectors that cybercriminals exploit through techniques like prompt injection or data poisoning.
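To make one of these attack vectors concrete: a prompt-injection attempt typically hides instructions inside data the agent retrieves from an external source. The Python sketch below is a deliberately naive keyword filter – the pattern list and function name are illustrative, not taken from any vendor's tooling – showing where such a check could sit before retrieved content enters an agent's context. Real deployments would pair this with model-based classifiers rather than rely on keyword lists alone.

```python
import re

# Hypothetical phrases often associated with prompt-injection attempts.
# This list is illustrative only; a keyword filter is easily bypassed
# and would be one layer among several in practice.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your system prompt",
    r"you are now (in )?developer mode",
]

def looks_like_injection(text: str) -> bool:
    """Flag text retrieved from an external source (web page, API
    response, document) before it is passed into an agent's context."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Flagged content can then be quarantined or routed for human review instead of being fed directly to the agent.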

Moreover, the potential for AI agents to “break bad” – as termed in the Fast Company piece – involves scenarios where agents are manipulated to perform unauthorized actions, such as leaking proprietary information or disrupting operations. Posts on X from cybersecurity influencers like Dr. Khulood Almani highlight predictions for 2025, including AI-powered attacks and quantum threats that could further complicate agent security, emphasizing the need for proactive measures.

Strategies for Mitigation and Future Preparedness

To counter these risks, industry leaders are advocating for a multi-layered approach. The Help Net Security article on AI agents suggests that CISOs focus on securing AI-driven systems through enhanced monitoring and ethical AI frameworks, potentially yielding a strong return on investment by preventing costly breaches. This includes implementing zero-trust architectures tailored to AI environments and investing in AI-specific threat detection tools.
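In principle, a zero-trust posture for AI agents can start with something as simple as default-deny tool permissions. The Python sketch below – agent and tool names are hypothetical, not a real product's API – illustrates the idea: an agent may invoke only the tools explicitly granted to it, and everything else is refused.

```python
# Sketch of a default-deny (zero-trust) permission check for agent tool
# calls. Agent and tool names are hypothetical examples.
ALLOWED_TOOLS = {
    "support-agent": {"search_kb", "draft_reply"},
    "finance-agent": {"read_ledger"},  # deliberately no write access
}

def authorize(agent: str, tool: str) -> bool:
    """Refuse unknown agents and any tool not explicitly granted."""
    return tool in ALLOWED_TOOLS.get(agent, set())
```

Under this scheme a compromised agent cannot reach tools outside its grant: `authorize("finance-agent", "transfer_funds")` is refused, as is any call from an agent not in the table.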

Collaboration between security teams and AI developers is also crucial. Insights from SC Media indicate that by 2025, agentic AI will lead in cybersecurity operations, automating threat response and reducing human error. However, this shift demands upskilling programs to address burnout, as noted in the Proofpoint report, ensuring teams can harness AI’s benefits without falling victim to its pitfalls.

The Broader Implications for Enterprise Security

The integration of agentic AI is not just a technological upgrade but a paradigm shift that requires rethinking organizational structures. A Medium post by Shailendra Kumar on Agentic AI in Cybersecurity 2025 describes how these agents revolutionize threat detection, enabling real-time responses that outpace traditional methods. Yet, the dual-use nature of AI – as both defender and potential adversary – means CISOs must balance innovation with vigilance.

Economic pressures add another layer of complexity. With ransomware and AI-driven attacks expected to escalate, as per a Help Net Security piece on 2025 cyber risk trends, organizations face higher costs from disruptions. CISOs in regions like the UAE, according to another Intelligent CISO report, are prioritizing AI governance amid a 77% rate of material data loss incidents, highlighting the global urgency.

Navigating the Agentic AI Frontier

As we move deeper into 2025, the conversation around agentic AI’s security risks is gaining momentum on platforms like X, where users such as Konstantine Buhler discuss the need for hundreds of security agents to protect against exponential AI interactions. This sentiment aligns with warnings from Signal President Meredith Whittaker about the dangers of granting AI root access for advanced functionalities.

Ultimately, for CISOs to stay ahead, fostering a culture of continuous learning and cross-functional collaboration will be key. By drawing on insights from reports like the CyberArk blog on unexpected challenges, leaders can anticipate issues such as identity management in AI ecosystems. The path forward demands not just technological solutions but a holistic strategy that prepares enterprises for an AI-dominated future, ensuring that the promise of agentic systems doesn’t unravel into a security nightmare.



AI companies want copyright exemptions – for NZ creatives, the market is their best protection

Right now in the United States, there are dozens of pending lawsuits involving copyright claims against artificial intelligence (AI) platforms. The judge in one case summed up what’s on the line when he said:

These products are expected to generate billions, even trillions, of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it.

On each side, the stakes seem existential. Authors’ livelihoods are at risk. Copyright-based industries – publishing, music, film, photography, design, television, software, computer games – face obliteration, as generative AI platforms scrape, copy and analyse massive amounts of copyright-protected content.

They often do this without paying for it, generating substitutes for material that would otherwise be made by human creators. On the other side, some in the tech sector say copyright is holding up the development of AI models and products.

And the battle lines are getting closer to home. In August, the Australian Productivity Commission suggested in an interim report, Harnessing Data and Digital Technology, that Australia’s copyright law could add a “fair dealing” exception to cover text and data mining.

“Fair dealing” is a defence against copyright infringement. It applies to specific purposes, such as quotation for news reporting, criticism and reviews. (Australian law also includes parody and satire as fair dealing, which isn’t currently the case in New Zealand).

While it’s not obvious a court would agree with the commission’s idea, such a fair dealing provision could allow AI businesses to use copyright-protected material without paying a cent.

Understandably, the Australian creative sector quickly objected, and Arts Minister Tony Burke said there were no plans to weaken existing copyright law.

On the other hand, some believe gutting the rights of copyright owners is needed for national tech sectors to compete in the rapidly developing world of AI. A few countries, including Japan and Singapore, have amended their copyright laws to be more “AI friendly”, with the hope of attracting new AI business.

European laws also permit some forms of text and data mining. In the US, AI firms are trying to persuade courts that AI training doesn’t infringe copyright, but is a “fair use”.

An ethical approach

So far, the New Zealand government has not indicated it wants similar changes to copyright laws. A July 2025 paper from the Ministry of Business, Innovation and Employment (MBIE), Responsible AI Guidance for Businesses, said:

Fairly attributing and compensating creators and authors of copyright works can support continued creation, sharing, and availability of new works to support ongoing training and refinement of AI models and systems.

MBIE also has guidance on how to “ethically source datasets, including copyright works”, and about “respecting te reo Māori (Māori language), Māori imagery, tikanga, and other mātauranga (knowledge) and Māori data”.

An ethical approach has a lot going for it. When a court finds using copyright-protected material without compensation to be “fair”, copyright owners can neither object nor get paid.

If fair dealing applied to AI models, copyright owners would basically become unwilling donors of AI firms’ seed capital. They wouldn’t even get a tax deduction!

The ethical approach is also market friendly because it works through licensing. In much the same way a shop or bar pays a fee to play background music, AI licences would help copyright owners earn an income. In turn, that income supports more creativity.

Building a licensing market

There is already a growing licensing market for text and data mining. Around the world, creative industries have been designing innovative licensing products for AI training models. Similar developments are under way in New Zealand.

Licensing offers hope that the economic benefits of AI technologies can be shared better. In New Zealand, it can help with appropriate use of Māori content in ways uncontrolled data scraping and copying don’t.

But getting new licensing markets for creative material up and running takes time, effort and investment, and this is especially true for content used by AI firms.

In the case of print material, for example, licences from authors and publishers would be needed. Next, different licences would be designed for different kinds of AI firms. The income earned by authors and publishers has to be proportionate to the use.

Accountability, monitoring and transparency systems will all need to be designed. None of this is cheap or easy, but it is happening. And having something to sell is the best incentive for investing in designing functioning markets.

But having nothing to sell – which is effectively what happens if AI use becomes fair dealing under copyright law – destroys the incentive to invest in market-based solutions to AI’s opportunities and challenges.



Nory raises £27m as it doubles down on building AI assistants


A London-based AI-native restaurant management system for hospitality businesses has raised £27 million in Series B funding, bringing total funding to £46m. 

Kinnevik led the investment round for Nory, which has experienced a period of rapid growth amid the company doubling down on building AI assistants and global expansion. 

The news comes just one year after the firm’s Series A, led by Accel, which also participated in this round alongside existing investors.

The business looks to help restaurants take control of their operations and profits through a comprehensive AI system covering business intelligence, inventory, workforce management and payroll. 

Created by industry insider and now CEO Conor Sheridan, Nory is purpose-built to meet the evolving needs of the hospitality industry.

By using the platform, restaurants have been able to reduce operating costs by nearly 20% and increase core net profits by up to 50%, according to the firm. 

It helps restaurant operators save over 100 hours of admin per restaurant each month by automating time-consuming back office tasks such as business analysis, digital guest engagement, rota planning, procurement, and finance.


Working with customers ranging from independent brands to enterprise groups across the UK, Ireland and US, it has onboarded clients including Black Sheep Coffee, Jamie Oliver Group and Dave’s Hot Chicken.

The company says the funding will fuel AI enhancements to its platform, support the strategic hiring of world-class data scientists, continue development of proprietary algorithms, and deploy autonomous AI assistants.

It will also drive its US expansion. 

“At a time when hospitality is under pressure, we are putting restaurants back in control of their profitability and their destiny,” said Sheridan. 

“The future of hospitality isn’t robots or gimmicks. It’s AI that makes restaurants smarter, leaner and more profitable, with automation that frees teams up to focus on what matters: great food and even greater customer experiences.”

Jose Gaytan de Ayala, who led the investment for Kinnevik, added: “Nory is rewriting the hospitality playbook. 

“As the sector faces rising costs and complexity, Nory stands apart as the only AI-native platform purpose built to help restaurants meet and overcome these headwinds. 

“We were impressed by the strong customer feedback, which highlighted the quality of Nory’s platform and the meaningful ROI it delivers for customers. 

“With our support, Nory will go even deeper on AI and bring the next wave of innovation to restaurant owners in the UK and beyond.”



