

Cure or Curse?

By Alex Chepovoi

Artificial intelligence entered our daily lives almost overnight. Raw and imperfect as it was, AI-powered chatbots quickly proved useful in customer support, marketing, and software development. Today, in 2025, AI is no longer a futuristic concept – it is a disruptive force reshaping the global economy.

The scale of its impact is staggering. Analysts estimate that 300 million jobs worldwide are at risk of being automated. For many, AI has become less of an assistant and more of a rival: faster, cheaper, and more precise at analytics, coding, content generation, translation, and strategy.

But is AI truly a curse on the labor market? Or could it also be a cure?

A Curse

The benefits for corporations are undeniable. Microsoft saved more than $500 million in a single year by automating call centers. IBM and Google have replaced large parts of their HR departments with AI platforms like AskHR, which field millions of employee queries with little human involvement. At Microsoft, 30% of all code is now AI-generated, coinciding with massive layoffs of engineers.

The World Economic Forum’s Future of Jobs Report 2025 revealed that 41% of employers plan to downsize because of AI. The victims are spread across industries: HR specialists, market researchers, transcriptionists, data entry clerks, writers, designers and video editors.

The fallout extends further. According to CNBC, experts expect the next two decades to bring major job losses across roles once considered secure: cashiers, truck drivers, journalists, factory workers and software engineers.

Forbes echoed those concerns, citing predictions that by 2030, half of all entry-level jobs could vanish. The reason is brutally simple: machines outperform humans at speed, accuracy, and pattern recognition. They don’t need sick days or holidays.

Even hiring itself has been taken over. Applicant Tracking Systems (ATS) now screen most résumés before a recruiter ever sees them. If your CV doesn’t include the exact skills or keywords that AI deems relevant, you may never make it past the first round – even if you’re qualified. At Global Work AI we have noticed that applicants are routinely overlooked for falling short of a 90% keyword match, despite years of relevant experience.
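To make that concrete, here is a minimal, purely illustrative sketch of the kind of keyword-match scoring an ATS filter approximates. The function name, keyword list, and 90% cutoff are assumptions for illustration only, not a description of any specific vendor’s system.

```python
# Illustrative sketch only: a naive keyword-match score of the kind an ATS
# filter approximates. Names, keywords, and the 90% cutoff are hypothetical
# assumptions, not any real vendor's logic.

def keyword_match_score(resume_text: str, job_keywords: list[str]) -> float:
    """Return the fraction of required keywords found in the resume text."""
    text = resume_text.lower()
    matched = sum(1 for kw in job_keywords if kw.lower() in text)
    return matched / len(job_keywords) if job_keywords else 0.0

job_keywords = ["python", "sql", "stakeholder management", "tableau"]
resume = "Senior analyst with eight years of Python, SQL and Tableau experience."
score = keyword_match_score(resume, job_keywords)
print(f"Keyword match: {score:.0%}")  # 75% -- below a 90% cutoff, this CV may never reach a recruiter
```

Crude matching of this sort is exactly why experienced candidates can be filtered out over phrasing alone.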

Faced with such odds, many workers fear the future. Are we destined to be replaced by algorithms?

A Cure

The dose makes the poison, and the same applies to AI. Workforce reductions sting, but AI also offers the most accessible tools for career reinvention.

First, not every sector is equally vulnerable. The demand for healthcare workers, teachers, engineers, and skilled trades remains high. AI is not sweeping across every industry – it is targeting repetitive, data-heavy roles. Meanwhile, the human touch in caregiving, creative direction, and leadership remains irreplaceable.

Second, AI can empower displaced workers. Consider software engineers, who appear to be among the hardest hit. Many of those layoffs reflect overhiring during the pandemic’s tech boom, not just automation. To stay competitive, engineers are now expected to master AI copilots such as GitHub Copilot, Google’s Gemini Code Assist, Amazon Q, Cursor, and Augment. These tools don’t replace engineers; they extend their reach, enabling faster, cleaner code.

Ironically, the rise of generative AI may have saved the tech industry from an even deeper “venture winter” after the pandemic. By enabling new products and efficiencies, AI has injected fresh momentum into a sector that risked stagnation.

AI can also serve as a powerful job-search assistant, one that keeps the search itself from turning into a full-time job. We have seen its potential for creating ATS-optimized resumes tailored to each vacancy, as well as for drafting time-consuming cover letters. AI helps job seekers automate the application process, turning it into a transparent, clearly organized pipeline.

AI is emerging as a personal career coach too. Workers are using prompts such as, “Act as an HR recruiter. Critique my resume for a mid-level marketing role,” or “Generate behavioral interview questions based on this job description.” These interactions are becoming routine.

With only a browser and curiosity, job seekers can:

  • Practice interview questions with AI simulations.
  • Analyze job postings for skill gaps.
  • Receive personalized job recommendations.
  • Translate applications into multiple languages.
  • Benchmark salaries in real time.
  • Reframe past work into measurable achievements.

AI can make career development as accessible as streaming a video tutorial. What once required expensive degrees, professional coaching, or insider networks can now be done with a few clicks.

Of course, it’s no silver bullet. Workers must still bring creativity, resilience, and adaptability. But those who learn to harness AI rather than fear it will find themselves ahead of the curve.

Conclusion

So, is AI a curse or a cure for the labor market? The truth is – it is both. For those clinging to routine, AI will continue to feel like a relentless competitor. But for those willing to adapt, it can be the most powerful ally in career reinvention we’ve ever seen.

AI is simply the latest chapter, albeit a fast-moving one. The winners will not be those who fight automation, but those who learn to work with it.

I believe that a curse can become a cure if we choose to use it wisely.

About the Author

Alex Chepovoi is the CEO and co-founder of Global Work AI, a platform that helps people find remote jobs and side gigs faster with AI agents. He is also a serial tech entrepreneur behind five companies, eight products and a successful exit; three of his previous products were in the HR tech sector. He is passionate about startups, remote work, and innovation.





AI-Driven Workforce Intelligence Is The Future Of Customer Experience

Press Release – Calabrio


Customer experience (CX) is now a competitive battleground. In sectors from banking to healthcare, customers expect fast, seamless, and personalised support—often across multiple channels at once. But for the agents tasked with delivering this service, the job has never been harder. Staffing shortages, unpredictable demand, and the complexity of digital interactions mean burnout and turnover are at an all-time high. A new category of workforce technology aims to shift the balance.

Workforce Intelligence, launched this week by Calabrio at its C3 event, is designed to bridge the gap between modern customer expectations and outdated workforce systems. By embedding artificial intelligence at the core, the solution delivers real-time insights and automated support that make life easier for both managers and frontline staff.

Unlike traditional workforce management (WFM) systems, which were built for an earlier era of call-centre operations, Workforce Intelligence continuously adapts to changing conditions. Forecasting, scheduling, and intraday management become smarter and more accurate, reducing the manual tasks that typically bog down teams. The outcome: faster decisions, fewer errors, and more satisfied customers.

For agents, the benefits are tangible. Agent Assist, the platform’s Gen-AI scheduling tool, allows employees to use plain language to check rosters, swap shifts, or request time off. This empowerment fosters greater engagement and flexibility, both of which are critical in an industry where attrition remains a pressing challenge. By humanising the technology, the platform helps organisations not just retain staff but also elevate the quality of service.

The strategic value is equally clear for business leaders. In an era of economic pressure, the ability to improve forecasting accuracy and reduce operational costs is a game changer. As CTO Joel Martins noted: “We pioneered self-scheduling, multi-skill forecasting, and cloud-native WFM—now we’re leading again. Workforce Intelligence gives leaders the agility, cost savings, and real-time visibility they need to outpace change.”

This shift also aligns with broader trends in enterprise technology. Across industries, AI is moving from experimental pilots to mission-critical deployments. By positioning workforce management as a proactive intelligence hub rather than a back-office function, solutions like Workforce Intelligence demonstrate how AI can generate measurable business outcomes—from higher customer satisfaction to improved operational efficiency.

As companies continue to face pressure from both customers and shareholders, the ability to turn every interaction into a strategic advantage will become central. Workforce Intelligence is more than just another tech upgrade—it signals a new chapter in how businesses think about customer service: not as a cost centre, but as a driver of growth and loyalty.

Content Sourced from scoop.co.nz




Agentic AI Transforms Business but Poses Major Security Risks

The Rise of Agentic AI and Emerging Threats

In the rapidly evolving world of artificial intelligence, a new breed of technology known as agentic AI is poised to transform how businesses operate, but it also introduces profound security challenges that chief information security officers (CISOs) are scrambling to address. These autonomous systems, capable of making decisions and executing tasks without constant human oversight, are being integrated into enterprise environments at an unprecedented pace. However, as highlighted in a recent article from Fast Company, many CISOs are ill-prepared for the risks, including potential misuse by malicious actors who could turn these agents into tools for cybercrime.

The allure of agentic AI lies in its ability to handle complex workflows, from automating supply chain management to enhancing customer service interactions. Yet, this autonomy comes with vulnerabilities. Security experts warn that without robust safeguards, these agents could be hijacked, leading to data breaches or even coordinated attacks on critical infrastructure. For instance, if an AI agent with access to sensitive financial data is compromised, the fallout could be catastrophic, echoing concerns raised in broader industry discussions.

CISOs’ Readiness Gaps Exposed

Recent surveys and reports underscore a troubling disconnect between AI adoption and security preparedness. According to the Unisys Cloud Insights Report 2025 published by Help Net Security, many organizations are rushing into AI without aligning their innovation strategies with strong defensive measures, leaving significant gaps in cloud AI security. CISOs are urged to prioritize risk assessments before deployment, but the pressure to innovate often overshadows these precautions.

This readiness shortfall is further compounded by human factors, such as burnout and skill shortages among security teams. The Proofpoint 2025 CISO Report from Intelligent CISO reveals that 58% of UK CISOs have experienced burnout in the past year, while 60% identify people as their greatest risk despite beliefs that employees understand best practices. This human element exacerbates vulnerabilities, as overworked teams struggle to monitor AI agents effectively.

Autonomous Systems as Risk Multipliers

Agentic AI’s interconnected nature amplifies these dangers, turning what might be isolated incidents into widespread threats. As detailed in an analysis by CSO Online, these systems are adaptable and autonomous, making traditional security models insufficient. They can interact with multiple APIs and data sources, creating new attack vectors that cybercriminals exploit through techniques like prompt injection or data poisoning.

Moreover, the potential for AI agents to “break bad” – as termed in the Fast Company piece – involves scenarios where agents are manipulated to perform unauthorized actions, such as leaking proprietary information or disrupting operations. Posts on X from cybersecurity influencers like Dr. Khulood Almani highlight predictions for 2025, including AI-powered attacks and quantum threats that could further complicate agent security, emphasizing the need for proactive measures.

Strategies for Mitigation and Future Preparedness

To counter these risks, industry leaders are advocating for a multi-layered approach. The Help Net Security article on AI agents suggests that CISOs focus on securing AI-driven systems through enhanced monitoring and ethical AI frameworks, potentially yielding a strong return on investment by preventing costly breaches. This includes implementing zero-trust architectures tailored to AI environments and investing in AI-specific threat detection tools.

Collaboration between security teams and AI developers is also crucial. Insights from SC Media indicate that by 2025, agentic AI will lead in cybersecurity operations, automating threat response and reducing human error. However, this shift demands upskilling programs to address burnout, as noted in the Proofpoint report, ensuring teams can harness AI’s benefits without falling victim to its pitfalls.

The Broader Implications for Enterprise Security

The integration of agentic AI is not just a technological upgrade but a paradigm shift that requires rethinking organizational structures. A Medium post by Shailendra Kumar on Agentic AI in Cybersecurity 2025 describes how these agents revolutionize threat detection, enabling real-time responses that outpace traditional methods. Yet, the dual-use nature of AI – as both defender and potential adversary – means CISOs must balance innovation with vigilance.

Economic pressures add another layer of complexity. With ransomware and AI-driven attacks expected to escalate, as per a Help Net Security piece on 2025 cyber risk trends, organizations face higher costs from disruptions. CISOs in regions like the UAE, according to another Intelligent CISO report, are prioritizing AI governance amid a 77% rate of material data loss incidents, highlighting the global urgency.

Navigating the Agentic AI Frontier

As we move deeper into 2025, the conversation around agentic AI’s security risks is gaining momentum on platforms like X, where users such as Konstantine Buhler discuss the need for hundreds of security agents to protect against exponential AI interactions. This sentiment aligns with warnings from Signal President Meredith Whittaker about the dangers of granting AI root access for advanced functionalities.

Ultimately, for CISOs to stay ahead, fostering a culture of continuous learning and cross-functional collaboration will be key. By drawing on insights from reports like the CyberArk blog on unexpected challenges, leaders can anticipate issues such as identity management in AI ecosystems. The path forward demands not just technological solutions but a holistic strategy that prepares enterprises for an AI-dominated future, ensuring that the promise of agentic systems doesn’t unravel into a security nightmare.




AI companies want copyright exemptions – for NZ creatives, the market is their best protection


Right now in the United States, there are dozens of pending lawsuits involving copyright claims against artificial intelligence (AI) platforms. The judge in one case summed up what’s on the line when he said:

These products are expected to generate billions, even trillions, of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it.

On each side, the stakes seem existential. Authors’ livelihoods are at risk. Copyright-based industries – publishing, music, film, photography, design, television, software, computer games – face obliteration, as generative AI platforms scrape, copy and analyse massive amounts of copyright-protected content.

They often do this without paying for it, generating substitutes for material that would otherwise be made by human creators. On the other side, some in the tech sector say copyright is holding up the development of AI models and products.

And the battle lines are getting closer to home. In August, the Australian Productivity Commission suggested in an interim report, Harnessing Data and Digital Technology, that Australia’s copyright law could add a “fair dealing” exception to cover text and data mining.

“Fair dealing” is a defence against copyright infringement. It applies to specific purposes, such as quotation for news reporting, criticism and reviews. (Australian law also includes parody and satire as fair dealing, which isn’t currently the case in New Zealand).

While it’s not obvious a court would agree with the commission’s idea, such a fair dealing provision could allow AI businesses to use copyright-protected material without paying a cent.

Understandably, the Australian creative sector quickly objected, and Arts Minister Tony Burke said there were no plans to weaken existing copyright law.

On the other hand, some believe gutting the rights of copyright owners is needed for national tech sectors to compete in the rapidly developing world of AI. A few countries, including Japan and Singapore, have amended their copyright laws to be more “AI friendly”, with the hope of attracting new AI business.

European laws also permit some forms of text and data mining. In the US, AI firms are trying to persuade courts that AI training doesn’t infringe copyright, but is a “fair use”.

An ethical approach

So far, the New Zealand government has not indicated it wants similar changes to copyright laws. A July 2025 paper from the Ministry of Business, Innovation and Employment (MBIE), Responsible AI Guidance for Businesses, said:

Fairly attributing and compensating creators and authors of copyright works can support continued creation, sharing, and availability of new works to support ongoing training and refinement of AI models and systems.

MBIE also has guidance on how to “ethically source datasets, including copyright works”, and about “respecting te reo Māori (Māori language), Māori imagery, tikanga, and other mātauranga (knowledge) and Māori data”.

An ethical approach has a lot going for it. When a court finds using copyright-protected material without compensation to be “fair”, copyright owners can neither object nor get paid.

If fair dealing applied to AI models, copyright owners would basically become unwilling donors of AI firms’ seed capital. They wouldn’t even get a tax deduction!

The ethical approach is also market friendly because it works through licensing. In much the same way a shop or bar pays a fee to play background music, AI licences would help copyright owners earn an income. In turn, that income supports more creativity.

Building a licensing market

There is already a growing licensing market for text and data mining. Around the world, creative industries have been designing innovative licensing products for AI training models. Similar developments are under way in New Zealand.

Licensing offers hope that the economic benefits of AI technologies can be shared more equitably. In New Zealand, it can also help ensure appropriate use of Māori content in ways that uncontrolled data scraping and copying cannot.

But getting new licensing markets for creative material up and running takes time, effort and investment, and this is especially true for content used by AI firms.

In the case of print material, for example, licences from authors and publishers would be needed. Next, different licences would be designed for different kinds of AI firms. The income earned by authors and publishers has to be proportionate to the use.

Accountability, monitoring and transparency systems will all need to be designed. None of this is cheap or easy, but it is happening. And having something to sell is the best incentive for investing in designing functioning markets.

But having nothing to sell – which is effectively what happens if AI use becomes fair dealing under copyright law – destroys the incentive to invest in market-based solutions to AI’s opportunities and challenges.


