
Tools & Platforms

Common Pitfalls That Keep Projects From Taking Off

The promise of AI in the world of tax is compelling: streamlined compliance, predictive insights, and newfound efficiency. Yet for all the enthusiasm, many tax departments find their ambitious AI projects grounded before they ever reach cruising altitude. The reasons for this often have less to do with the technology itself and more to do with the realities of data, people, and processes.

Starting Smart, Not Big

The journey from understanding AI concepts to actually implementing them is where the first stumbles often occur. A common misstep is starting too big. Tax leaders sometimes try to redesign entire processes at once, hoping to deliver an end-to-end transformation right out of the gate. The result is usually the opposite: projects drag on, resources are stretched thin, and momentum is lost.

Another common trap is picking the wrong first project: jumping straight into high-stakes initiatives that require heavy integrations while ignoring smaller wins like data extraction. The safer bet is a narrow, low-risk pilot, such as automating a repetitive spreadsheet workflow. It’s the kind of pilot you can complete in a month or two, and if it doesn’t work out, little is lost: you simply fall back on the manual process.
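
To make that concrete, here is a minimal sketch of such a pilot in Python, consolidating monthly VAT workbooks into a single draft summary. The folder name, column headers, and file layout are illustrative assumptions, not a prescription.

```python
# Minimal sketch of a low-risk spreadsheet pilot: consolidate monthly
# workbooks into one draft summary that a human still reviews.
# Assumes pandas and openpyxl are installed; all names are illustrative.
from pathlib import Path

import pandas as pd

frames = []
for path in Path("monthly_returns").glob("*.xlsx"):
    df = pd.read_excel(path)
    df["source_file"] = path.name  # keep an audit trail back to each workbook
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
summary = combined.groupby("jurisdiction")["vat_due"].sum()
# A draft for human review, not a filing: falling back to the manual
# process costs nothing if the pilot doesn't pan out.
summary.to_excel("vat_summary_draft.xlsx")
```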

There’s also a tendency to focus on the tool instead of the outcome. AI gets a lot of attention, and some teams feel pressure to use it even when a simpler automation approach would do the job. The label “AI-powered” shouldn’t matter as much as whether the solution solves the problem effectively.

In short, the common mistakes are clear: trying to boil the ocean, chasing perfection too soon, or letting the hype around AI dictate decisions. The smarter path is to start small and scale thoughtfully from there.

Too Many Projects, Not Enough Progress

With all the buzz around generative AI, many tax teams fall into the trap of running pilot after pilot. For example, a tax team might launch pilots for AI-driven invoice scanning, chatbot support for tax queries, and predictive analytics for audit risks. Each pilot sounds promising, but with limited staff and budget, none of them gets the attention needed to succeed. Six months later, the team has three unfinished projects, no live solution, and a frustrated leadership asking why AI hasn’t delivered. This flurry of activity creates the illusion of progress but results in a trail of half-finished experiments.

This “pilot fatigue” often comes from top-down pressure to be seen as innovating with AI. Leaders want momentum, but without focus, the energy gets diluted. Instead of proving value, the department ends up with scattered efforts and no clear win to point to.

The way forward is prioritization. Not every idea deserves a pilot, and not every pilot should move ahead at the same time. The most successful teams pick a few feasible projects, give them proper resources, and see them through beyond the prototype stage. In the end, it’s better to have one working solution in production than a stack of unfinished experiments.

From Prototype to Production

A common stumbling block for tax teams is underestimating the leap from prototype to production. Some estimates place the AI project failure rate as high as 80%, almost double the rate of corporate IT project failures. Building a proof of concept in a few weeks is one thing; turning it into a tool people rely on every day is something else entirely. This is where many AI projects stall, and why so many never make it beyond the pilot stage.

The problem usually isn’t the technology itself. It’s the messy reality of moving from a controlled demo into a live environment. A prototype might run smoothly on a clean sample dataset, but in production the AI has to handle the company’s actual data, which may be incomplete, inconsistent, or scattered across systems. Cleaning, organizing, and integrating that information is often most of the work, yet it’s rarely factored into early pilots.
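
A few quick profiling checks, run before the pilot rather than after it, usually reveal how far production data is from the demo sample. This sketch assumes a hypothetical invoices_export.csv; the column names are placeholders.

```python
# Sketch: profile real data before trusting a prototype tested on a
# clean sample. File and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("invoices_export.csv")

report = {
    "rows": len(df),
    "duplicate_invoice_ids": int(df["invoice_id"].duplicated().sum()),
    "missing_amounts": int(df["amount"].isna().sum()),
    # Rows whose dates can't be parsed at all are a common surprise.
    "unparseable_dates": int(
        pd.to_datetime(df["invoice_date"], errors="coerce").isna().sum()
    ),
}
print(report)  # the size of this report is the size of the cleanup backlog
```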

Integration poses another challenge. A model that runs neatly in a Jupyter notebook isn’t enough. To be production-ready, it must plug into existing workflows, interact with legacy systems, and be supported with monitoring and error handling. That typically requires a broader team of engineers, operations specialists, even designers. These are roles many tax departments don’t have readily available. Without them, promising pilots get stuck in limbo.
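
The difference often shows up in code as plumbing. Below is a minimal sketch of what typically has to wrap a model call in production, with predict() standing in as a placeholder for whatever model is actually deployed.

```python
# Sketch of the production plumbing a notebook prototype usually lacks:
# input validation, logging, and failing safely into a manual queue.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tax-ai")

def predict(record: dict) -> str:
    return "low_risk"  # placeholder for the real model call

def score_record(record: dict) -> str | None:
    """Validate, score, and degrade gracefully instead of crashing a batch."""
    required = {"invoice_id", "amount", "country"}
    missing = required - record.keys()
    if missing:
        log.warning("Skipping %s: missing fields %s",
                    record.get("invoice_id"), missing)
        return None  # route to the manual queue
    try:
        result = predict(record)
        log.info("Scored %s -> %s", record["invoice_id"], result)
        return result
    except Exception:
        log.exception("Model failure on %s; falling back to manual handling",
                      record["invoice_id"])
        return None
```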

The lesson is simple: tax teams need to plan from day one for data readiness, system integration, and long-term ownership. Without that preparation, pilots risk becoming one-off experiments that never make it past the demo stage.

Building on a Shaky Data Foundation

AI projects succeed or fail on the quality of their data. For tax teams, that’s often the first and toughest hurdle. Information is spread across different systems, stored in inconsistent formats, and sometimes incomplete. In many cases, key details are still buried in PDFs or email threads instead of structured databases. When an AI model has to work with that kind of patchy input, the results are bound to be flawed.

The unglamorous but essential part of AI is cleaning data and building reliable pipelines to feed information into the system. It’s rarely the exciting part, but it’s the foundation: without it, no model will perform consistently in production. The challenge is that, amid all the AI hype, executives are often more willing to fund the “flashy” AI projects than the “boring” data cleanup work that actually makes them possible.
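
As a rough illustration of that “boring” pipeline work, consider a small, repeatable cleaning step that quarantines untrustworthy rows for human follow-up instead of silently dropping them. The column names are assumptions for the sketch.

```python
# Sketch of a repeatable cleaning step. Bad rows are quarantined for
# human follow-up rather than silently discarded.
import pandas as pd

def clean_invoices(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset="invoice_id").copy()
    df["invoice_date"] = pd.to_datetime(df["invoice_date"], errors="coerce")
    df["amount"] = pd.to_numeric(
        df["amount"].astype(str).str.replace(",", "", regex=False),
        errors="coerce",
    )
    df["country"] = df["country"].str.strip().str.upper()

    # Anything the model can't trust goes to a quarantine file, not the bin.
    bad = df["invoice_date"].isna() | df["amount"].isna()
    df[bad].to_csv("quarantine.csv", index=False)
    return df[~bad]
```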

The takeaway is simple: treat data readiness as a core step in your AI journey, not an afterthought. A few weeks spent getting the data right can save months of wasted effort later.

Automating a Broken Process

A common pitfall for tax teams is dropping AI into processes that are already complex or inefficient. Automating a clunky workflow doesn’t fix its problems; it just makes them harder to manage.

AI adoption isn’t about layering a shiny new tool on top of old habits. It’s about rethinking the process as a whole. If AI takes over Task A, then Tasks B and C may need to change too. Reviewing the process upfront makes it easier to spot redundancies and cut steps that no longer add value.

The takeaway is simple: don’t just automate what you already do. Use AI as a chance to simplify and modernize. Otherwise, you risk hard-wiring inefficiency into the future of your operations.

The Trap of 100% Accuracy

Tax professionals are trained to value precision, so it’s no surprise many are reluctant to trust an AI tool unless it delivers flawless answers. The problem is, that bar is unrealistic with generative AI. These systems don’t “know” facts the way a database does. They predict words that are statistically likely to follow each other, which makes them great at generating fluent text but prone to confident-sounding mistakes, often called hallucinations.

Tax leaders need to understand this isn’t a bug that will soon be patched. It’s the nature of how these models work today. That doesn’t mean they’re unusable, but it does mean the goal shouldn’t be perfection. Instead, the focus should be on managing the risks and setting up safeguards that make AI outputs reliable enough for practical use.

On the technical side, tools like retrieval-augmented generation (RAG) can help by grounding AI answers in trusted documents instead of letting the model make things up. On the process side, though, there’s no way around human review. If the output involves regulations, case law, or financial figures, a qualified professional still needs to check it.
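
As a rough sketch of the grounding idea behind RAG, the snippet below retrieves the most relevant trusted passages with simple TF-IDF similarity and builds a prompt that constrains the model to them. The documents are made-up examples, real systems typically use embedding search, and the final generation step is left out: the prompt would be passed to whatever LLM client a team actually uses.

```python
# Toy RAG sketch: retrieve trusted passages, then constrain the model
# to answer only from them. Documents are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOCUMENTS = [
    "Policy A: VAT returns are filed quarterly via the tax authority portal.",
    "Policy B: invoices above the threshold require supporting contracts.",
    "Policy C: transfer pricing files follow the group's OECD-aligned template.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank trusted documents by cosine similarity to the query."""
    tfidf = TfidfVectorizer().fit_transform(DOCUMENTS + [query])
    scores = cosine_similarity(tfidf[-1], tfidf[:-1]).flatten()
    return [DOCUMENTS[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    """Ground the answer: the model may only use the retrieved context."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not there, "
        f"say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("When are VAT returns filed?"))
# The resulting prompt would then go to the team's LLM of choice.
```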

The real shift is in how we think about AI. Waiting for a system that’s 100% accurate isn’t realistic. The smarter approach is to design workflows where AI handles the heavy lifting and humans handle the judgment calls. Set it up that way, and AI doesn’t have to be perfect, just reliable enough to speed things up without taking control out of human hands.
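
In practice, that division of labor can be as simple as a routing rule: auto-accept only high-confidence, low-stakes outputs and queue everything else for a human. The threshold and the always-review fields below are illustrative assumptions, not recommendations.

```python
# Sketch of a human-in-the-loop gate: AI does the heavy lifting, humans
# keep the judgment calls. Threshold and field names are illustrative.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90
ALWAYS_REVIEW = {"tax_due", "penalty_amount"}  # never auto-accepted

@dataclass
class Extraction:
    field: str
    value: str
    confidence: float  # as reported by the extraction model, 0.0-1.0

def route(item: Extraction) -> str:
    """Auto-accept only high-confidence, low-stakes fields."""
    if item.field in ALWAYS_REVIEW or item.confidence < REVIEW_THRESHOLD:
        return "human_review"
    return "auto_accept"

for item in [
    Extraction("invoice_total", "1,250.00", 0.97),    # auto_accept
    Extraction("tax_due", "250.00", 0.99),            # review: high stakes
    Extraction("supplier_vat_id", "GB??4521", 0.55),  # review: low confidence
]:
    print(f"{item.field}: {route(item)}")
```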

The Human Side of AI

For all the talk about data and algorithms, one of the biggest obstacles to AI adoption in tax departments may be people. Employees often view new technology as a threat, either to their jobs or to the way they’ve always worked. Fear of being replaced, or simple distrust in an unfamiliar tool, can stall an AI initiative before it even begins.

AI projects are often pitched as a way to save time and reclaim capacity by shifting people from repetitive, low-value tasks to higher-impact “strategic” work. In theory, that sounds ideal. But here’s the reality: not everyone naturally transitions from manual tasks to strategic ones. Can every compliance specialist suddenly become an advisor? Does the company actually need five more people in strategic roles instead of five handling tax filings?

When a department frees up dozens of hours of compliance work, there has to be a clear plan for how that capacity will be redeployed. Without one, employees are more likely to see AI as a threat than as a tool that supports them. For adoption to succeed, teams need to believe the technology will make their work more valuable and not make their roles redundant.

Pragmatism Over Hype

The promise of AI in tax is real but so are the pitfalls. Projects rarely stumble because the technology is broken. They stumble because of human, process, and data challenges that get overlooked.

Starting too big. Spreading resources across too many pilots. Ignoring data quality. Clinging to inefficient processes. Chasing perfection. Failing to bring people along. Any one of these can stall progress.

The way forward isn’t about shiny labels but about small wins that build trust and momentum. And it’s about shifting expectations. For tax departments, success won’t come from doing everything at once. It will come from doing the right things, in the right order, with the right support.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of any organizations with which the author is affiliated.




Can the Middle East fight unauthorized AI-generated content with trustworthy tech? – Fast Company Middle East

Since its emergence a few years ago, generative AI has been at the center of controversy, from environmental concerns to deepfakes to the non-consensual use of data to train models. Among the most troubling issues have been deepfakes and voice cloning, which have affected everyone from celebrities to government officials.

In May, a deepfake video of Qatari Emir Sheikh Tamim bin Hamad Al Thani went viral. It appeared to show him criticizing US President Donald Trump after his Middle East tour and claiming he regretted inviting him. Keyframes from the clip were later traced back to a CBS 60 Minutes interview featuring the Emir in the same setting.

Most recently, YouTube drew backlash for another form of non-consensual AI use after revealing it had deployed AI-powered tools to “unblur, denoise, and improve clarity” on some uploaded content. The decision was made without the knowledge or consent of creators, and viewers were also unaware that the platform had intervened in the material.

In February, Microsoft disclosed that two US and four foreign developers had illegally accessed its generative AI services, reconfigured them to produce harmful content such as celebrity deepfakes, and resold the tools. According to a company blog post tied to its updated civil complaint, users created non-consensual intimate images and explicit material using modified versions of Azure OpenAI services. Microsoft also stated it deliberately excluded synthetic imagery and prompts from its filings to avoid further circulation of harmful content.

THE RISE OF FAKE CONTENT

Matin Jouzdani, Partner, Data Analytics & AI at KPMG Lower Gulf, says more and more content is being produced through AI, whether it’s commentary, images, or clips. “While fake or unauthorized content is nothing new, I’d say it’s gone to a new level. When browsing content, we increasingly ask, ‘Is that AI-generated?’ A concept that just a few years ago barely existed.”

Moussa Beidas, Partner and ideation lead at PwC Middle East, says the ease with which deepfakes can be created has become a major concern.

“A few years ago, a convincing deepfake required specialist skills and powerful hardware. Today, anyone with a phone can download an app and produce synthetic voices or images in minutes,” Beidas says. “That accessibility means the issue is far more visible, and it is touching not just public figures but ordinary people and businesses as well.”

Though regulatory frameworks are evolving, they still struggle to catch up to the speed of technical advances in the field. “The Middle East region faces the challenge of balancing technological innovation with ethical standards, mirroring a global issue where we see fraud attempts leveraging deepfakes increasing by a whopping 2137% across three years,” says Eliza Lozan, Partner, Privacy Governance & Compliance Leader at Deloitte Middle East.

Fabricated videos often lure users into clicking on malicious links that scam them out of money or install malware for broader system control, adds Lozan.

These challenges demand two key responses: organizations must adopt trustworthy AI frameworks, and individuals must be trained to detect deepfakes—an area where public awareness remains limited.

“To protect the wider public interest, Digital Ethics and the Fair Use of AI have been introduced and are now gaining serious traction among decision-makers in corporate and regulatory spaces,” Lozan says.

DEFINING CONSENT

Drawing on established regulatory frameworks, Lozan explains that “consent” is generally defined as obtaining explicit permission from individuals before collecting their data, with the purpose of the collection clearly stated, such as recording user commands to train cloud-based virtual assistants.

“The concept of proper ‘consent’ management can only be achieved on the back of a strong privacy culture within an organization and is contingent on privacy being baked into the system management lifecycle, as well as upskilling talent on the ethical use of AI,” she adds.

Before seeking consent, Lozan notes, individuals must be fully informed about why their data is being collected, who it will be shared with, how long it will be stored, any potential biases in the AI model, and the risks associated with its use.

Matt Cooke, cybersecurity strategist for EMEA at Proofpoint, echoes this: “We are all individuals, and own our appearance, personality, and voice. If someone will use those attributes to train AI to reproduce our likeness, we should always be asked for consent.”

There’s a gap between technology and regulation, and the pace of technological advancement has seemingly outstripped lawmakers’ ability to keep up. 

While many ethically minded companies have implemented opt-in measures, Cooke says that “cybercriminals don’t operate with those levels of ethics and so we have to assume that our likeness will be used by criminals, perhaps with the intention of exploiting the trust of those within our relationship network.”

Beidas simplifies the concept further, noting that consent boils down to three essentials: people need to know what is happening, have a genuine choice, and be able to change their mind.

“If someone’s face, voice, or data is being used, the process should be clear and straightforward. That means plain language rather than technical jargon, and an easy way for individuals to opt out if they no longer feel comfortable,” he says.

TECHNOLOGY SAFEGUARDS

Still, the idea of establishing clear consent guidelines often seems far-fetched. While some leeway is given due to the technology’s relative newness, it is difficult to imagine systems capable of effectively moderating the sheer volume of content produced daily through generative AI, a reality echoed by industry leaders.

In May, speaking at an event promoting his new book, former UK deputy prime minister and ex-Meta executive Nick Clegg said that a push for artist consent would “basically kill” the AI industry overnight. He acknowledged that while the creative community should have the right to opt out of having their work used to train AI models, it is not feasible to obtain consent beforehand.

Michael Mosaad, Partner, Enterprise Security at Deloitte Middle East, highlights some practices being adopted for generative AI models. 

“Although not a mandatory requirement, some Gen AI models now add watermarks to their generated text as best practice,” he explains.

“This means that, to prevent misuse, organizations are embedding recognizable signals into AI-generated content to make it traceable and protected without compromising its quality.”

Mosaad adds that organizations also voluntarily leverage AI to fight AI, using tools to prevent the misuse of generated content by limiting copying and inserting metadata into text. 

Expanding on the range of tools being developed, Beidas says, “Some systems now attach content credentials, which act like a digital receipt showing when and where something was created. Others use invisible watermarks hidden in pixels or audio waves, detectable even after edits.”  

“Platforms are also introducing their own labels for AI-generated material. None of these are perfect on their own, but layered together, they help people better judge what they see.”
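
As a toy illustration of the invisible-watermark idea, the snippet below hides a bit pattern in zero-width Unicode characters. Production schemes are statistical and far more robust, so treat this purely as a demonstration of the embed-and-detect principle.

```python
# Toy invisible text watermark using zero-width characters. Real
# watermarking survives paraphrasing far better; this only shows
# the basic embed/detect idea.
ZWSP, ZWNJ = "\u200b", "\u200c"  # stand in for bits 0 and 1

def embed(text: str, bits: str) -> str:
    """Hide a bit pattern after the first word; invisible when rendered."""
    hidden = "".join(ZWNJ if b == "1" else ZWSP for b in bits)
    head, sep, tail = text.partition(" ")
    return head + hidden + sep + tail

def detect(text: str) -> str:
    """Recover any hidden bits; an empty string means no mark was found."""
    return "".join("1" if ch == ZWNJ else "0"
                   for ch in text if ch in (ZWSP, ZWNJ))

marked = embed("This paragraph was machine generated.", "1011")
print(detect(marked))  # -> "1011"; survives copy-paste, not retyping
```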

GOVERNMENT AND PLATFORM REGULATIONS

Like technical safeguards, government and platform regulation is still up in the air. The responsibility remains heavy, however, as individuals look to both to address online consent violations.

While platform policies are evolving, the challenge is speed. “Synthetic content can spread across different apps in seconds, while review processes often take much longer,” says Beidas. “The real opportunity lies in collaboration—governments, platforms, and the private sector working together on common standards such as watermarking and provenance, as well as faster response mechanisms. That is how we begin to close the gap between creation and enforcement.”

However, change is underway in countries such as Qatar, Saudi Arabia, and the UAE, which are adopting AI regulations or guidelines, following the example of the European Union’s AI Act.

Since they are still in their early stages, Lozan says, “a gap persists in practically supporting organizations to understand and implement effective frameworks for identifying and managing risks when developing and deploying technologies like AI.”

According to Jouzdani, since the GCC already has a strong legal foundation protecting citizens from slander and discrimination, the same principles could be applied in AI-related cases. 

“Regulators and lawmakers could take this a step further by ensuring that consent remains relevant not only to the initial use of content but also to subsequent uses, particularly on platforms beyond immediate jurisdiction,” he says, adding that online enforcement needs strengthening, especially when users remain anonymous or hidden.


Exploring AI Agents’ Implementation in Enterprise and Financial Scenarios

Recently, the “Opinions on Deeply Implementing the ‘Artificial Intelligence +’ Initiative” issued by the State Council stated that by 2027 the penetration rate of new-generation intelligent terminals and AI agents should exceed 70%, marking the start of an accelerated implementation phase for the “AI agent economy.” Against this policy background, the 3rd China AI Agent Annual Conference will be held in Shanghai on November 21. Themed “Initiating a New Intelligent Journey” and hosted by MetaverseFamily, the conference focuses on two core tracks, enterprise-level and financial AI agents, bringing together forces from upstream and downstream of the industrial chain to address the pain points of implementing agent technologies and to advance the “Artificial Intelligence +” policy.

Two Core Segments Combine to Create a “Benchmark Platform” for Industry Exchange

As an important industry event following the implementation of the “Artificial Intelligence +” policy, this conference takes “practicality” and “precision” as its core. Through a format of tone-setting at the main forum, problem-solving at sub-forums, and empowerment through special sessions, it offers participants well-rounded opportunities to connect.

The Main Forum Focuses on AI Agent Trends and Sets the Technological Direction: In the morning, industry experts will discuss the latest developments and future opportunities of AI agents and analyze three technological paths in depth: multimodal fusion, tool enhancement, and memory upgrade. In line with the requirements of the “Opinions,” they will also explore how agents can be deeply integrated with six key sectors, giving enterprises direction for their planning.

Sub-forums Target Enterprise-level and Financial AI Agents and Provide Practical Solutions: Two sub-forums in the afternoon precisely address the needs of different fields. The sub-forum on “Enterprise-level AI Agents Driving Marketing and Business Transformation” will break down practical methods for local deployment of agents and database construction, and share cases of cost reduction through business process automation, helping enterprises turn technology into real benefits. The sub-forum on “Financial AI Agents Reshaping the Industry’s Future” focuses on topics such as the technical principles of the 24/7 “risk-control sentinel” and the upgrade of intelligent customer service driven by large models, offering references for financial institutions balancing innovation and compliance.

Strengthen Resource Matching

The conference also features in-depth breakdowns of over 10 benchmark cases, a fireside chat among agent leaders, an annual award ceremony, and over 15 technology demonstration and experience areas. Participants can experience up close the application of agents in scenarios such as contract review and risk-control interception, and connect with over 300 industry professionals.

Supported by High-level Decision-Makers and Covering the Entire Industrial Chain

This conference has attracted the core forces of the AI agent industrial chain, and the participant profile is high-end: according to statistics, 30% of participants are enterprise chairmen or general managers and 25% are management personnel, spanning key departments such as planning, marketing, information technology, and business, which can directly help cooperation materialize.

As the conference organizer, MetaverseFamily is a hub-type media platform in the AI and metaverse fields with rich industry resources and event experience, providing strong support for the conference’s effectiveness. To date, the platform has held over 20 offline AI and metaverse conferences and over 40 online events, reaching over 100,000 industry practitioners and maintaining over 20,000 high-quality private WeChat group members. It has also facilitated over 20 business collaborations, including an investment promotion project in Nanjing Niushoushan, a metaverse business collaboration with Mobile Tmall, and an AI shopping experience upgrade at an outlet mall, effectively connecting resources and fostering cooperation for participants.

In addition to the core topics, the “2025 AI Agent Annual Award Ceremony” at the conference is also worth looking forward to: the winners of heavyweight awards such as the “Top 10 AI Agent Influential Brands of 2025” and the “Model Award for Financial AI Agents” are about to be announced.

Contact Us
Zhi Shu: 159 0191 1431  Alex.gan@ccglobal.com.cn

(Event Registration)




Fidelity touts renewed Wealthscape, but departs from the AI hype with low-key launch; it's 'part of the larger story, not necessarily the story,' company says – RIABiz
