
Tools & Platforms

Can the Middle East fight unauthorized AI-generated content with trustworthy tech? – Fast Company Middle East



Since its emergence a few years ago, generative AI has been at the center of controversy, from environmental concerns to the non-consensual use of data to train models. Among the most troubling issues are deepfakes and voice cloning, which have affected everyone from celebrities to government officials.

In May, a deepfake video of Qatari Emir Sheikh Tamim bin Hamad Al Thani went viral. It appeared to show him criticizing US President Donald Trump after his Middle East tour and claiming he regretted inviting him. Keyframes from the clip were later traced back to a CBS 60 Minutes interview featuring the Emir in the same setting.

Most recently, YouTube drew backlash for another form of non-consensual AI use after revealing it had deployed AI-powered tools to “unblur, denoise, and improve clarity” on some uploaded content. The decision was made without the knowledge or consent of creators, and viewers were also unaware that the platform had intervened in the material.

In February, Microsoft disclosed that two US and four foreign developers had illegally accessed its generative AI services, reconfigured them to produce harmful content such as celebrity deepfakes, and resold the tools. According to a company blog post tied to its updated civil complaint, users created non-consensual intimate images and explicit material using modified versions of Azure OpenAI services. Microsoft also stated it deliberately excluded synthetic imagery and prompts from its filings to avoid further circulation of harmful content.

THE RISE OF FAKE CONTENT

Matin Jouzdani, Partner, Data Analytics & AI at KPMG Lower Gulf, says more and more content is being produced through AI, whether it’s commentary, images, or clips. “While fake or unauthorized content is nothing new, I’d say it’s gone to a new level. When browsing content, we increasingly ask, ‘Is that AI-generated?’ A concept that just a few years ago barely existed.”

Moussa Beidas, Partner and Ideation Lead at PwC Middle East, says the ease with which deepfakes can be created has become a major concern.

“A few years ago, a convincing deepfake required specialist skills and powerful hardware. Today, anyone with a phone can download an app and produce synthetic voices or images in minutes,” Beidas says. “That accessibility means the issue is far more visible, and it is touching not just public figures but ordinary people and businesses as well.”

Though regulatory frameworks are evolving, they still struggle to catch up to the speed of technical advances in the field. “The Middle East region faces the challenge of balancing technological innovation with ethical standards, mirroring a global issue where we see fraud attempts leveraging deepfakes increasing by a whopping 2137% across three years,” says Eliza Lozan, Partner, Privacy Governance & Compliance Leader at Deloitte Middle East.

Fabricated videos often lure users into clicking on malicious links that scam them out of money or install malware for broader system control, adds Lozan.

These challenges demand two key responses: organizations must adopt trustworthy AI frameworks, and individuals must be trained to detect deepfakes—an area where public awareness remains limited.

“To protect the wider public interest, Digital Ethics and the Fair Use of AI have been introduced and are now gaining serious traction among decision-makers in corporate and regulatory spaces,” Lozan says.

DEFINING CONSENT

Drawing on established regulatory frameworks, Lozan explains that “consent” is generally defined as obtaining explicit permission from individuals before collecting their data, with the purpose of the collection clearly stated, such as recording user commands to train cloud-based virtual assistants.

“The concept of proper ‘consent’ management can only be achieved on the back of a strong privacy culture within an organization and is contingent on privacy being baked into the system management lifecycle, as well as upskilling talent on the ethical use of AI,” she adds.

Before seeking consent, Lozan notes, individuals must be fully informed about why their data is being collected, who it will be shared with, how long it will be stored, any potential biases in the AI model, and the risks associated with its use.

Matt Cooke, cybersecurity strategist for EMEA at Proofpoint, echoes this: “We are all individuals, and own our appearance, personality, and voice. If someone will use those attributes to train AI to reproduce our likeness, we should always be asked for consent.”

There’s a gap between technology and regulation: the pace of technological advancement has outstripped lawmakers’ ability to keep up.

While many ethically minded companies have implemented opt-in measures, Cooke says that “cybercriminals don’t operate with those levels of ethics and so we have to assume that our likeness will be used by criminals, perhaps with the intention of exploiting the trust of those within our relationship network.”

Beidas simplifies the concept further, noting that consent boils down to three essentials: people need to know what is happening, have a genuine choice, and be able to change their mind.

“If someone’s face, voice, or data is being used, the process should be clear and straightforward. That means plain language rather than technical jargon, and an easy way for individuals to opt out if they no longer feel comfortable,” he says.

TECHNOLOGY SAFEGUARDS

Still, the idea of establishing clear consent guidelines often seems far-fetched. Even allowing for the technology’s relative newness, it is difficult to imagine systems capable of effectively moderating the sheer volume of content produced daily through generative AI, a reality echoed by industry leaders.

In May, speaking at an event promoting his new book, former UK deputy prime minister and ex-Meta executive Nick Clegg said that a push for artist consent would “basically kill” the AI industry overnight. He acknowledged that while the creative community should have the right to opt out of having their work used to train AI models, it is not feasible to obtain consent beforehand.

Michael Mosaad, Partner, Enterprise Security at Deloitte Middle East, highlights some practices being adopted for generative AI models. 

“Although not a mandatory requirement, some Gen AI models now add watermarks to their generated text as best practice,” he explains.

“This means that, to prevent misuse, organizations are embedding recognizable signals into AI-generated content to make it traceable and protected without compromising its quality.”

Mosaad adds that organizations also voluntarily leverage AI to fight AI, using tools that prevent the misuse of generated content by limiting copying and by inserting metadata into text.
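To make the idea concrete, here is a minimal, illustrative sketch of one way a provenance tag can be hidden inside generated text: zero-width Unicode characters encode the tag so it survives copy-paste without changing the visible wording. This is a toy Python example, not the statistical token-level watermarking schemes model providers typically use, and the function names are invented for the illustration.

# Toy illustration only: hide a short provenance tag in text with zero-width characters.
ZERO = "\u200b"  # zero-width space       -> bit 0
ONE = "\u200c"   # zero-width non-joiner  -> bit 1

def embed_tag(text: str, tag: str) -> str:
    """Append an invisible, bit-encoded tag to the text."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    invisible = "".join(ONE if b == "1" else ZERO for b in bits)
    return text + invisible

def extract_tag(text: str) -> str:
    """Recover the hidden tag, if any zero-width characters are present."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="ignore")

marked = embed_tag("The quarterly results exceeded expectations.", "gen-ai:model-x")
print(marked)               # looks identical to the original sentence
print(extract_tag(marked))  # -> gen-ai:model-x

A scheme this simple is easy to strip, which is why production systems favor watermarks woven into the generation process itself; the sketch only shows the basic idea of traceable metadata riding along with the text.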

Expanding on the range of tools being developed, Beidas says, “Some systems now attach content credentials, which act like a digital receipt showing when and where something was created. Others use invisible watermarks hidden in pixels or audio waves, detectable even after edits.”  

“Platforms are also introducing their own labels for AI-generated material. None of these are perfect on their own, but layered together, they help people better judge what they see.”
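The invisible image watermarks Beidas describes can be pictured with a simple least-significant-bit (LSB) sketch: a short payload is written into the lowest bit of each pixel value, leaving the image visually unchanged. The Python example below, using Pillow and NumPy, is only a conceptual illustration; real provenance systems rely on signed content credentials and far more robust watermarks that, unlike naive LSB embedding, can survive compression and edits. The function names are hypothetical.

# Conceptual LSB watermark sketch (fragile; for illustration only).
import numpy as np
from PIL import Image

def embed_watermark(image_path: str, payload: str, out_path: str) -> None:
    pixels = np.array(Image.open(image_path).convert("RGB"), dtype=np.uint8)
    bits = np.array([int(b) for byte in payload.encode() for b in f"{byte:08b}"], dtype=np.uint8)
    flat = pixels.reshape(-1)
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the lowest bit of each value
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path, format="PNG")  # lossless format

def read_watermark(image_path: str, payload_len: int) -> str:
    flat = np.array(Image.open(image_path).convert("RGB"), dtype=np.uint8).reshape(-1)
    bits = flat[:payload_len * 8] & 1
    data = bytes(int("".join(str(b) for b in bits[i:i + 8]), 2) for i in range(0, bits.size, 8))
    return data.decode(errors="ignore")

# embed_watermark("photo.png", "created-by:model-x", "photo_marked.png")
# print(read_watermark("photo_marked.png", len("created-by:model-x")))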

GOVERNMENT AND PLATFORM REGULATIONS

Like technology safeguards, government and platform regulation is still up in the air. However, the responsibility on both remains heavy, as individuals look to them to address online consent violations.

While platform policies are evolving, the challenge is speed. “Synthetic content can spread across different apps in seconds, while review processes often take much longer,” says Beidas. “The real opportunity lies in collaboration—governments, platforms, and the private sector working together on common standards such as watermarking and provenance, as well as faster response mechanisms. That is how we begin to close the gap between creation and enforcement.”

However, change is underway in countries such as Qatar, Saudi Arabia, and the UAE, which are adopting AI regulations or guidelines, following the example of the European Union’s AI Act.

Since they are still in their early stages, Lozan says, “a gap persists in practically supporting organizations to understand and implement effective frameworks for identifying and managing risks when developing and deploying technologies like AI.”

According to Jouzdani, since the GCC already has a strong legal foundation protecting citizens from slander and discrimination, the same principles could be applied in AI-related cases. 

“Regulators and lawmakers could take this a step further by ensuring that consent remains relevant not only to the initial use of content but also to subsequent uses, particularly on platforms beyond immediate jurisdiction,” he says, adding that online enforcement also needs strengthening, especially when users remain anonymous or hidden.


Tools & Platforms

New study shows IT is primed to lead AI innovation



Businesses are racing to adopt AI. But siloed tools, fragmented efforts, and lack of trust are slowing progress.

A new Forrester study, commissioned by Tines, surveyed over 400 IT leaders across North America and Europe. The study shows that governance and privacy compliance are both the top priorities and the biggest blockers to scaling AI.

The study also found that 88% of IT leaders say AI adoption remains difficult to scale without orchestration. Orchestration connects systems, tools, and teams so AI can run securely, transparently, and efficiently at scale. Without it, AI adoption stays fragmented and organizations struggle to deliver value.

The takeaway is clear: IT is primed to orchestrate AI across the enterprise. But first, teams must overcome blockers in governance, strategic alignment, and trust.

The biggest blockers to scaling AI

When it comes to scaling AI, governance is both a top priority and a top barrier. Forrester’s study found that over half (54%) of IT leaders say ensuring AI complies with privacy, governance, and regulatory standards is the highest priority for the next 12 months. Yet more than a third (38%) cite governance and security concerns as the biggest blockers to scaling AI.

This reflects a growing tension. A compliance-first approach to AI is essential. But if it isn’t effectively embedded into AI initiatives, it can also stall innovation and competitiveness.

AI introduces risks that existing governance processes weren’t built to handle, with many traditional approaches proving inadequate for AI’s real-time demands, speed, and complexity. Gaps in governance expose organizations to liabilities including bias, ethical breaches, shadow AI, and compliance failures that can lead to regulatory penalties and reputational damage.

Beyond security and governance concerns, the other top challenges when scaling AI include lack of budget or executive sponsorship, concerns about ROI, and fragmented ownership. Siloed AI initiatives and disconnected tools also present a barrier, making it difficult to connect systems across departments for greater visibility, control, and effectiveness.

Orchestration is the missing link to alignment, trust, and scale

AI orchestration offers a way forward. It unifies people, processes, technology, and workflows into a connected system that improves efficiency, transparency, and governance, addressing many of the key blockers that stall scaling AI.
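What building governance into an orchestrated workflow might look like in practice can be sketched in a few lines: every model call passes through a single entry point that runs a policy check and writes an audit record before any output is used. The Python below is a hypothetical illustration, not the study’s or any vendor’s implementation; names such as policy_allows and orchestrate are invented for the example.

# Hypothetical sketch: one governed entry point for all AI calls.
import json, time, uuid
from typing import Callable

AUDIT_LOG = "ai_audit.jsonl"

def policy_allows(prompt: str) -> bool:
    # Stand-in for a real policy engine (PII detection, usage rules, etc.).
    banned = ("password", "credit card")
    return not any(term in prompt.lower() for term in banned)

def _audit(record: dict) -> None:
    # Append-only audit trail so every AI call is traceable after the fact.
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def orchestrate(prompt: str, model_call: Callable[[str], str]) -> str:
    record = {"id": str(uuid.uuid4()), "ts": time.time(), "prompt": prompt}
    if not policy_allows(prompt):
        record["outcome"] = "blocked"
        _audit(record)
        raise PermissionError("prompt rejected by governance policy")
    result = model_call(prompt)
    record["outcome"] = "allowed"
    _audit(record)
    return result

# Any model client can sit behind the same governed entry point:
print(orchestrate("Summarize this quarter's incident reports",
                  lambda p: f"[model output for: {p}]"))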

Enabling this type of oversight is a top priority. According to the research, 73% say visibility across AI workflows and systems is critical. To achieve this, nearly half (49%) of organizations are looking for partners that provide end-to-end centralized solutions to overcome siloed workflows and fragmented AI efforts.

The cost of inaction is high. Without orchestration, the study shows, organizations face difficulties like:

  • Ensuring AI practices are ethical and transparent (50%)
  • Security concerns related to data access, compliance issues, inconsistent governance, auditing, and shadow AI (44%)
  • Lack of employee trust in the outcomes generated by AI (40%)

These challenges don’t just slow down your AI initiatives: they risk halting progress on core business goals, damaging brand reputation, and undermining trust.

IT is primed to lead orchestration

Some 86% of respondents believe that IT is uniquely positioned to orchestrate AI across workflows, systems, and teams. But while organizations are increasingly recognizing IT as an enabler of efficiency and innovation, many still underestimate its broader strategic potential.

Today, 40% of respondents say IT’s reactive focus on troubleshooting and uptime is what holds it back from being seen as a driver of business outcomes at the board level. Similarly, 38% believe that other departments frequently or occasionally overlook or underestimate IT’s potential to improve overall organizational efficiency.

With AI orchestration, IT has the opportunity to take a key strategic role that shapes the future of their organization’s success. IT leaders are ready: 38% of survey respondents believe that IT should own and lead AI orchestration, while 28% say it should act as the coordination hub between different business functions.

IT is primed to lead this charge, as it is well-placed to connect strategy, teams, and data. Through AI orchestration, it can facilitate secure, compliant adoption and scaling of AI that meets robust governance requirements.

This won’t just fuel organization-wide efficiency; it will unlock tangible business value, such as enhancing collaboration between business units, accelerating digital transformation, and improving employee productivity, positioning IT as a significant driver of impact.

Key recommendations

To strengthen its strategic role, IT should:

  • Orchestrate AI for visibility and alignment: Lead orchestration to connect tools, improve transparency, and align teams.
  • Embed governance by design: Orchestration provides a framework to build compliance and security into AI workflows from the start, ensuring consistency at scale.
  • Frame outcomes in business value: To secure executive sponsorship, IT should frame orchestration’s impact in terms of ROI, efficiency gains, and revenue opportunities unlocked.

For more insights on how IT can leverage AI orchestration to unlock strategic value, read the full study: Unlocking AI’s Full Value: How IT Orchestrates Secure, Scalable Innovation.




Tools & Platforms

KAIST develops AI to predict crowd density and flow – 조선일보





Tools & Platforms

Shift left might have failed – but AI looks set to deliver on its promise



“AI will replace QA.” It was not the first time I had heard this claim. But when someone said it to me directly, I asked them to demonstrate how, and they simply couldn’t.

That exchange occurred shortly after my co-founder Guy and I launched our second company, BlinqIO. This time, we focused our efforts on building a fully autonomous AI Test Engineer.



