
Tools & Platforms

AI, IoT And Edge To Transform Digital Banking

The Forrester Research report, The Future of Digital Experiences in Banking, reveals how artificial intelligence (AI), the Internet of Things (IoT), and edge computing are poised to revolutionise digital banking over the next decade.

The analyst posits that as financial institutions transition these technologies from merely assistive tools to anticipatory and ultimately agentic experiences, trust and transparency will be paramount in fostering consumer adoption.

The findings reveal that key innovations are reshaping the banking landscape. AI-powered virtual assistants are set to enhance customer interactions, delivering multimodal, intuitive, and emotionally aware banking experiences.

Financial institutions will harness the power of AI to offer tailored insights, while IoT-driven intelligence will enable embedded finance, providing real-time financial recommendations based on predictive insights.

Furthermore, the advent of 5G and 6G technologies will facilitate instantaneous analytics through edge computing, optimising efficiency and scalability for banking services.

Zhi-Ying Barry, principal analyst at Forrester, emphasises the delicate balance banks must maintain while leveraging these advanced technologies.

“Banks in Singapore and Australia that are looking to leverage AI and experiment with agentic AI are treading very carefully,” she notes. “There could be higher-risk scenarios where errors could have significant negative consequences, such as financial losses and reputational damage.”

Barry highlights the proactive measures being taken by regulatory bodies, such as the Monetary Authority of Singapore (MAS) and the Australian government, which have introduced ethical guidelines to steer firms in the responsible design and implementation of AI.

As an example, Barry cites DBS Bank’s initiative to align its AI strategies with the FEAT principles, further complemented by its own PURE framework.

“It’s not uncommon to see banks establish AI task forces or steering committees to assess AI’s potential while ensuring human oversight,” says Barry.

Consumers’ decisions about which banks to trust will largely hinge on their confidence in AI technologies, the specific use cases presented, and the perceived risks.

Conversational banking is also highlighted as a vital evolution.

“Advancements in AI are set to further transform consumer interactions within financial services. The future of digital banking will be defined by modern, intuitive, and human-centred interfaces,” states Aurélie L’Hostis, another principal analyst at Forrester.

She elaborates on how AI-powered virtual assistants will enhance organisations’ understanding of consumer intent and emotions, allowing for more personalised and engaging interactions.

As the banking industry stands on the cusp of this digital transformation, the role of ethical governance and consumer trust will be crucial in navigating the future landscape.





The Biggest Barriers Blocking Agentic AI Adoption

The era of agentic AI is here, or so we are told, bringing super-smart AI assistants capable of carrying out complex tasks on our behalf.

This represents the next generation of AI beyond current chatbots like ChatGPT and Claude, which simply answer questions or generate content.

Those building (and selling) the tech tell us we are on the verge of a fully-automated future where AIs cooperate and access external systems to carry out vast numbers of routine knowledge and decision-making tasks.

But just as emerging concerns around hallucinations, data privacy and copyright have put up barriers to generative AI that some organizations have found insurmountable, agents have their own set of obstacles.

So, here’s my rundown of the challenges that developers of AI agents, organizations wanting to leverage them, and society at large will have to overcome, if we’re going to deliver the promised agentic future.

Trust

The biggie. To achieve the critical mass needed for mainstream adoption of AI agents, we have to be able to trust them. This is true on several levels: we have to trust them with the sensitive and personal data they need to make decisions on our behalf, and we have to trust that the technology works and that our efforts aren’t hampered by specific AI flaws like hallucinations. And if we are trusting an agent to make serious decisions, such as purchases, we have to trust that it will make the right ones and not waste our money.

Agents are far from flawless, and it’s already been shown that it’s possible to trick them. Companies see the benefits but also understand the real risks of breaching customer trust, which can include severe reputational and business damage. Mitigating these risks requires careful planning and compliance, which creates barriers for many.

Lack Of Agentic Infrastructure

Another problem is that agentic AI relies on the ability of agents to interact and operate with third-party systems, and many third-party systems aren’t set up to work with this yet. Computer-using agents (such as OpenAI Operator and Manus AI) circumvent this by using computer vision to understand what’s on a screen. This means they can use many websites and apps just like we can, whether or not they’re programmed to work with them. However, they’re far from perfect, with current benchmarking showing that they’re generally less successful than humans at many tasks.

As agentic frameworks mature, the digital infrastructure of the world is likely to mature around them. Most people reading this will remember that it took a few years from the introduction of smartphones to mobile-friendly websites becoming the norm. However, at this early stage, this creates risk for operators of services like e-commerce or government portals that agents need to interact with. Who is responsible if an agent makes erroneous buying decisions or incorrectly files a legal document? Until issues like this are resolved, operators may shy away from letting agents interact with their systems.

Security Concerns

It doesn’t take much imagination to see that, in principle, AI agents could be a security nightmare. With their broad and trusted access to tools and platforms, as well as our data, they are powerful assistants and also high-value propositions for cybercriminals. If hijacked or exploited, criminals potentially have decision-making access to our lives. Combined with other high-tech attacks, such as deepfake phishing attempts, AI agents will create new and potentially highly problematic avenues of attack for hackers, fraudsters and extortionists. Agents must be deployed by individuals as well as businesses in a way that’s resilient to these types of threats, which not everyone is yet capable of doing.

Cultural And Societal Barriers

Finally, there are wider cultural concerns that go beyond technology. Some people are uncomfortable with the idea of letting AI make decisions for them, regardless of how routine or mundane those decisions may be. Others are nervous about the impact that AI will have on jobs, society or the planet. These are all totally valid and understandable concerns and can’t be dismissed as barriers to be overcome simply through top-down education and messaging.

Unfortunately, there’s no shortcut available here. Addressing this will involve demonstrating that agents can work in a reliable, trustworthy and ethical way. Pulling this off while also building a culture that manages change effectively and shares the benefits of agentic AI inclusively is the key here.

Agents Of Tomorrow

The vision of agentic AI is quite mind-boggling: Millions of intelligent systems around the world interacting to get things done, in ways that make us more efficient and capable.

As we’ve seen, however, the obstacles to this are just as likely to be human as they are technological. As well as solving fundamental issues like AI hallucination, and building infrastructure that enables agents in ways that are trustworthy and accountable, we have to prepare society for a fundamental shift in the way people work with machines.

Accomplishing this will pave the way for AI agents to hit the mainstream in a safe way that enhances our lives rather than exposes us to risks.





“AI + Agriculture”: Solving Industry Pain Points, “Maimai Technology” Bags Over 100M Yuan in Pre-A Round Financing

36Kr learned that “Maimai Technology Group” (hereinafter referred to as “Maimai Technology”) has completed a Pre-A round of financing exceeding 100 million yuan, with a post-investment valuation exceeding 1 billion yuan. The round was jointly led by institutions such as Qihong Yuyuan, Xinglian Capital, Spring Light Lane, and Honglian Qiyuan, with some existing shareholders participating in the follow-on investment. The funds from this round of financing will mainly be used for the R&D and innovation of core technologies such as AI agricultural large models and intelligent sensing devices.

According to Li Nan, the founder, chairman, and CEO of Maimai Technology Group, this is the largest recent early-stage financing in the smart agriculture field.

Maimai Technology is an artificial intelligence company that takes data, algorithms, and scenarios as its core foundation and “technology + consumption” as its core business model. It transforms traditional agriculture through agricultural production technologies and information technologies such as the Internet of Things, artificial intelligence, cloud computing, and blockchain. Its headquarters is located in the core area of Zhongguancun, Beijing.

When talking about why he entered the agricultural sector, Li Nan admitted that it stemmed from two heart-wrenching contrasts. When he previously worked at a large Internet company, he saw that cutting-edge technologies such as aerospace remote sensing and virtual reality were concentrated in the consumer entertainment field, while the agriculture that supported consumption lagged behind in technology. Later, during a rural survey, he found that over 95% of farmers in rural areas were over 55 years old, and “people under 55 neither liked nor knew how to farm.” Agriculture was facing a crisis of “no successors.”

Prompted by this, Li Nan entered the agricultural field and spent half a year assembling a so-called “luxury” founding team. Unlike traditional agricultural enterprises or pure technology companies, its core team consists of three types of cross-domain experts, forming a stable triangle of “understands agriculture, strong in technology, and good at commercialization.”

Since 2018, Maimai Technology’s business model has been iterating according to industrial needs.

In the first two years, in Li Nan’s words, the company was more like an “agricultural technology solution integrator,” piecing together various technologies to solve specific problems of farmers, targeting the technical needs of customers. Later, in order to become a technology company that understands the industry best, Maimai Technology completed the innovation of a new model, providing customers with technical capabilities and helping them complete the connection between production and sales.

What really helped Maimai Technology build its moat was the investment in relevant research on artificial intelligence starting in 2021. After several years of exploration, Maimai Technology has successfully built a core foundation of “model + data + scenario” and a complete crop growth model system covering “description, diagnosis, prediction, and decision-making,” deeply empowering key links in agricultural planting management and establishing advantages in crop models, data computing power, and in-depth scenario development.

The Super Brain Smart Agriculture Big Data Platform is a digital infrastructure for smart agriculture built by Maimai Technology based on technologies such as the Internet of Things, AI, big data, satellite remote sensing, and blockchain, combined with agricultural professional knowledge and front – line business experience. The platform takes “models, scenarios, and data” as its core elements and realizes the collection, processing, analysis, and application of data across the entire agricultural industry chain through a three – layer architecture (information perception layer, data processing layer, and intelligent analysis layer), providing data empowerment for the entire process of agricultural production, circulation, and sales.

According to Li Nan, the “crop large model” developed by Maimai Technology is not a “simplified version” of a general large model, but a “coupled system of small models” focusing on vertical scenarios, which can accurately solve specific industrial problems. For example, one vertical model addresses only peach planting in Central China, while plum and cherry planting in East China require different models.

As of now, Maimai Technology has completed the R&D of nearly a thousand vertical scenario models for 15 major categories and over 200 sub-categories of staple food crops such as wheat and rice and cash crops such as strawberries and blueberries. It has also deployed multi-point data collection in places such as Beijing, Jingmen in Hubei, Chongqing, and Hainan for model verification and optimization.

The value of technology ultimately needs to be verified by industrial results. A citrus-growing customer in Hubei previously faced three major pain points: a high proportion of blemished fruits, which prevented them from entering supermarkets; a low percentage of large fruits, resulting in low selling prices; and insufficient sugar content, making the fruits less competitive in taste. After analysis, the Maimai Technology team took “water” as the core regulatory factor and quantified two key dimensions through the model: one is the water environment (drought duration, air humidity, soil moisture), and the other is the crop’s water demand pattern (upper and lower limits of water demand at different growth stages).

After four years of practical application, the results are remarkable: the blemished fruit rate has dropped to less than 5%, the large-fruit rate has stabilized above 85%, and the sugar content has increased by 2.5-2.7 units, ultimately helping the customer successfully enter some high-end supermarket channels.

Taking a large blueberry model in Yunnan as an example, based on in-depth analysis of the blueberry crop mechanism, Maimai Technology scientifically demonstrated the natural environment in Mengzi and established a comprehensive blueberry model system, including environment simulation and optimization models, soil water and fertilizer simulation and optimization models, etc. Eventually, the application of the crop growth model helped increase the blueberry yield by up to 30%.

In the strawberry-planting large model, Maimai Technology combines its self-developed crop growth model with agricultural Internet of Things technology. Through intelligent collection and analysis of strawberry-planting environment data, it can make accurate decisions on environmental parameters such as light, temperature, humidity, and carbon dioxide. Practical data shows that applying the strawberry growth model can shorten the growth cycle by 10% and increase yield by 20%.

In terms of R&D, Maimai Technology has two core R&D institutions: the National R&D Center and the Agricultural Industry Research Institute. The National R&D Center has assembled 7 national-level expert teams and has a professional R&D team of 270 people, focusing on hard technologies such as drones, automated equipment, and visual recognition, covering the entire chain of technological breakthroughs in agricultural artificial intelligence. The Agricultural Industry Research Institute focuses on research at agricultural bases, specifically studying crop growth mechanisms and product optimization, and is committed to breakthroughs in cutting-edge technologies such as large crop growth models.

In terms of technology accumulation, Maimai Technology holds over 120 patents and software copyrights related to smart agriculture and has served over 170 Fortune 500-level customers in China. On the production side, it is deeply involved in the technology and production guidance of over 70 modern digital farms, with a total production capacity exceeding 10 billion yuan; it has established ecological cooperation with over 190 modern digital farms, with a total production capacity exceeding 18 billion yuan.

It is worth mentioning that the company has had positive audited net profits for six consecutive years, and its revenue growth rate has remained above 200% for many years. Currently, the company has initiated the planning for a Series A round of financing and has clearly set the strategic goal of officially striving for an IPO in 2027. In the future, it will continue to achieve exponential growth in revenue and profit driven by technology.





Can the Middle East fight unauthorized AI-generated content with trustworthy tech? – Fast Company Middle East

Since its emergence a few years back, generative AI has been the center of controversy, from environmental concerns to deepfakes to the non-consensual use of data to train models. One of the most troubling issues has been deepfakes and voice cloning, which have affected everyone from celebrities to government officials. 

In May, a deepfake video of Qatari Emir Sheikh Tamim bin Hamad Al Thani went viral. It appeared to show him criticizing US President Donald Trump after his Middle East tour and claiming he regretted inviting him. Keyframes from the clip were later traced back to a CBS 60 Minutes interview featuring the Emir in the same setting.

Most recently, YouTube drew backlash for another form of non-consensual AI use after revealing it had deployed AI-powered tools to “unblur, denoise, and improve clarity” on some uploaded content. The decision was made without the knowledge or consent of creators, and viewers were also unaware that the platform had intervened in the material.

In February, Microsoft disclosed that two US and four foreign developers had illegally accessed its generative AI services, reconfigured them to produce harmful content such as celebrity deepfakes, and resold the tools. According to a company blog post tied to its updated civil complaint, users created non-consensual intimate images and explicit material using modified versions of Azure OpenAI services. Microsoft also stated it deliberately excluded synthetic imagery and prompts from its filings to avoid further circulation of harmful content.

THE RISE OF FAKE CONTENT

Matin Jouzdani, Partner, Data Analytics & AI at KPMG Lower Gulf, says more and more content is being produced through AI, whether it’s commentary, images, or clips. “While fake or unauthorized content is nothing new, I’d say it’s gone to a new level. When browsing content, we increasingly ask, ‘Is that AI-generated?’ A concept that just a few years ago barely existed.”

Moussa Beidas, Partner and ideation lead at PwC Middle East, says the ease with which deepfakes can be created has become a major concern.

“A few years ago, a convincing deepfake required specialist skills and powerful hardware. Today, anyone with a phone can download an app and produce synthetic voices or images in minutes,” Beidas says. “That accessibility means the issue is far more visible, and it is touching not just public figures but ordinary people and businesses as well.”

Though regulatory frameworks are evolving, they still struggle to catch up to the speed of technical advances in the field. “The Middle East region faces the challenge of balancing technological innovation with ethical standards, mirroring a global issue where we see fraud attempts leveraging deepfakes increasing by a whopping 2137% across three years,” says Eliza Lozan, Partner, Privacy Governance & Compliance Leader at Deloitte Middle East.

Fabricated videos often lure users into clicking on malicious links that scam them out of money or install malware for broader system control, adds Lozan.

These challenges demand two key responses: organizations must adopt trustworthy AI frameworks, and individuals must be trained to detect deepfakes—an area where public awareness remains limited.

“To protect the wider public interest, Digital Ethics and the Fair Use of AI have been introduced and are now gaining serious traction among decision-makers in corporate and regulatory spaces,” Lozan says.

DEFINING CONSENT

Drawing on established regulatory frameworks, Lozan explains that “consent” generally means obtaining explicit permission from individuals before collecting their data, and clearly stating the purpose of the collection—such as recording user commands to train cloud-based virtual assistants.

“The concept of proper ‘consent’ management can only be achieved on the back of a strong privacy culture within an organization and is contingent on privacy being baked into the system management lifecycle, as well as upskilling talent on the ethical use of AI,” she adds.

Before seeking consent, Lozan notes, individuals must be fully informed about why their data is being collected, who it will be shared with, how long it will be stored, any potential biases in the AI model, and the risks associated with its use.

Matt Cooke, cybersecurity strategist for EMEA at Proofpoint, echoes this: “We are all individuals, and own our appearance, personality, and voice. If someone will use those attributes to train AI to reproduce our likeness, we should always be asked for consent.”

There’s a gap between technology and regulation, and the pace of technological advancement has seemingly outstripped lawmakers’ ability to keep up. 

While many ethically minded companies have implemented opt-in measures, Cooke says that “cybercriminals don’t operate with those levels of ethics and so we have to assume that our likeness will be used by criminals, perhaps with the intention of exploiting the trust of those within our relationship network.”

Beidas simplifies the concept further, noting that consent boils down to three essentials: people need to know what is happening, have a genuine choice, and be able to change their mind.

“If someone’s face, voice, or data is being used, the process should be clear and straightforward. That means plain language rather than technical jargon, and an easy way for individuals to opt out if they no longer feel comfortable,” he says.

TECHNOLOGY SAFEGUARDS

Still, the idea of establishing clear consent guidelines often seems far-fetched. While some leeway is given due to the technology’s relative newness, it is difficult to imagine systems capable of effectively moderating the sheer volume of content produced daily through generative AI, and this reality is echoed by industry leaders.

In May, speaking at an event promoting his new book, former UK deputy prime minister and ex-Meta executive Nick Clegg said that a push for artist consent would “basically kill” the AI industry overnight. He acknowledged that while the creative community should have the right to opt out of having their work used to train AI models, it is not feasible to obtain consent beforehand.

Michael Mosaad, Partner, Enterprise Security at Deloitte Middle East, highlights some practices being adopted for generative AI models. 

“Although not a mandatory requirement, some Gen AI models now add watermarks to their generated text as best practice,” he explains.

“This means that, to prevent misuse, organizations are embedding recognizable signals into AI-generated content to make it traceable and protected without compromising its quality.”

Mosaad adds that organizations also voluntarily leverage AI to fight AI, using tools to prevent the misuse of generated content by limiting copying and inserting metadata into text. 
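The kind of signal-embedding Mosaad describes can be illustrated with a minimal sketch: hiding a machine-readable tag inside text using zero-width Unicode characters. This is a hypothetical toy, not any vendor’s actual scheme; production watermarks rely on far more robust statistical or cryptographic techniques that survive editing.

```python
# Toy invisible text watermark: encode a tag as zero-width characters.
# Illustrative only -- real AI-content watermarks are statistical and
# tamper-resistant, unlike this simple steganographic sketch.

ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def embed(text: str, tag: str) -> str:
    """Hide the tag as invisible bits after the first word."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    head, _, tail = text.partition(" ")
    return head + payload + " " + tail

def extract(text: str) -> str:
    """Recover the hidden tag from any zero-width characters present."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8))

marked = embed("This article was machine generated.", "AI")
print(extract(marked))  # -> "AI", though marked looks identical on screen
```

The marked string renders identically to the original, which is exactly why such signals can travel with content unnoticed, and also why naive versions are trivially stripped by re-typing the text.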

Expanding on the range of tools being developed, Beidas says, “Some systems now attach content credentials, which act like a digital receipt showing when and where something was created. Others use invisible watermarks hidden in pixels or audio waves, detectable even after edits.”  

“Platforms are also introducing their own labels for AI-generated material. None of these are perfect on their own, but layered together, they help people better judge what they see.”
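The “digital receipt” idea behind content credentials can be sketched as a signed claim over a file’s hash. The field names and the shared-key HMAC below are illustrative assumptions; real standards such as C2PA use standardized manifests and certificate-based signatures rather than a single secret key.

```python
# Sketch of a content credential: a signed claim recording what a piece
# of media is (its hash), which tool made it, and when. Hypothetical
# format for illustration only.
import hashlib
import hmac
import json
import time

SECRET = b"issuer-signing-key"  # stand-in for an issuer's private key

def issue_credential(content: bytes, tool: str) -> dict:
    claim = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "tool": tool,
        "issued_at": int(time.time()),
    }
    body = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return claim

def verify_credential(content: bytes, cred: dict) -> bool:
    claim = {k: v for k, v in cred.items() if k != "signature"}
    body = json.dumps(claim, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        cred["signature"], hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    )
    return sig_ok and claim["sha256"] == hashlib.sha256(content).hexdigest()

image = b"...png bytes..."
cred = issue_credential(image, tool="gen-ai-model-x")
print(verify_credential(image, cred))      # True: receipt matches the file
print(verify_credential(b"edited", cred))  # False: hash no longer matches
```

The design point is that any edit to the file breaks the hash match, so the credential attests to a specific artifact rather than to content in general.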

GOVERNMENT AND PLATFORM REGULATIONS

Like technology safeguards, government and platform regulation is still up in the air. Yet the responsibility on both remains heavy, as individuals look to them to address online consent violations.

While platform policies are evolving, the challenge is speed. “Synthetic content can spread across different apps in seconds, while review processes often take much longer,” says Beidas. “The real opportunity lies in collaboration—governments, platforms, and the private sector working together on common standards such as watermarking and provenance, as well as faster response mechanisms. That is how we begin to close the gap between creation and enforcement.”

However, change is underway in countries such as Qatar, Saudi Arabia, and the UAE, which are adopting AI regulations or guidelines, following the example of the European Union’s AI Act.

Since they are still in their early stages, Lozan says, “a gap persists in practically supporting organizations to understand and implement effective frameworks for identifying and managing risks when developing and deploying technologies like AI.”

According to Jouzdani, since the GCC already has a strong legal foundation protecting citizens from slander and discrimination, the same principles could be applied in AI-related cases. 

“Regulators and lawmakers could take this a step further by ensuring that consent remains relevant not only to the initial use of content but also to subsequent uses, particularly on platforms beyond immediate jurisdiction,” he says, adding the need to strengthen online enforcement, especially when users remain anonymous or hidden.





