
Tools & Platforms

Meta Rumored to Roll Out AI-first Messaging Bots!

Get Set for Chats Initiated by AI

Edited By

Mackenzie Ferguson

AI Tools Researcher & Implementation Consultant

Meta might be testing a new feature that allows AI-powered bots to message users first, enabling proactive engagement. This could change the way people interact with technology and reshape online communication. But how will users react to these AI-initiated interactions, and what does this mean for privacy and data concerns?


Background Information

Meta, one of the leading firms in artificial intelligence and technology innovation, is rumored to be testing a new advancement in AI technology. According to reports, the company is working on AI bots capable of initiating messages with users. This capability could significantly alter how people interact with technology, creating new paradigms in digital communication.

The initiative by Meta to explore AI bots that proactively message users indicates a strategic shift in how artificial intelligence can enhance user engagement. While specific details remain sparse, experts believe this could lead to more personalized user experiences and new opportunities for businesses to connect with their customers.

Public reactions to the potential rollout of AI bots that can message first vary widely, with some expressing excitement over increased AI interactivity while others voice concerns about privacy and data security. The implications of such technology could resonate across various sectors, from customer service to social networking.

The possible introduction of AI-driven messaging bots by Meta has sparked conversations about the future implications for society and digital ecosystems. Experts suggest that this could pave the way for more autonomous AI systems that assist in routine tasks, potentially transforming how we view artificial intelligence in our day-to-day lives.

News Overview

Meta, the parent company of social media giants Facebook and Instagram, is reportedly exploring the development of AI bots capable of initiating conversations with users. This potential new feature could distinguish Meta’s platforms, offering users an innovative way to engage with AI by allowing these digital entities to make the first move in starting a dialogue. In the fast-evolving world of technology, such advancements showcase Meta’s commitment to staying at the forefront of AI innovation. For more insights, you can read the full article on Techloy.

The introduction of AI bots that reach out first could revolutionize user interactions on social media. These bots may be designed to assist with customer service, provide personalized recommendations, or simply engage users in casual conversation, offering a unique user experience. This concept aligns with the broader trend of incorporating AI into everyday digital interactions, making platforms more intuitive and user-friendly.

As AI technology continues to evolve, the implications of AI bots initiating conversations are significant. The feature raises questions about privacy and users’ comfort with an AI presence on personal devices. Moreover, it could have profound effects on digital marketing strategies, as companies might use these interactive bots to drive engagement and sales. The development is being closely watched by tech analysts, eager to see how it will affect user behavior and data privacy. To explore these perspectives, you can visit Techloy.

Article Summary

In the rapidly evolving world of technology, Meta appears to be embarking on a groundbreaking venture by potentially testing AI bots capable of initiating conversations with users. This initiative aligns with an ever-increasing demand for more interactive and customized user experiences across digital platforms. By allowing AI to message users first, Meta could redefine the engagement landscape, nurturing more dynamic interactions that both entertain and inform. Such an advancement could pave the way for an era where AI plays a pivotal role not just in responding, but in leading human-machine communication.

The implications of Meta’s potential project cannot be overstated. As one of the leading tech giants, Meta’s exploration into proactive AI conversationalists might spearhead a trend where digital assistants and chatbots are more proactive, reducing friction in user interactions. Furthermore, it could help businesses better understand consumer needs and preferences through AI-driven insights and analytics. By reaching out to users first, these AI systems may significantly enhance customer service efficiency and personalization, presenting new opportunities for companies to engage with their audience more deeply.

In light of these developments, expert opinions remain cautiously optimistic. On one hand, there’s enthusiasm about the potential for AI to herald a new level of personalization and convenience. On the other, concerns about privacy and the ethical use of AI technology persist. The introduction of AI systems that reach out proactively raises questions about data security, user consent, and the fundamental nature of human interaction with machines. As these discussions unfold, the public’s perspective is likely to play a crucial role in shaping the future trajectory of such technologies.

Public reactions to Meta’s potential endeavors with AI have been mixed, reflecting both excitement and skepticism. Many users express curiosity and enthusiasm about the more interactive possibilities such AI can unleash, envisaging a digital future that’s more engaging and responsive to individual needs. However, this optimism is tempered by apprehension regarding privacy implications and the ever-increasing presence of AI in daily life. Engaging the public in discussions and ensuring transparency about how such systems function could be pivotal in building trust and acceptance.

Looking ahead, the future implications of messaging AI systems initiating contact with users are vast. Such technology could revolutionize customer service, marketing, and even social interactions by providing more intelligent, responsive, and autonomous digital assistants. If successful, Meta’s initiative might not only advance its own platform’s capabilities but also set a new standard across the tech industry for how personalized digital interactions are conceived and executed. As these AI systems evolve, ongoing evaluations and user feedback will be essential in ensuring they serve to enhance, rather than intrude upon, human experiences.

Related Events

The field of artificial intelligence continues to witness groundbreaking developments, and the recent news about Meta possibly testing AI bots capable of initiating conversations is a noteworthy event. Recently highlighted in an article by Techloy, Meta’s endeavor could redefine user interaction on its platforms. This initiative follows a series of advancements in AI technology, focusing on enhancing user experiences and interaction efficiency across digital platforms. As companies like Meta invest in AI communication tools, the landscape of social media engagement is poised for transformative changes, potentially setting new standards for digital communication.

Such developments don’t come in isolation but are interconnected with various technological trends worldwide. Similar AI-driven communication advancements have been explored by other tech giants, including Google and Microsoft, striving to optimize their virtual assistants and chatbots to be more proactive and contextually aware. These efforts collectively underscore a broader industry movement toward autonomous digital communication, ensuring that AI tools not only respond to user inquiries but also anticipate and initiate dialogue based on user behavior and preferences. This shift is expected to influence user expectations and redefine personal and professional communication standards, particularly on platforms like Meta, which hosts a vast global network of users.

The potential success of such initiatives could fuel further research and development in AI technologies, encouraging collaborations among experts and tech industries. As public curiosity and demand for innovative tech solutions grow, companies are likely to accelerate their efforts in testing and deploying AI-driven features that can seamlessly integrate into everyday communication. The ripple effect of these related events will gradually unfold, signaling a new era of AI-enhanced interactions where digital agents play a proactive role in shaping the future of online communications.

Expert Opinions

In a rapidly evolving digital landscape, experts continue to scrutinize developments like Meta’s potential testing of AI bots that can initiate messaging with users. This innovation could redefine the way users interact with technology, adding a layer of proactive engagement not previously seen in traditional communication platforms. Analysts are cautiously optimistic, acknowledging the potential for improved customer service and user experience while also pointing out the critical need for stringent data privacy measures.

One expert commented on the dynamic shift this represents in AI interaction paradigms. By allowing AI to message first, it could lead to more natural and engaging user experiences, similar to a real-world conversation. However, experts stress the importance of balancing innovation with ethical considerations, particularly in handling sensitive user data. This sentiment is echoed across various think tanks and tech forums, highlighting the dual-edged nature of such advancements.

Several tech analysts argue that such features, if successfully implemented, could give Meta a significant competitive edge in the AI domain. However, they also emphasize the importance of regulatory compliance, especially concerning user consent and data handling practices. These insights reflect broader concerns within the tech industry about maintaining ethical standards in technological innovations.

According to Techloy, while the technology is promising, it is not without its challenges. Experts caution that user acceptance will heavily depend on the implementation of transparent communication and privacy policies. The ongoing dialogue among industry experts underscores the complexity of integrating such AI technologies responsibly into mainstream applications.

Public Reactions

The news that Meta may be testing AI bots capable of initiating conversations has sparked a wide range of public reactions. Many people are intrigued by the potential for such AI bots to transform communication on social media platforms, allowing for more interactive and engaging user experiences. Some users have taken to platforms like Twitter to express their excitement, noting that this innovation could enhance customer service by providing quick and efficient responses. There are also discussions about the potential for these AI bots to assist in mental health support, offering timely conversations to those in need.

However, alongside the excitement, there is a significant amount of skepticism and concern about privacy and data security. The idea of AI bots messaging users first has raised questions about how Meta plans to protect user data and ensure these interactions remain secure. Some users are wary of the potential misuse of such technology, fearing it might lead to increased surveillance or manipulation. Discussions on forums and social media highlight the desire for transparency from Meta regarding how these AI interactions would be governed and what measures would be in place to safeguard user privacy.

In addition to privacy concerns, there are ethical considerations being raised by the public. The prospect of AI bots engaging with users autonomously brings up questions about consent and the dynamics of human-AI interactions. Critics argue about the importance of setting clear boundaries on AI capabilities to prevent them from crossing into intrusive or unwelcome territories. Many are calling for robust ethical guidelines to be established before such technology is widely deployed, emphasizing the need to balance innovation with responsibility.

Future Implications

The future implications of AI technology are expansive and multifaceted, potentially reshaping how we interact with digital platforms. Recent developments signal a shift in human-computer interaction, where AI bots might take on more proactive roles in communication. For instance, reports suggest that Meta may be experimenting with AI bots that initiate conversations, paving the way for more intuitive user interfaces.

This advancement could significantly enhance customer service experiences by offering faster, personalized interactions, as AI bots could anticipate user needs and provide solutions before questions are even raised. However, with these innovations come concerns about privacy, data security, and the potential erosion of personal interactions. As AI continues to evolve, it will be crucial for developers and policymakers to address these issues to build trust among users.

Additionally, the integration of AI into social media and messaging platforms could redefine marketing strategies. Brands might leverage these bots to engage with users more directly and personally, creating opportunities for more targeted advertising. The ability to initiate contact at optimal times could increase engagement rates and customer satisfaction.

Furthermore, the ethical implications of such AI capabilities must be thoroughly examined. Ensuring transparency and user consent will be paramount to prevent misuse of AI-driven communications. As society grapples with these potential changes, it becomes vital for ongoing dialogue to occur between technologists, ethicists, and the general public to navigate the complexities of these advancements responsibly.





Searching for boundaries in the AI jungle

Stamatis Gatirdakis, co-founder and president of the Ethikon Institute, still remembers the first time he used ChatGPT. It was the fall of 2022 and a fellow student in the Netherlands sent him the link to try it out. “It made a lot of mistakes back then, but I saw how it was improving at an incredible rate. From the very first tests, I felt that it would change the world,” he tells Kathimerini. Of course, he also identified some issues, mainly legal and ethical, that could arise early on, and last year, realizing that there was no private entity that dealt exclusively with the ethical dimension of artificial intelligence, he decided to take the initiative.

He initially turned to his friends, young lawyers like him, engineers and programmers with similar concerns. “In the early days, we would meet after work, discussing ideas about what we could do,” recalls Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters. Her master’s degree, which she earned in the Netherlands in 2019, was on the ethics and regulatory aspects of new technologies. “At that time, the European Union’s white paper on artificial intelligence had just been released, which was a first, hesitant step. But even though technology is changing rapidly, the basic ethical dilemmas and how we legislate remain constant. The issue is managing to balance innovation with citizen protection,” she explains.

Together with three other Greeks (Apostolos Spanos, Michael Manis and Nikos Vadivoulis), they made up the institute’s founding team, and sought out colleagues abroad with experience in these issues. Thus, Ethikon was created – a nonprofit company that does not provide legal services, but implements educational, research and social awareness actions on artificial intelligence.

Stamatis Gatirdakis, co-founder and president of the Ethikon Institute.

Copyrights

One of the first issues they addressed was copyright. “In order not to stop the progress of technology, exceptions were initially made so that these generative artificial intelligence models could use online content for training purposes, without citing the source or compensating the creators,” explains Gatirdakis, adding that this resulted in copyright being sidelined. “The battle between creators and the big tech giants has been lost. But because the companies don’t want creators against them, they have started making commercial agreements, whereby every time creators’ data is used to produce answers, they receive a percentage under an agreed formula.”

Beyond compensation, another key question arises: Who is ultimately the creator of a work produced through artificial intelligence? “There are already conflicting court decisions. In the US, the position is that artificial intelligence cannot produce an ‘original’ work and that the output belongs to the companies operating the models,” says Voukelatou. A typical example is the comic book ‘Zarya of the Dawn,’ authored by artist and artificial intelligence (AI) consultant Kris Kashtanova, with images generated through the AI platform Midjourney. The US Copyright Office rejected the copyright application for the images in her book when it learned that they were created exclusively by artificial intelligence. In China, by contrast, courts in corresponding cases ruled that because the user gives the exact instructions, he or she is the creator.

Personal data

Another crucial issue is the protection of personal data. “When we upload notes or files, what happens to all this content? Does the algorithm learn from them? Does it use them elsewhere? Presumably not, but there are still no safeguards. There is no case law, nor a clear regulatory framework,” says Voukelatou, who mentions the loopholes that companies exploit to overcome obstacles with personal data. “Like the application that transforms your image into a cartoon in the style of the famous Studio Ghibli. Millions of users gave consent for their image to be processed, and so this data entered the systems and trained the models. If a similar image is subsequently produced, it no longer belongs to the person who first uploaded it. And this part is legally unregulated.”

The problem, they explain, is that the development of these technologies is mainly taking place in the United States and China, which means that Europe remains on the sidelines of a meaningful discussion. The EU regulation on artificial intelligence (AI Act), which entered into force in the summer of 2024, is the first serious attempt to set a regulatory framework. Members of Ethikon participated in the consultation on the regulation and specifically focused on the categorization of artificial intelligence applications based on the level of risk. “We supported with examples the prohibition of practices such as ‘social scoring’ adopted by China, where citizens are evaluated in real time through surveillance cameras. This approach was incorporated and the regulation explicitly prohibits such practices,” says Gatirdakis, who participated in the consultation.

“The final text sets obligations and rules. It also provides for strict fines depending on turnover. However, we are in a transition period and we are all waiting for further guidelines from the European Union. It is expected to apply in full in the summer of 2026. However, there are already delays in the timetable and in the establishment of the supervisory authorities,” the two experts said.

Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters.

The team’s activities

Beyond consultation, the Ethikon team is already developing a series of actions to raise awareness among users, whether they are business executives or students growing up with artificial intelligence. The team’s executives created a comic inspired by the Antikythera Mechanism that explains in a simple way the possibilities but also the dangers of this new technology. They also developed a generative AI engine based exclusively on sources from scientific libraries; however, its use is expensive and they are currently limiting it to pilot educational actions. They recently organized a conference in collaboration with the Laskaridis Foundation and published an academic article on March 29 exploring the legal framework for strengthening copyright protection.

In the article, titled “Who Owns the Output? Bridging Law and Technology in LLMs Attribution,” they analyze, among other things, the specific tools and techniques that allow the detection of content generated by artificial intelligence and its connection to the data used to train the model or the user who created it. “For example, a digital signature can be embedded in texts, images or videos generated by AI, invisible to the user, but recognizable with specific tools,” they explain.
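The embedded-signature idea described above can be illustrated with a toy example. The sketch below hides a short tag in zero-width Unicode characters appended to the text, invisible when rendered but recoverable with a matching extractor. This is a deliberately simple stand-in, not the technique from the article (production watermarks for LLM output typically operate at the token-sampling level), and all function names are hypothetical.

```python
# Toy text watermark using zero-width characters (illustrative only).
ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / non-joiner
REVERSE = {v: k for k, v in ZERO_WIDTH.items()}

def embed_watermark(text: str, tag: str) -> str:
    """Append `tag`, encoded as invisible zero-width characters, to `text`."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    hidden = "".join(ZERO_WIDTH[b] for b in bits)
    return text + hidden  # renders identically to `text`

def extract_watermark(text: str) -> str:
    """Recover the hidden tag by collecting zero-width characters."""
    bits = "".join(REVERSE[c] for c in text if c in REVERSE)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
```

A real detector would also need to survive copy-paste normalization and editing, which is why research watermarks bias the model’s own sampling rather than appending characters.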

The Ethikon team has already begun writing a second, more technical academic article, while closely monitoring technological developments internationally. “In 2026, we believe that we will be much more concerned with the energy and environmental footprint of artificial intelligence,” says Gatirdakis. “Training and operating models requires enormous computing power, resulting in excessively high energy and water consumption for cooling data centers. The concern is not only technical or academic – it touches the core of the ethical development of artificial intelligence. How do we balance innovation with sustainability?” At the same time, he explains, serious issues of truth management and security have already arisen. “We are entering a period where we will not be able to easily distinguish whether what we see or hear is real or fabricated,” he continues.

In some countries, the adoption of technology is happening at breakneck speed. In the United Arab Emirates, an artificial intelligence system has been developed that drafts laws and monitors their implementation. At the same time, OpenAI announced a partnership with iPhone designer Jony Ive to launch a new device that integrates artificial intelligence with voice, visual and personal interaction in late 2026. “A new era seems to be approaching, in which artificial intelligence will be present not only on our screens but also in the natural environment.”







How to start a career in the age of AI – Computerworld






AI will boost the value of human creativity in financial services, says AWS


Financial services firms are making early gains from artificial intelligence (AI), which is not surprising given that finance is historically an industry that embraces new technologies aggressively.


One surprising outcome is that AI might end up making the most critical functions of banking, insurance, and trading, namely the creative functions that require human insight, even more valuable.

“What happens is there’s going to be a premium on creativity and judgment that goes into the process,” said John Kain, who is head of market development efforts in financial services for AWS, in an interview with ZDNET via Zoom. 


By process, he meant those areas that are most advanced, and presumably hardest to automate, such as a bank’s risk calculations.


“So much of what’s undifferentiated will be automated,” said Kain. “But what that means is what actually differentiates the business and the ability to serve customers better, whether that’s better understanding products or risk, or coming up with new products, from a financial perspective, the pace of that will just go so much more quickly in the future.”

Amazon formed its financial services unit 10 years ago, the first time the cloud giant organized around a specific industry.

For eight years, Kain has helped bring the cloud giant’s tools to banks, insurers, and hedge funds. That approach includes both moving workloads to the cloud and implementing AI, including the large language models (LLMs) of generative AI (Gen AI), in his clients’ processes.

“If you look at what we’re trying to do, we’re trying to provide our customers an environment where, from a security, compliance, and governance perspective, we give them a platform that ticks the boxes for everything that’s table stakes for financial services,” said Kain, “but also gives them the access to the latest technologies, and choice in being able to bring the best patterns to the industry.”


Kain, who started his career in operations on the trading floor, and worked at firms such as JPMorgan Chase and Nasdaq, had many examples of gains through the automation of financial functions, such as customer service and equity research.

Early use of AWS by financials included things such as back-testing portfolios of investments to predict performance, the kind of workload that is “well-suited to cloud” because it requires computer simulations “to really work well in parallel,” said Kain.

“That ability to be able to do research much more quickly in AWS meant that investment research firms could quickly see those benefits,” he said. “You’ve seen that repeated across the industry regardless of the firm.”

Taking advantage of the tech

Early implementations of Gen AI are showing many commonalities across firms. “They’ll be repeatable patterns, whether it’s document processing that could show up as mortgage automation with PennyMac, or claims processing with The Travelers Companies,” said Kain.

Such processes come with an extra degree of sensitivity, Kain said, given the regulated status of finance. “Not only do they have a priority on resilience as well as security, they have evidence requirements to a far greater degree than any other industry because the regulations on financial services are typically very prescriptive,” he explained. “There’s a much higher bar in the industry.”


Finance has been an early adopter of an AI-based technology invented at AWS, originally called Zelkova, which is now more generally referred to as “automated reasoning.” The technology combines machine-learning AI with mathematical proofs to formally validate security measures, such as who has access to resources in a bank.

“It was an effort to allow customers to prove that the security controls they put in place were knowably effective,” said Kain. “That was important for our financial services customers,” including hedge fund Bridgewater and other early adopters.

Now, automated reasoning is also being employed to fix Gen AI.

“You’re seeing that same approach now being taken to improve the performance of large language models, particularly with hallucination reduction,” he said. 

To mitigate hallucinations, or “confabulations,” as the errors in Gen AI are more properly known, AWS’s Bedrock platform for running machine learning programs uses retrieval-augmented generation (RAG). 

The RAG approach involves connecting an LLM to a source of validated information, such as a database. The source serves as a gold standard to “anchor” the models to limit error.
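A minimal sketch of that anchoring step may help. The code below retrieves the most relevant "validated" documents for a query and builds a prompt that instructs the model to answer only from them. The character-frequency embedding is a deliberately crude stand-in for a real embedding model, and none of the names here belong to Bedrock or any actual API.

```python
# Toy retrieval-augmented generation (RAG) pipeline, for illustration only.

def embed(text: str) -> list[float]:
    # Stand-in embedding: normalized letter-frequency vector.
    # Real RAG systems use a learned embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity (vectors are already normalized).
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank the validated documents by similarity to the query.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: similarity(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Anchor the model: force it to answer only from retrieved context.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"
```

The "gold standard" effect comes entirely from the final prompt: the model is constrained to the retrieved context rather than its own parametric memory.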


Once anchored, automated reasoning is applied to “actually allow you to create your own policies that will then give you an extra level of security and detail to make sure that the responses that you’re providing [from the AI model] are accurate.”

The RAG approach, and automated reasoning, are increasingly leading clients in financial services to implement “smaller, domain-specific tasks” in AI that can be connected to a set of specific data, he said. 

Surveys of enterprise use show financial firms starting with Gen AI use cases such as automating call centers. “From a large language model perspective, there are actually a number of use cases that we’ve seen the industry achieve almost immediate ROI [return on investment],” said Kain. “The foremost is customer interaction, particularly at the call center.”

AWS customers, including Principal Financial, Ally Financial, Rocket Mortgage, and cryptocurrency exchange Coinbase, have all exploited Gen AI to “take those [customer] calls, transcribe them in real time, and then provide information to the agents that provide the context of why customers are calling, plus their history, and then guide them [the human call agents] to the right response.”

Coinbase used that approach to automate 64% of support calls, up from 19% two years ago, with the aim of reaching 90% in the future.

Coinbase presents its findings at AWS Summit.


Finding fresh opportunities

Another area where automation is being used is in monitoring alerts, such as fraud warnings. It’s a bit like AI in cybersecurity, where AI handles a flood of signals that would overwhelm a human analyst or investigator.

Fraud alerts and other warnings “generate a large number of false positives,” said Kain, which means a lot of extra work for fraud teams and other financial staff to “spend a good chunk of their day looking at things that aren’t actually fraud.” 

Instead, “customers can use large language models to help accelerate the investigation process” by summarizing the alerts, and then create a summary report to be given to the human investigator. 
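The summarize-then-report flow just described might look roughly like the sketch below, where `summarize` is a placeholder for the LLM call and the alert fields are invented for illustration; nothing here reflects a specific vendor's schema.

```python
# Hypothetical alert-triage sketch: group raw fraud alerts per account
# and produce one summary report per account for a human investigator.
from collections import defaultdict

def summarize(alerts: list[dict]) -> str:
    # Placeholder for an LLM summarization call over the grouped alerts.
    kinds = sorted({a["type"] for a in alerts})
    return f"{len(alerts)} alert(s): {', '.join(kinds)}"

def triage(alerts: list[dict]) -> dict[str, str]:
    # Bucket alerts by account, then summarize each bucket.
    by_account: dict[str, list[dict]] = defaultdict(list)
    for alert in alerts:
        by_account[alert["account"]].append(alert)
    return {acct: summarize(items) for acct, items in by_account.items()}
```

The investigator then reviews one condensed report per account instead of every raw alert, which is where the quoted time savings come from.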

Verafin specializes in anti-money laundering efforts and is an AWS customer using this approach. 

“They’ve shown they can save 80% to 90% of the time it takes to investigate an alert,” he said. 


Another automation area is “middle office processing,” including customer inquiries to a brokerage for trade confirmation. 

One AWS client, brokerage Jefferies & Co., has set up “agentic AI” where the AI model “would actually go through their inbox, saying, this is a request for confirming a price” of a securities trade. 

That agent passes the request to another agent to “go out and query a database to get the actual trade price for the customer, and then generate the email” that gets sent to the customer.

“It’s not a huge process, it takes a human, maybe, ten, fifteen minutes to go do it themselves,” said Kain, “but you go from something that was minutes down to seconds through agents.” 

The same kinds of applications have been seen in the mortgage and insurance business, he said, and in energy, with Canada’s Total Energy Services confirming contracts. 


One of the “most interesting” areas in finance for Gen AI, said Kain, is in investment research. 

Hedge fund Bridgewater uses LLMs to “basically take a freeform text [summary] about an investment idea, break that down into nine individual steps, and, for each step, kick off an [AI] agent that would go understand what data was necessary to answer the question, build a dependency map between the various trade-offs within an investment model, and then write the code to pull real-time data from the investment data store, and then generate a report like a first-year investment professional.”

Credit rating giant Moody’s is using agents to automate memos on credit ratings. However, credit ratings are usually for public companies because only these firms must report their financial data by law. Now, Moody’s peer, S&P Global, has been able to extend ratings to private companies by amassing snippets of data here and there. 

“There’s an opportunity to leverage large language models to scour what’s publicly available to do credit information on private companies,” said Kain. “That allows the private credit market to have better-anchored information to make private credit decisions.”

These represent “just amazing capabilities,” said Kain of the AI use cases.

Moving into new areas

AI is not yet automating many core functions of banks and other financial firms, such as calculating the most complex risk profiles for securities. But, “I think it’s closer than you think,” said Kain.

“It’s not where we’ve completely moved to trusting the machine to generate, let’s say, trading strategies or risk management approaches,” said Kain. 


However, the beginnings of forecasting and analysis are present. Consider the problem of calculating the impact of new US tariffs on the cash flows of companies. That is “happening today as partially an AI function,” he said. 

Financial firms “are definitely looking at data at scale, reacting to market movements, and then seeing how they should be updating their positions accordingly,” he explained. 

“That ability to ingest data at a global scale is something that I think is so much easier than it was a year ago,” because of Gen AI.

AWS customer Crypto.com, a trading platform for cryptocurrencies, can watch news feeds in 25 different languages using a combination of multiple LLMs. 

“They are able to identify which stories are about currencies, and tell if that is a positive or negative signal, and then aggregate that as inputs to their customers,” for trading purposes. As long as two of the three models monitoring the feeds agreed, “they had conviction that there was a signal there” of value. 

“So, we’re seeing that use of generative AI to check generative AI, if you will, to provide confidence at scale.”


Those human-centered tasks that remain at the core of banking, insurance, and trading are probably the most valuable in the industry, including the most complex functions, such as creating new derivative products or underwriting initial public offerings. 

Those are areas that will enjoy the “premium” for creativity, in Kain’s view. Yet how much longer these tasks remain centered on human creation is an open question. 

“I wish I had a crystal ball to say how much of that is truly automatable in the next few years,” said Kain. 

“But given the tremendous adoption [of AI], and the ability for us to process data so much more effectively than even just two, three years ago, it’s an exciting time to see where this will all end up.”





