
Tools & Platforms

Perplexity, ChatGPT, and Gemini: The AI Trio Revolutionizing Productivity at 2 PM



AI’s Latest Surge: Reinventing Afternoon Productivity


Edited By

Mackenzie Ferguson

AI Tools Researcher & Implementation Consultant

AI tools like Perplexity, ChatGPT, and Gemini are gaining traction as the go-to solutions for boosting productivity, especially during the notorious post-lunch slump. This surge is part of a larger trend of AI integration in daily work routines, offering innovative ways to enhance efficiency and creativity when energy levels dip.


Introduction

In the rapidly evolving realm of artificial intelligence, recent developments underscore a growing momentum that echoes far beyond the tech industry. A particularly notable trend is the pace of advancement among prominent AI models and platforms such as Perplexity, ChatGPT, and Gemini. According to an in-depth report from Inkl, these platforms are seeing a substantial surge in both capability and market interest.

The public’s reaction to the growth of AI is mixed, reflecting both excitement and concern. Some worry about the implications of AI on employment and privacy, while others are enthusiastic about the possibilities AI provides in enhancing productivity and innovation. Experts argue that the key to maximizing AI’s benefits lies in responsible development and implementation. Ensuring that AI systems are transparent, equitable, and aligned with human values is essential. For insights into how these technologies are reshaping our world, consider the perspectives shared on inkl.

Looking towards the future, AI’s influence is poised to only grow stronger across various domains, from healthcare and education to finance and governance. Experts predict that AI will increasingly become integrated into our daily lives, offering unprecedented efficiency and capabilities. However, this comes with the need for robust ethical guidelines and regulatory frameworks to ensure that the growth of AI is sustainable and beneficial for society as a whole. For a comprehensive analysis of these trends and their potential impacts, one can explore the detailed discussions available on inkl.

The Rise of AI Plans at $200 per Month

The landscape of AI technology is constantly evolving, and recent developments have highlighted the increasing trend towards premium AI subscription plans, priced at around $200 per month. This pricing model is being adopted by various leading AI companies, including Perplexity, ChatGPT, and Gemini, as part of their strategic growth plans. These companies are trying to balance the cost of advanced research and development with accessibility to cutting-edge AI tools for businesses and tech enthusiasts alike. More details on this trend can be found in recent news reports that discuss why these firms have chosen to elevate their monthly fees.

The premium pricing for AI services is mainly driven by the rich features and enhanced capabilities that these services offer. As AI models become more sophisticated, the cost of maintaining the infrastructure and the continuous improvements they undergo have naturally led to higher subscription costs. For instance, companies like Perplexity and ChatGPT often introduce advanced APIs, improved natural language processing capabilities, and more robust security measures, which add considerable value to their services and justify the $200 monthly fee. Interested readers can explore the specifics of these advancements through trusted sources such as this article.

Public reactions to the rise in subscription costs have been mixed. Some users express concern over the affordability and accessibility of these technologies, worrying that smaller businesses or individual users might find it difficult to justify the expense. On the other hand, there are users who believe that the investment is well worth it considering the benefits and efficiencies gained from utilizing cutting-edge AI technologies. The varying opinions on this topic reflect the broader debate about the future role of AI in society, as discussed in expert analyses.

Looking ahead, the implications of these $200 per month AI plans could be profound. As AI becomes deeply integrated into various sectors such as healthcare, finance, and education, the accessibility and scalability of these services will likely impact global economic structures and societal norms. The decision by companies like Gemini to partake in this pricing strategy may also influence emerging AI startups and established tech giants alike, shaping the future trajectory of AI development. For a deeper exploration into how these changes might unfold, please reference this comprehensive overview.

Why Perplexity, ChatGPT, and Gemini are Gaining Popularity

The rise in popularity of AI models like Perplexity, ChatGPT, and Gemini can be attributed to several key factors that resonate with both industry experts and the general public. These innovative AI systems have demonstrated significant capabilities in processing and generating human-like text, which has pushed them to the forefront of technological advancements. The increasing interest in these technologies is reflected in the news, such as the article featured on inkl.com, which explores the multifaceted reasons behind their surge in prominence.

Experts suggest that the demand for AI-driven solutions in communication, problem-solving, and creative content generation has sparked the growing popularity of these models. ChatGPT, known for its conversational abilities, has become a cornerstone in customer service and personal assistance applications. Similarly, models like Perplexity and Gemini have found their niches by excelling in areas such as data analysis and complex computations. These functionalities meet the evolving needs of various industries, from healthcare to finance, enhancing the tools available for both professional and personal use.

Public reactions to these AI technologies have been overwhelmingly positive, with users appreciating the efficiency and intelligence that these systems bring to everyday tasks. As highlighted in the inkl.com article, these tools are not just seen as novelties but as essential components that integrate into people’s daily lives. This acceptance is further driven by the ease with which these systems can be implemented to work alongside human operators, fostering collaboration between man and machine.

Looking towards the future, the implications of Perplexity, ChatGPT, and Gemini’s growing popularity are vast. As AI continues to evolve, these models are expected to become even more sophisticated, potentially leading to more personalized user experiences and broader adoption across various sectors. The inkl.com news piece suggests that as more organizations recognize the value of AI, investment and innovation in this field will continue to rise, paving the way for even more advanced AI solutions.

Related Events in the AI Industry

The AI industry is witnessing a surge of events that are shaping its future trajectory. Recently, the spotlight has been on platforms like Perplexity, ChatGPT, and Gemini, all of which are making significant strides in AI’s capabilities and accessibility. These platforms are becoming central to the development of AI-driven technologies, offering innovative ways to interact with and harness AI power for various industries. According to a recent article, the strategic advancements of these platforms highlight the growing competition and their ambitious plans in the AI sector (source).

The rise of AI platforms has led to a myriad of events that reflect the industry’s dynamism. Conferences and symposiums have sprung up worldwide, drawing experts and enthusiasts keen to understand the next breakthroughs in AI. Notably, companies that are investing heavily in AI research and development are leading these events, showcasing their latest innovations and discussing the potential implications on society and various economic sectors. Such gatherings offer a glimpse into how AI is poised to redefine everything from customer service to complex problem-solving. As an article pointed out, the competitive atmosphere among AI innovators is pushing the boundaries of what’s possible, leading to exciting developments and new opportunities for collaboration (source).

Public reaction to these events has generally been positive, with an increasing number of individuals and businesses recognizing the potential benefits of AI technologies. However, alongside the enthusiasm, there are growing discussions about ethical considerations and the need for robust governance models to manage AI’s rapid growth. Industry leaders are calling for a balanced approach to innovation and regulation to ensure that the evolution of AI continues to align with societal values and ethical standards. The discourse around these issues is gaining momentum, as evidenced by a report highlighting the importance of adapting to AI’s transformative power responsibly (source).

Expert Opinions on the AI Subscription Trend

The demand for AI subscription services has been increasing rapidly, reflecting a significant shift in how consumers and businesses perceive artificial intelligence’s value. Experts suggest that this trend is largely influenced by the growing recognition of AI as a crucial driver of efficiency and innovation across various industries. As such, companies like Perplexity, ChatGPT, and Gemini are positioning themselves as leaders in this burgeoning market by offering premium packages that promise enhanced capabilities and exclusive features.

The broader implications of these high-cost AI services are significant, with potential effects rippling across various sectors. Critically, there are concerns about inequality in access to advanced AI technologies, as only well-funded companies might afford these premium services. Such a dynamic could lead to a widening gap between tech-savvy giants and smaller businesses. Furthermore, customers accustomed to using lower-cost or free versions of these AI tools are now faced with the difficult decision of either upgrading their plans or seeking alternative solutions. This shift comes at a time when the demand for sophisticated AI capabilities is higher than ever, according to insights from Inkl News, making it a point of contention in both business and academic circles.

The debate over the pricing of AI services reveals broader concerns about the market dynamics driving these costs. Stakeholders are actively discussing how competition and innovation should shape the accessibility and pricing of AI technologies. A portion of the public fears that escalating prices could stifle potential innovation by limiting who can afford to work with the most advanced tools. Meanwhile, some industry experts suggest that these prices reflect the significant research and development investments required to maintain cutting-edge AI models, as discussed in Inkl News. Such arguments contribute to the ongoing debate about balancing profitability with accessibility.

Consumers might also perceive higher AI prices as indicative of greater value and enhanced capabilities. Consequently, this perception might drive further demand, encouraging a cycle of continuous price hikes as seen with Perplexity, ChatGPT, and Gemini. These shifts could reflect broader trends in tech consumption, where customers are willing to pay a premium for superior technological offerings. However, it’s essential for companies to balance this with the risk of alienating a portion of the market that is price sensitive, thereby necessitating diverse pricing strategies to accommodate varying consumer needs.

Moreover, the upward trajectory in AI costs could also push businesses to adopt more versatile AI tools that can handle multiple functions, thereby justifying their expense. In this context, AI providers might focus on developing more comprehensive solutions that address a broader range of requirements, aligning with business goals such as efficiency and cost reduction. Nevertheless, this shift might necessitate substantial initial investments, thus impacting the budgeting and financial planning of companies looking to integrate AI into their operations.

As public and expert opinions swirl around the implications of these advancements, it becomes increasingly clear that AI will continue to challenge existing norms and provoke critical discourse. With stakeholders from various sectors voicing both anticipation and apprehension, the dialogue surrounding AI’s trajectory is as dynamic as the technology itself. Observers and innovators are keenly aware that the path forward will require careful consideration of ethical, societal, and economic factors to ensure that these powerful tools benefit humanity as a whole.




Searching for boundaries in the AI jungle



Stamatis Gatirdakis, co-founder and president of the Ethikon Institute, still remembers the first time he used ChatGPT. It was the fall of 2022 and a fellow student in the Netherlands sent him the link to try it out. “It made a lot of mistakes back then, but I saw how it was improving at an incredible rate. From the very first tests, I felt that it would change the world,” he tells Kathimerini. Of course, he also identified some issues, mainly legal and ethical, that could arise early on, and last year, realizing that there was no private entity that dealt exclusively with the ethical dimension of artificial intelligence, he decided to take the initiative.

He initially turned to his friends, young lawyers like him, engineers and programmers with similar concerns. “In the early days, we would meet after work, discussing ideas about what we could do,” recalls Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters. Her master’s degree, which she earned in the Netherlands in 2019, was on the ethics and regulatory aspects of new technologies. “At that time, the European Union’s white paper on artificial intelligence had just been released, which was a first, hesitant step. But even though technology is changing rapidly, the basic ethical dilemmas and how we legislate remain constant. The issue is managing to balance innovation with citizen protection,” she explains.

Together with three other Greeks (Apostolos Spanos, Michael Manis and Nikos Vadivoulis), they made up the institute’s founding team, and sought out colleagues abroad with experience in these issues. Thus, Ethikon was created – a nonprofit company that does not provide legal services, but implements educational, research and social awareness actions on artificial intelligence.

Stamatis Gatirdakis, co-founder and president of the Ethikon Institute.

Copyrights

One of the first issues they addressed was copyright. “In order not to stop the progress of technology, exceptions were initially made so that these generative artificial intelligence models could use online content for training purposes, without citing the source or compensating the creators,” explains Gatirdakis, adding that this resulted in copyright being sidelined. “The battle between creators and the big tech giants has been lost. But because companies don’t want creators against them, they have started making commercial agreements, whereby every time creators’ data is used to produce answers, the creators receive percentages under an agreed model.”

Beyond compensation, another key question arises: Who is ultimately the creator of a work produced through artificial intelligence? “There are already conflicting court decisions. In the US, they argue that artificial intelligence cannot produce an ‘original’ work and that the work belongs to the search engine companies,” says Voukelatou. A typical example is the comic book ‘Zarya of the Dawn,’ authored by artist and artificial intelligence (AI) consultant Kris Kashtanova, with images generated through the AI platform Midjourney. The US Copyright Office rejected the copyright application for the images in her book when it learned that they were created exclusively by artificial intelligence. In China, by contrast, courts in corresponding cases have ruled that because the user gives the exact instructions, he or she is the creator.

Personal data

Another crucial issue is the protection of personal data. “When we upload notes or files, what happens to all this content? Does the algorithm learn from them? Does it use them elsewhere? Presumably not, but there are still no safeguards. There is no case law, nor a clear regulatory framework,” says Voukelatou, who mentions the loopholes that companies exploit to overcome obstacles with personal data. “Like the application that transforms your image into a cartoon in the style of the famous Studio Ghibli. Millions of users gave consent for their image to be processed and so this data entered the systems and trained the models. If a similar image is subsequently produced, it no longer belongs to the person who first uploaded it. And this part is legally unregulated.”

The problem, they explain, is that the development of these technologies is mainly taking place in the United States and China, which means that Europe remains on the sidelines of a meaningful discussion. The EU regulation on artificial intelligence (AI Act), which entered into force in the summer of 2024, is the first serious attempt to set a regulatory framework. Members of Ethikon participated in the consultation on the regulation and specifically focused on the categorization of artificial intelligence applications based on the level of risk. “We supported with examples the prohibition of practices such as ‘social scoring’ adopted by China, where citizens are evaluated in real time through surveillance cameras. This approach was incorporated and the regulation explicitly prohibits such practices,” says Gatirdakis, who participated in the consultation.

“The final text sets obligations and rules. It also provides for strict fines depending on turnover. However, we are in a transition period and we are all waiting for further guidelines from the European Union. It is assumed that it will be fully implemented in the summer of 2026. However, there are already delays in the timetable and in the establishment of the supervisory authorities,” the two experts said.

Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters.

The team’s activities

Beyond consultation, the Ethikon team is already developing a series of actions to raise awareness among users, whether they are business executives or students growing up with artificial intelligence. The team’s executives created a comic inspired by the Antikythera Mechanism that explains in a simple way the possibilities but also the dangers of this new technology. They also developed a generative AI engine based exclusively on sources from scientific libraries; however, its use is expensive and they are currently limiting it to pilot educational actions. They recently organized a conference in collaboration with the Laskaridis Foundation and published an academic article on March 29 exploring the legal framework for strengthening copyright protection.

In the article, titled “Who Owns the Output? Bridging Law and Technology in LLMs Attribution,” they analyze, among other things, the specific tools and techniques that allow the detection of content generated by artificial intelligence and its connection to the data used to train the model or the user who created it. “For example, a digital signature can be embedded in texts, images or videos generated by AI, invisible to the user, but recognizable with specific tools,” they explain.
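The invisible-signature idea described here can be illustrated with zero-width Unicode characters. This is a toy sketch only: real attribution schemes embed statistically robust watermarks in the model’s sampling process rather than literal hidden characters, and the bit pattern below is purely hypothetical.

```python
# Toy illustration of an invisible text signature: a bit string encoded
# with zero-width Unicode characters that a reader cannot see but a
# detection tool can recover.

ZW = {"0": "\u200b", "1": "\u200c"}  # zero-width space / zero-width non-joiner

def embed(text: str, sig_bits: str) -> str:
    """Append an invisible signature (a string of '0'/'1' bits) to the text."""
    return text + "".join(ZW[b] for b in sig_bits)

def detect(text: str) -> str:
    """Recover the signature bits; unmarked text yields an empty string."""
    rev = {v: k for k, v in ZW.items()}
    return "".join(rev[ch] for ch in text if ch in rev)

marked = embed("Quarterly results look strong.", "1011")
print(detect(marked))   # the hidden bits come back out
print(marked == "Quarterly results look strong.")  # differs in bytes, renders the same
```

A detection tool with the matching table can flag such text even though the marked and unmarked versions look identical on screen.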

The Ethikon team has already begun writing a second, more technical academic article, while closely monitoring technological developments internationally. “In 2026, we believe that we will be much more concerned with the energy and environmental footprint of artificial intelligence,” says Gatirdakis. “Training and operating models requires enormous computing power, resulting in excessively high energy and water consumption for cooling data centers. The concern is not only technical or academic – it touches the core of the ethical development of artificial intelligence. How do we balance innovation with sustainability?” At the same time, he explains, serious issues of truth management and security have already arisen. “We are entering a period where we will not be able to easily distinguish whether what we see or hear is real or fabricated,” he continues.

In some countries, the adoption of technology is happening at breakneck speed. In the United Arab Emirates, an artificial intelligence system has been developed that drafts laws and monitors the implementation of laws. At the same time, OpenAI announced a partnership with the iPhone designer to launch a new device that integrates artificial intelligence with voice, visual and personal interaction in late 2026. “A new era seems to be approaching, in which artificial intelligence will be present not only on our screens but also in the natural environment.” 






How to start a career in the age of AI – Computerworld



AI will boost the value of human creativity in financial services, says AWS



shomos uddin/Getty Images

Financial services firms are making early gains from artificial intelligence (AI), which is not surprising given that finance is historically an industry that embraces new technologies aggressively.

Also: The AI complexity paradox: More productivity, more responsibilities

One surprising outcome is that AI might end up making the most critical functions of banking, insurance, and trading, namely the creative functions that require human insight, even more valuable.

“What happens is there’s going to be a premium on creativity and judgment that goes into the process,” said John Kain, who is head of market development efforts in financial services for AWS, in an interview with ZDNET via Zoom. 

Also: AI usage is stalling out at work from lack of education and support

By process, he meant those areas that are most advanced, and presumably hardest to automate, such as a bank’s risk calculations.


“So much of what’s undifferentiated will be automated,” said Kain. “But what that means is what actually differentiates the business and the ability to serve customers better, whether that’s better understanding products or risk, or coming up with new products, from a financial perspective, the pace of that will just go so much more quickly in the future.”

Amazon formed its financial services unit 10 years ago, the first time the cloud giant took an industry-first approach.

For eight years, Kain has helped bring the cloud giant’s tools to banks, insurers, and hedge funds. That approach includes both moving workloads to the cloud and implementing AI, including the large language models (LLMs) of generative AI (Gen AI), in his clients’ processes.

“If you look at what we’re trying to do, we’re trying to provide our customers an environment where, from a security, compliance, and governance perspective, we give them a platform that ticks the boxes for everything that’s table stakes for financial services,” said Kain, “but also gives them the access to the latest technologies, and choice in being able to bring the best patterns to the industry.”

Also: Are AI subscriptions worth it? Most people don’t seem to think so, according to this study

Kain, who started his career in operations on the trading floor, and worked at firms such as JP Morgan Chase and Nasdaq, had many examples of gains through the automation of financial functions, such as customer service and equity research.

Early use of AWS by financials included things such as back-testing portfolios of investments to predict performance, the kind of workload that is “well-suited to cloud” because it requires computer simulations “to really work well in parallel,” said Kain.

“That ability to be able to do research much more quickly in AWS meant that investment research firms could quickly see those benefits,” he said. “You’ve seen that repeated across the industry regardless of the firm.”

Taking advantage of the tech

Early implementations of Gen AI are showing many commonalities across firms. “They’ll be repeatable patterns, whether it’s document processing that could show up as mortgage automation with PennyMac, or claims processing with The Travelers Companies.”

Such processes come with an extra degree of sensitivity, Kain said, given the regulated status of finance. “Not only do they have a priority on resilience as well as security, they have evidentiary requirements to a far greater degree than any other industry because the regulations on financial services are typically very prescriptive,” he explained. “There’s a much higher bar in the industry.”

Also: Amazon’s Andy Jassy says AI will take some jobs but make others more ‘interesting’

Finance has been an early adopter of an AI-based technology invented at AWS, originally called Zelkova, and that is now more generally referred to as “automated reasoning.” The technology combines machine-learning AI with mathematical proofs to formally validate security measures, such as who has access to resources in a bank. 

“It was an effort to allow customers to prove that the security controls they put in place were knowably effective,” said Kain. “That was important for our financial services customers,” including hedge fund Bridgewater and other early adopters.

Now, automated reasoning is also being employed to fix Gen AI.

“You’re seeing that same approach now being taken to improve the performance of large language models, particularly with hallucination reduction,” he said. 

To mitigate hallucinations, or “confabulations,” as the errors in Gen AI are more properly known, AWS’s Bedrock platform for running machine learning programs uses retrieval-augmented generation (RAG). 

The RAG approach involves connecting an LLM to a source of validated information, such as a database. The source serves as a gold standard to “anchor” the models to limit error.
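The anchoring step can be sketched in a few lines. This is a minimal illustration of the RAG pattern, not AWS Bedrock’s actual API: the “validated source” is a toy in-memory dictionary, retrieval is a keyword match rather than a vector search, and the `llm` callable is a hypothetical stand-in for a real model.

```python
# Minimal sketch of retrieval-augmented generation (RAG): fetch validated
# passages first, then constrain the model's answer to that context.

def retrieve(query: str, store: dict[str, str]) -> list[str]:
    """Return validated passages whose keys share a term with the query."""
    terms = set(query.lower().replace("?", "").split())
    return [text for key, text in store.items()
            if terms & set(key.lower().split())]

def answer_with_rag(query: str, store: dict[str, str], llm=None) -> str:
    """Anchor the answer to retrieved passages to limit hallucination."""
    context = retrieve(query, store)
    prompt = ("Answer using ONLY this context:\n" + "\n".join(context)
              + f"\n\nQuestion: {query}")
    # In a real system, llm(prompt) would call the model; without one,
    # return the grounded prompt so the shape of the approach is visible.
    return llm(prompt) if llm else prompt

store = {"trades settle": "Equity trades settle on T+1 in the US."}
print(answer_with_rag("When do trades settle?", store))
```

The key design point is that the model only sees passages drawn from the gold-standard store, so its output can be checked against a known source.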

Also: Cisco rolls out AI agents to automate network tasks at ‘machine speed’ – with IT still in control

Once anchored, automated reasoning is applied to “actually allow you to create your own policies that will then give you an extra level of security and detail to make sure that the responses that you’re providing [from the AI model] are accurate.”

The RAG approach, and automated reasoning, are increasingly leading clients in financial services to implement “smaller, domain-specific tasks” in AI that can be connected to a set of specific data, he said. 

In surveys of enterprise adoption, financial firms’ first Gen AI use cases include automating call centers. “From a large language model perspective, there are actually a number of use cases that we’ve seen the industry achieve almost immediate ROI [return on investment],” said Kain. “The foremost is customer interaction, particularly at the call center.”

AWS customers, including Principal Financial, Ally Financial, Rocket Mortgage, and crypto-currency exchange Coinbase, have all exploited Gen AI to “take those [customer] calls, transcribe them in real time, and then provide information to the agents that provide the context of why customers are calling, plus their history, and then guide them [the human call agents] to the right response.” 

Coinbase used that approach to automate 64% of support calls, up from 19% two years ago, with the aim of reaching 90% in the future.


Coinbase presents its findings at AWS Summit.

Tiernan Ray/ZDNET

Finding fresh opportunities

Another area where automation is being used is in monitoring alerts, such as fraud warnings. It’s a bit like AI in cybersecurity, where AI handles a flood of signals that would overwhelm a human analyst or investigator.

Fraud alerts and other warnings “generate a large number of false positives,” said Kain, which means a lot of extra work for fraud teams and other financial staff to “spend a good chunk of their day looking at things that aren’t actually fraud.” 

Instead, “customers can use large language models to help accelerate the investigation process” by summarizing the alerts, and then create a summary report to be given to the human investigator. 

Verafin specializes in anti-money laundering efforts and is an AWS customer using this approach. 

“They’ve shown they can save 80% to 90% of the time it takes to investigate an alert,” he said. 

Also: Think DeepSeek has cut AI spending? Think again

Another automation area is “middle office processing,” including customer inquiries to a brokerage for trade confirmation. 

One AWS client, brokerage Jefferies & Co., has set up “agentic AI” where the AI model “would actually go through their inbox, saying, this is a request for confirming a price” of a securities trade. 

That agent passes the request to another agent to “go out and query a database to get the actual trade price for the customer, and then generate the email” that gets sent to the customer.

“It’s not a huge process, it takes a human, maybe, ten, fifteen minutes to go do it themselves,” said Kain, “but you go from something that was minutes down to seconds through agents.” 
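The two-step hand-off described above can be sketched as a pair of functions, one per agent. Everything here is hypothetical: the order-id format, the in-memory trade store, and the reply template are illustrative stand-ins, not Jefferies’ actual workflow.

```python
# Sketch of an agentic hand-off: agent 1 triages the inbox message and
# extracts the order id; agent 2 queries a (toy) trade store and drafts
# the confirmation email.

TRADES = {"ORD-1042": 101.25}  # toy trade database: order id -> executed price

def classify_request(email_body: str):
    """First agent: detect a price-confirmation request and pull the order id."""
    for token in email_body.split():
        if token.startswith("ORD-"):
            return token.rstrip(".,")
    return None  # not a confirmation request

def draft_confirmation(order_id: str) -> str:
    """Second agent: look up the trade price and generate the reply."""
    price = TRADES[order_id]
    return f"Confirming {order_id}: executed at {price:.2f}."

order = classify_request("Please confirm the price for ORD-1042.")
print(draft_confirmation(order))
```

The point of the split is that each agent does one narrow, checkable step, which is what turns a minutes-long manual task into seconds.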

The same kinds of applications have been seen in the mortgage and insurance business, he said, and in energy, with Canada’s Total Energy Services confirming contracts. 

Also: You’ve heard about AI killing jobs, but here are 15 new ones AI could create

One of the “most interesting” areas in finance for Gen AI, said Kain, is in investment research. 

Hedge fund Bridgewater uses LLMs to “basically take a freeform text [summary] about an investment idea, break that down into nine individual steps, and, for each step, kick off an [AI] agent that would go understand what data was necessary to answer the question, build a dependency map between the various trade-offs within an investment model, and then write the code to pull real-time data from the investment data store, and then generate a report like a first-year investment professional.”

Credit rating giant Moody’s is using agents to automate memos on credit ratings. However, credit ratings are usually for public companies because only these firms must report their financial data by law. Now, Moody’s peer, S&P Global, has been able to extend ratings to private companies by amassing snippets of data here and there. 

“There’s an opportunity to leverage large language models to scour what’s publicly available to do credit information on private companies,” said Kain. “That allows the private credit market to have better-anchored information to make private credit decisions.”

These represent “just amazing capabilities,” said Kain of the AI use cases.

Moving into new areas

AI is not yet automating many core functions of banks and other financial firms, such as calculating the most complex risk profiles for securities. But, “I think it’s closer than you think,” said Kain.

“It’s not where we’ve completely moved to trusting the machine to generate, let’s say, trading strategies or risk management approaches,” said Kain. 

Also: 5 ways you can plug the widening AI skills gap at your business

However, the beginnings of forecasting and analysis are present. Consider the problem of calculating the impact of new US tariffs on the cash flows of companies. That is “happening today as partially an AI function,” he said. 

Financial firms “are definitely looking at data at scale, reacting to market movements, and then seeing how they should be updating their positions accordingly,” he explained. 

“That ability to ingest data at a global scale is something that I think is so much easier than it was a year ago,” because of Gen AI.

AWS customer Crypto.com, a trading platform for cryptocurrencies, can watch news feeds in 25 different languages using a combination of multiple LLMs. 

“They are able to identify which stories are about currencies, and tell if that is a positive or negative signal, and then aggregate that as inputs to their customers,” for trading purposes. As long as two of the three models monitoring the feeds agreed, “they had conviction that there was a signal there” of value. 

“So, we’re seeing that use of generative AI to check generative AI, if you will, to provide confidence at scale.”
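The two-of-three agreement rule described above is a simple majority vote across independent model outputs. The sketch below is illustrative; the labels and quorum are assumptions, not Crypto.com’s actual configuration.

```python
# Hedged sketch of a "two of three models agree" consensus check on
# sentiment labels produced by independent LLMs for the same news story.
from collections import Counter

def consensus_signal(labels, quorum=2):
    """Return the majority label if at least `quorum` models agree, else None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= quorum else None

# Three models classify the same story about a currency:
print(consensus_signal(["positive", "positive", "negative"]))
print(consensus_signal(["positive", "negative", "neutral"]))  # no quorum
```

Requiring agreement between independent models is a cheap way to trade a little coverage for much higher confidence in each emitted signal.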

Also: Phishers built fake Okta and Microsoft 365 login sites with AI – here’s how to protect yourself

Those human-centered tasks that remain at the core of banking, insurance, and trading are probably the most valuable in the industry, including the most complex functions, such as creating new derivative products or underwriting initial public offerings. 

Those are areas that will enjoy the “premium” for creativity, in Kain’s view. Yet how much longer these tasks remain centered on human creation is an open question. 

“I wish I had a crystal ball to say how much of that is truly automatable in the next few years,” said Kain. 

“But given the tremendous adoption [of AI], and the ability for us to process data so much more effectively than even just two, three years ago, it’s an exciting time to see where this will all end up.”




