“In the past, artificial intelligence (AI) implementation required all the data to be gathered in one place. NetApp provides technology that enables AI to run right on scattered data.”
Global storage company NetApp, which competes with Dell and Hitachi, has chosen software, not hardware, as its competitive edge for AI. After all, how do companies differentiate themselves through the storage equipment they use to hold data?
Yoo Jae-sung, CEO of Korea NetApp, recently met with Maeil Business and emphasized, “NetApp is a solution that allows data to be accessed and managed quickly under any conditions, whether in the cloud or in an on-premises environment.”
What he introduced is ONTAP, a storage operating system (OS) developed by NetApp. It can identify in one place not only data stored on NetApp storage, but also data in cloud and on-premises environments such as Amazon Web Services (AWS) and Microsoft (MS) Azure, and move that data freely. For example, ONTAP enables companies to transfer data generated in their own environment to cloud platforms such as AWS for AI training.
When asked how this differs from storing all data in the cloud from the beginning, CEO Yoo said, “You can start managing data in the cloud, but depending on the situation, you may have to move data to an on-premises environment instead. In some cases, it is difficult to store very sensitive data in the cloud. The two environments complement each other.”
Meanwhile, as the importance of data grows, cyberattacks targeting it are also increasing. Preparing for data-targeting attacks such as ransomware is another challenge for storage companies. NetApp is focusing on upgrading the AI-based ransomware detection in its ONTAP solution.
As a solution that supports data management, ONTAP learns patterns while monitoring all data entering a company’s storage, and when suspicious activity is found, it captures a point-in-time snapshot so that data can be rolled back, like rewinding a movie. CEO Yoo said, “What is important in security is zero trust,” and emphasized, “Since even internal users should not be blindly trusted with data movement, a user can also be blocked immediately when a problem occurs.”
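The detect-and-snapshot idea Yoo describes can be illustrated with a toy sketch. NetApp's actual detector is not public, so the baseline model, threshold, and snapshot bookkeeping below are invented for illustration: the monitor learns a normal rate of file changes, flags a burst that deviates sharply from it, and returns the last clean snapshot as the restore point.

```python
import statistics

class SnapshotMonitor:
    """Toy illustration of snapshot-based ransomware detection.

    Hypothetical simplification: flag a write burst whose file-change
    rate deviates sharply from the learned baseline, and report the
    last snapshot taken before the anomaly so data can be rolled back.
    """

    def __init__(self, threshold=3.0):
        self.baseline = []          # files changed per interval, normal traffic
        self.threshold = threshold  # deviations from the mean that count as suspicious
        self.snapshots = []         # interval indices of point-in-time copies

    def take_snapshot(self, interval):
        self.snapshots.append(interval)

    def observe(self, interval, files_changed):
        """Return the clean restore point if this interval looks like an attack."""
        if len(self.baseline) >= 5:
            mean = statistics.mean(self.baseline)
            stdev = statistics.pstdev(self.baseline) or 1.0
            if (files_changed - mean) / stdev > self.threshold:
                # suspicious burst: return the last snapshot before it
                return max(s for s in self.snapshots if s < interval)
        self.baseline.append(files_changed)
        return None
```

A ransomware-style burst of, say, 500 changed files against a baseline of roughly a dozen per interval trips the threshold and yields the preceding snapshot for rollback.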
CEO Yoo has been leading Korea NetApp for half a year, since he was appointed as its new CEO in January this year. He started as a sales representative at MS Korea and rose to CEO, and he has experience at various global information technology (IT) companies, including VMware as well as MS.
The areas CEO Yoo is focusing on this year are the public and financial markets. NetApp has already secured big customers, including domestic telecommunications companies and major conglomerates such as Shinhan Financial Group. “Most major Korean companies are NetApp customers, but we have been relatively weak in the public and financial sectors,” he said. “We plan to invest more in this field in the future.”
Stamatis Gatirdakis, co-founder and president of the Ethikon Institute, still remembers the first time he used ChatGPT. It was the fall of 2022 and a fellow student in the Netherlands sent him the link to try it out. “It made a lot of mistakes back then, but I saw how it was improving at an incredible rate. From the very first tests, I felt that it would change the world,” he tells Kathimerini. Of course, he also identified some issues, mainly legal and ethical, that could arise early on, and last year, realizing that there was no private entity that dealt exclusively with the ethical dimension of artificial intelligence, he decided to take the initiative.
He initially turned to his friends, young lawyers like him, engineers and programmers with similar concerns. “In the early days, we would meet after work, discussing ideas about what we could do,” recalls Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters. Her master’s degree, which she earned in the Netherlands in 2019, was on the ethics and regulatory aspects of new technologies. “At that time, the European Union’s white paper on artificial intelligence had just been released, which was a first, hesitant step. But even though technology is changing rapidly, the basic ethical dilemmas and how we legislate remain constant. The issue is managing to balance innovation with citizen protection,” she explains.
Together with three other Greeks (Apostolos Spanos, Michael Manis and Nikos Vadivoulis), they made up the institute’s founding team, and sought out colleagues abroad with experience in these issues. Thus, Ethikon was created – a nonprofit company that does not provide legal services, but implements educational, research and social awareness actions on artificial intelligence.
Copyrights
One of the first issues they addressed was copyright. “In order not to stop the progress of technology, exceptions were initially made so that these generative artificial intelligence models could use online content for training purposes, without citing the source or compensating the creators,” explains Gatirdakis, adding that this resulted in copyright being sidelined. “The battle between creators and the big tech giants has been lost. But because the companies don’t want creators against them, they have started making commercial agreements, whereby every time their data is used to produce answers, the creators receive a percentage based on a calculated model.”
Beyond compensation, another key question arises: Who is ultimately the creator of a work produced through artificial intelligence? “There are already conflicting court decisions. In the US, they argue that artificial intelligence cannot produce an ‘original’ work and that the work belongs to the search engine companies,” says Voukelatou. A typical example is the comic book ‘Zarya of the Dawn,’ authored by artist and artificial intelligence (AI) consultant Kris Kashtanova, with images generated through the AI platform Midjourney. The US Copyright Office rejected the copyright application for the images in her book when it learned that they were created exclusively by artificial intelligence. By contrast, in corresponding cases in China, courts ruled that because the user gives the exact instructions, he or she is the creator.
Personal data
Another crucial issue is the protection of personal data. “When we upload notes or files, what happens to all this content? Does the algorithm learn from them? Does it use them elsewhere? Presumably not, but there are still no safeguards. There is no case law, nor a clear regulatory framework,” says Voukelatou, who mentions the loopholes that companies exploit to overcome obstacles with personal data. “Like the application that transforms your image into a cartoon by the famous Studio Ghibli. Millions of users gave consent for their image to be processed and so this data entered the systems and trained the models. If a similar image is subsequently produced, it no longer belongs to the person who first uploaded it. And this part is legally unregulated.”
The problem, they explain, is that the development of these technologies is mainly taking place in the United States and China, which means that Europe remains on the sidelines of a meaningful discussion. The EU regulation on artificial intelligence (AI Act), which entered into force in the summer of 2024, is the first serious attempt to set a regulatory framework. Members of Ethikon participated in the public consultation on the regulation, focusing specifically on the categorization of artificial intelligence applications based on the level of risk. “We supported with examples the prohibition of practices such as ‘social scoring’ adopted by China, where citizens are evaluated in real time through surveillance cameras. This approach was incorporated and the regulation explicitly prohibits such practices,” says Gatirdakis, who participated in the consultation.
“The final text sets obligations and rules. It also provides for strict fines depending on turnover. However, we are in a transition period and we are all waiting for further guidelines from the European Union. It is expected to be fully implemented in the summer of 2026, though there are already delays in the timetable and in the establishment of the supervisory authorities,” the two experts said.
The team’s activities
Beyond consultation, the Ethikon team is already developing a series of actions to raise awareness among users, whether they are business executives or students growing up with artificial intelligence. The team’s executives created a comic inspired by the Antikythera Mechanism that explains in a simple way the possibilities but also the dangers of this new technology. They also developed a generative AI engine based exclusively on sources from scientific libraries – however, its use is expensive and they are currently limiting it to pilot educational actions. They recently organized a conference in collaboration with the Laskaridis Foundation and published an academic article on March 29 exploring the legal framework for strengthening copyright protection.
In the article, titled “Who Owns the Output? Bridging Law and Technology in LLMs Attribution,” they analyze, among other things, the specific tools and techniques that allow the detection of content generated by artificial intelligence and its connection to the data used to train the model or the user who created it. “For example, a digital signature can be embedded in texts, images or videos generated by AI, invisible to the user, but recognizable with specific tools,” they explain.
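The kind of invisible signature the article mentions can be sketched with a toy text watermark: a payload is encoded as zero-width Unicode characters that a reader never sees but a detection tool can extract. This is an illustration only, not the technique from the paper; production AI-watermarking schemes typically work at the token-sampling level rather than appending characters.

```python
# Zero-width characters: invisible when rendered, but preserved in the text.
ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def embed(text: str, signature: str) -> str:
    """Append the signature as invisible zero-width bits."""
    bits = "".join(f"{ord(c):08b}" for c in signature)
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + payload

def extract(text: str) -> str:
    """Recover the hidden signature with a tool that knows the scheme."""
    bits = "".join("1" if c == ZW1 else "0"
                   for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))
```

A watermarked paragraph looks identical on screen to the original, yet `extract` recovers the embedded marker, mirroring the "invisible to the user, but recognizable with specific tools" property described above.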
The Ethikon team has already begun writing a second – more technical – academic article, while closely monitoring technological developments internationally. “In 2026, we believe that we will be much more concerned with the energy and environmental footprint of artificial intelligence,” says Gatirdakis. “Training and operating models requires enormous computing power, resulting in excessively high energy and water consumption for cooling data centers. The concern is not only technical or academic – it touches the core of the ethical development of artificial intelligence. How do we balance innovation with sustainability?” At the same time, he explains, serious issues of truth management and security have already arisen. “We are entering a period where we will not be able to easily distinguish whether what we see or hear is real or fabricated,” he continues.
In some countries, the adoption of technology is happening at breakneck speed. In the United Arab Emirates, an artificial intelligence system has been developed that drafts laws and monitors their implementation. At the same time, OpenAI announced a partnership with the iPhone designer to launch a new device that integrates artificial intelligence with voice, visual and personal interaction in late 2026. “A new era seems to be approaching, in which artificial intelligence will be present not only on our screens but also in the natural environment.”
One surprising outcome is that AI might end up making the most critical functions of banking, insurance, and trading, or the creative functions that require human insights, even more valuable.
“What happens is there’s going to be a premium on creativity and judgment that goes into the process,” said John Kain, who is head of market development efforts in financial services for AWS, in an interview with ZDNET via Zoom.
By process, he meant those areas that are most advanced, and presumably hardest to automate, such as a bank’s risk calculations.
“So much of what’s undifferentiated will be automated,” said Kain. “But what that means is what actually differentiates the business and the ability to serve customers better, whether that’s better understanding products or risk, or coming up with new products, from a financial perspective, the pace of that will just go so much more quickly in the future.”
Amazon formed its financial services unit 10 years ago, the first time the cloud giant took an industry-focused approach.
For eight years, Kain has helped bring the cloud giant’s tools to banks, insurers, and hedge funds. That approach includes both moving workloads to the cloud and implementing AI, including the large language models (LLMs) of generative AI (Gen AI), in his clients’ processes.
“If you look at what we’re trying to do, we’re trying to provide our customers an environment where, from a security, compliance, and governance perspective, we give them a platform that ticks the boxes for everything that’s table stakes for financial services,” said Kain, “but also gives them the access to the latest technologies, and choice in being able to bring the best patterns to the industry.”
Kain, who started his career in operations on the trading floor and worked at firms such as JP Morgan Chase and Nasdaq, had many examples of gains through the automation of financial functions, such as customer service and equity research.
Early use of AWS by financial firms included things such as back-testing portfolios of investments to predict performance, the kind of workload that is “well-suited to cloud” because it requires computer simulations “to really work well in parallel,” said Kain.
“That ability to be able to do research much more quickly in AWS meant that investment research firms could quickly see those benefits,” he said. “You’ve seen that repeated across the industry regardless of the firm.”
Taking advantage of the tech
Early implementations of Gen AI are showing many commonalities across firms. “They’ll be repeatable patterns, whether it’s document processing that could show up as mortgage automation with PennyMac, or claims processing with The Travelers Companies.”
Such processes come with an extra degree of sensitivity, Kain said, given the regulated status of finance. “Not only do they have a priority on resilience as well as security, they have evidence that is in a far greater degree than any other industry because the regulations on financial services are typically very prescriptive,” he explained. “There’s a much higher bar in the industry.”
Finance has been an early adopter of an AI-based technology invented at AWS, originally called Zelkova, and that is now more generally referred to as “automated reasoning.” The technology combines machine-learning AI with mathematical proofs to formally validate security measures, such as who has access to resources in a bank.
“It was an effort to allow customers to prove that the security controls they put in place were knowably effective,” said Kain. “That was important for our financial services customers,” including hedge fund Bridgewater and other early adopters.
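The "knowably effective" idea can be illustrated in miniature: instead of spot-testing a few requests, exhaustively check every request the model can express, so the control is proven for all of them. Zelkova itself translates AWS IAM policies into logical formulas for a solver; the tiny principal set, policy function, and invariant below are invented purely for illustration of the check-all-states principle.

```python
from itertools import product

# Hypothetical finite access-control model for a bank.
PRINCIPALS = ["alice", "bob", "intern"]
ACTIONS = ["read", "write"]
RESOURCES = ["ledger", "reports"]

def policy_allows(principal, action, resource):
    """Invented policy: only alice may write the ledger; interns get nothing."""
    if resource == "ledger" and action == "write":
        return principal == "alice"
    return principal != "intern"

def verify(invariant):
    """Prove the invariant by checking every expressible request."""
    return all(invariant(p, a, r)
               for p, a, r in product(PRINCIPALS, ACTIONS, RESOURCES))

# Security control to prove: no one but alice can ever write the ledger.
ledger_locked = verify(
    lambda p, a, r: not (r == "ledger" and a == "write"
                         and policy_allows(p, a, r) and p != "alice"))
```

Because the state space is enumerated completely, a `True` result is a proof over the model rather than a sample of test cases, which is the distinction Kain draws between testing controls and proving them.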
Now, automated reasoning is also being employed to fix Gen AI.
“You’re seeing that same approach now being taken to improve the performance of large language models, particularly with hallucination reduction,” he said.
To mitigate hallucinations, or “confabulations,” as the errors in Gen AI are more properly known, AWS’s Bedrock platform for running machine learning programs uses retrieval-augmented generation (RAG).
The RAG approach involves connecting an LLM to a source of validated information, such as a database. The source serves as a gold standard to “anchor” the models to limit error.
Once anchored, automated reasoning is applied to “actually allow you to create your own policies that will then give you an extra level of security and detail to make sure that the responses that you’re providing [from the AI model] are accurate.”
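The anchoring step can be sketched in a few lines. The toy corpus, word-overlap retriever, and stubbed model call below are all hypothetical stand-ins; in Bedrock the validated source would be a managed knowledge base and `call_llm` an actual model invocation.

```python
# Minimal retrieval-augmented generation (RAG) sketch with invented data.
DOCUMENTS = {
    "rates": "The current savings rate is 4.1% APY.",
    "hours": "Branches are open 9am-5pm on weekdays.",
}

def retrieve(query: str) -> str:
    """Pick the document sharing the most words with the query."""
    words = set(query.lower().split())
    return max(DOCUMENTS.values(),
               key=lambda doc: len(words & set(doc.lower().split())))

def answer(query: str, call_llm) -> str:
    """Anchor the model to the retrieved passage before generating."""
    context = retrieve(query)
    prompt = f"Answer using ONLY this source:\n{context}\n\nQ: {query}"
    return call_llm(prompt)
```

The design point is that the model is constrained to a vetted passage instead of free-associating from its training data, which is what limits hallucination.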
The RAG approach, and automated reasoning, are increasingly leading clients in financial services to implement “smaller, domain-specific tasks” in AI that can be connected to a set of specific data, he said.
In surveys of enterprise use, financial firms’ first Gen AI use cases include automating call centers. “From a large language model perspective, there are actually a number of use cases that we’ve seen the industry achieve almost immediate ROI [return on investment],” said Kain. “The foremost is customer interaction, particularly at the call center.”
AWS customers, including Principal Financial, Ally Financial, Rocket Mortgage, and crypto-currency exchange Coinbase, have all exploited Gen AI to “take those [customer] calls, transcribe them in real time, and then provide information to the agents that provide the context of why customers are calling, plus their history, and then guide them [the human call agents] to the right response.”
Coinbase used that approach to automate 64% of support calls, up from 19% two years ago, with the aim of reaching 90% in the future.
Coinbase presents its findings at AWS Summit.
Finding fresh opportunities
Another area where automation is being used is in monitoring alerts, such as fraud warnings. It’s a bit like AI in cybersecurity, where AI handles a flood of signals that would overwhelm a human analyst or investigator.
Fraud alerts and other warnings “generate a large number of false positives,” said Kain, which means a lot of extra work for fraud teams and other financial staff to “spend a good chunk of their day looking at things that aren’t actually fraud.”
Instead, “customers can use large language models to help accelerate the investigation process” by summarizing the alerts, and then create a summary report to be given to the human investigator.
Verafin specializes in anti-money laundering efforts and is an AWS customer using this approach.
“They’ve shown they can save 80% to 90% of the time it takes to investigate an alert,” he said.
Another automation area is “middle office processing,” including customer inquiries to a brokerage for trade confirmation.
One AWS client, brokerage Jefferies & Co., has set up “agentic AI” where the AI model “would actually go through their inbox, saying, this is a request for confirming a price” of a securities trade.
That agent passes the request to another agent to “go out and query a database to get the actual trade price for the customer, and then generate the email” that gets sent to the customer.
“It’s not a huge process, it takes a human, maybe, ten, fifteen minutes to go do it themselves,” said Kain, “but you go from something that was minutes down to seconds through agents.”
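The two-agent hand-off Kain describes can be sketched as a pipeline. The inbox message format, trade database, and routing rule below are invented for illustration; Jefferies' actual system is not public.

```python
# Hypothetical trade store: trade id -> executed price.
TRADE_DB = {"T-1001": 101.25}

def triage_agent(message: str) -> dict:
    """Agent 1: classify the inbox request and extract the trade id."""
    if "confirm" in message.lower() and "price" in message.lower():
        trade_id = next(w for w in message.split() if w.startswith("T-"))
        return {"intent": "price_confirmation", "trade_id": trade_id}
    return {"intent": "unknown"}

def fulfillment_agent(request: dict) -> str:
    """Agent 2: query the trade store and draft the customer email."""
    price = TRADE_DB[request["trade_id"]]
    return (f"Subject: Trade {request['trade_id']} confirmation\n"
            f"The executed price was {price:.2f}.")

def handle(message: str) -> str:
    """Route the request through both agents, or fall back to a human."""
    request = triage_agent(message)
    if request["intent"] == "price_confirmation":
        return fulfillment_agent(request)
    return "Routed to a human."
```

The minutes-to-seconds gain comes from the first agent doing the classification a person would do reading the inbox, and the second doing the lookup-and-draft step.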
The same kinds of applications have been seen in the mortgage and insurance business, he said, and in energy, with Canada’s Total Energy Services confirming contracts.
One of the “most interesting” areas in finance for Gen AI, said Kain, is in investment research.
Hedge fund Bridgewater uses LLMs to “basically take a freeform text [summary] about an investment idea, break that down into nine individual steps, and, for each step, kick off an [AI] agent that would go understand what data was necessary to answer the question, build a dependency map between the various trade-offs within an investment model, and then write the code to pull real-time data from the investment data store, and then generate a report like a first-year investment professional.”
Credit rating giant Moody’s is using agents to automate memos on credit ratings. However, credit ratings are usually for public companies because only these firms must report their financial data by law. Now, Moody’s peer, S&P Global, has been able to extend ratings to private companies by amassing snippets of data here and there.
“There’s an opportunity to leverage large language models to scour what’s publicly available to do credit information on private companies,” said Kain. “That allows the private credit market to have better-anchored information to make private credit decisions.”
These represent “just amazing capabilities,” said Kain of the AI use cases.
Moving into new areas
AI is not yet automating many core functions of banks and other financial firms, such as calculating the most complex risk profiles for securities. But, “I think it’s closer than you think,” said Kain.
“It’s not where we’ve completely moved to trusting the machine to generate, let’s say, trading strategies or risk management approaches,” said Kain.
However, the beginnings of forecasting and analysis are present. Consider the problem of calculating the impact of new US tariffs on the cash flows of companies. That is “happening today as partially an AI function,” he said.
Financial firms “are definitely looking at data at scale, reacting to market movements, and then seeing how they should be updating their positions accordingly,” he explained.
“That ability to ingest data at a global scale is something that I think is so much easier than it was a year ago,” because of Gen AI.
AWS customer Crypto.com, a trading platform for cryptocurrencies, can watch news feeds in 25 different languages using a combination of multiple LLMs.
“They are able to identify which stories are about currencies, and tell if that is a positive or negative signal, and then aggregate that as inputs to their customers,” for trading purposes. As long as two of the three models monitoring the feeds agreed, “they had conviction that there was a signal there” of value.
“So, we’re seeing that use of generative AI to check generative AI, if you will, to provide confidence at scale.”
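The two-of-three agreement rule described for Crypto.com's news monitoring reduces to a majority vote over model outputs. The labels below are hard-coded stubs standing in for per-story LLM classifications.

```python
from collections import Counter

def consensus_signal(labels, quorum=2):
    """Emit the majority label only if at least `quorum` models agree."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= quorum else None
```

When two of three models call a story positive, the signal fires; when all three disagree, no signal is emitted, which is the "generative AI to check generative AI" confidence mechanism in its simplest form.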
Those human-centered tasks that remain at the core of banking, insurance, and trading are probably the most valuable in the industry, including the most complex functions, such as creating new derivative products or underwriting initial public offerings.
Those are areas that will enjoy the “premium” for creativity, in Kain’s view. Yet how much longer these tasks remain centered on human creation is an open question.
“I wish I had a crystal ball to say how much of that is truly automatable in the next few years,” said Kain.
“But given the tremendous adoption [of AI], and the ability for us to process data so much more effectively than even just two, three years ago, it’s an exciting time to see where this will all end up.”