Tools & Platforms

Searching for boundaries in the AI jungle

Stamatis Gatirdakis, co-founder and president of the Ethikon Institute, still remembers the first time he used ChatGPT. It was the fall of 2022 and a fellow student in the Netherlands sent him the link to try it out. “It made a lot of mistakes back then, but I saw how it was improving at an incredible rate. From the very first tests, I felt that it would change the world,” he tells Kathimerini. Of course, he also identified early on some issues, mainly legal and ethical, that could arise, and last year, realizing that no private entity dealt exclusively with the ethical dimension of artificial intelligence, he decided to take the initiative.

He initially turned to his friends, young lawyers like him, engineers and programmers with similar concerns. “In the early days, we would meet after work, discussing ideas about what we could do,” recalls Maria Voukelatou, executive director at Ethikon and a lawyer specializing in technology law and IP matters. Her master’s degree, which she earned in the Netherlands in 2019, was on the ethics and regulatory aspects of new technologies. “At that time, the European Union’s white paper on artificial intelligence had just been released, which was a first, hesitant step. But even though technology is changing rapidly, the basic ethical dilemmas and how we legislate remain constant. The issue is managing to balance innovation with citizen protection,” she explains.

Together with three other Greeks (Apostolos Spanos, Michael Manis and Nikos Vadivoulis), they made up the institute’s founding team and sought out colleagues abroad with experience in these issues. Thus Ethikon was created – a nonprofit company that does not provide legal services but runs educational, research and public awareness initiatives on artificial intelligence.

Stamatis Gatirdakis, co-founder and president of the Ethikon Institute.

Copyrights

One of the first issues they addressed was copyright. “In order not to stall the progress of technology, exceptions were initially made so that these generative artificial intelligence models could use online content for training purposes, without citing the source or compensating the creators,” explains Gatirdakis, adding that this resulted in copyright being sidelined. “The battle between creators and the big tech giants has been lost. But because the companies don’t want the creators against them, they have started making commercial agreements, under which creators receive a percentage each time their data is used to produce answers.”

Beyond compensation, another key question arises: Who is ultimately the creator of a work produced through artificial intelligence? “There are already conflicting court decisions. In the US, they argue that artificial intelligence cannot produce an ‘original’ work and that the work belongs to the search engine companies,” says Voukelatou. A typical example is the comic book ‘Zarya of the Dawn,’ authored by artist and artificial intelligence (AI) consultant Kris Kashtanova, with images generated through the AI platform Midjourney. The US Copyright Office rejected the copyright application for the images in her book when it learned that they were created exclusively by artificial intelligence. In China, by contrast, courts in corresponding cases have ruled that because the user gives the exact instructions, he or she is the creator.

Personal data

Another crucial issue is the protection of personal data. “When we upload notes or files, what happens to all this content? Does the algorithm learn from them? Does it use them elsewhere? Presumably not, but there are still no safeguards. There is no case law, nor a clear regulatory framework,” says Voukelatou, who mentions the loopholes that companies exploit to overcome obstacles with personal data. “Take the application that transforms your image into a cartoon in the style of the famous Studio Ghibli. Millions of users gave consent for their image to be processed, and so this data entered the systems and trained the models. If a similar image is subsequently produced, it no longer belongs to the person who first uploaded it. And this part is legally unregulated.”

The problem, they explain, is that the development of these technologies is mainly taking place in the United States and China, which means that Europe remains on the sidelines of a meaningful discussion. The EU regulation on artificial intelligence (AI Act), which entered into force in the summer of 2024, is the first serious attempt to set a regulatory framework. Members of Ethikon participated in the consultation on the regulation, focusing specifically on the categorization of artificial intelligence applications based on the level of risk. “We supported with examples the prohibition of practices such as the ‘social scoring’ adopted by China, where citizens are evaluated in real time through surveillance cameras. This approach was incorporated, and the regulation explicitly prohibits such practices,” says Gatirdakis, who participated in the consultation.

“The final text sets obligations and rules. It also provides for strict fines depending on turnover. However, we are in a transition period, and we are all waiting for further guidelines from the European Union. It is expected to be fully applicable by the summer of 2026, though there are already delays in the timetable and in the establishment of the supervisory authorities,” the two experts say.

Maria Voukelatou, executive director at Ethikon and a lawyer specializing in technology law and IP matters.

The team’s activities

Beyond consultation, the Ethikon team is already developing a series of actions to raise awareness among users, whether they are business executives or students growing up with artificial intelligence. Team members created a comic inspired by the Antikythera Mechanism that explains in simple terms the possibilities, but also the dangers, of this new technology. They also developed a generative AI engine based exclusively on sources from scientific libraries; however, its use is expensive, and they are currently limiting it to pilot educational actions. They recently organized a conference in collaboration with the Laskaridis Foundation and published an academic article on March 29 exploring the legal framework for strengthening copyright protection.

In the article, titled “Who Owns the Output? Bridging Law and Technology in LLMs Attribution,” they analyze, among other things, the specific tools and techniques that allow the detection of content generated by artificial intelligence and its connection to the data used to train the model or the user who created it. “For example, a digital signature can be embedded in texts, images or videos generated by AI, invisible to the user, but recognizable with specific tools,” they explain.
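The article’s exact technique is not detailed here, but the general idea of an invisible, machine-detectable signature can be sketched in a few lines. The approach below, hiding a bit pattern in zero-width Unicode characters, is one simple illustration chosen for clarity, not the method described in the paper; all function names are hypothetical.

```python
# Sketch of one simple text-watermarking idea: encode a hidden bit string
# as zero-width Unicode characters appended to the text. The marks are
# invisible when rendered, but a detector that knows the scheme can
# recover them with a few lines of code.

ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def embed(text: str, signature: str) -> str:
    """Append the signature's bits as zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in signature)
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + payload

def detect(text: str) -> str:
    """Recover the hidden signature, or '' if none is present."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8)]
    return "".join(chars)

marked = embed("This paragraph was generated by a model.", "AI")
print(marked == "This paragraph was generated by a model.")  # False: payload present
print(detect(marked))  # AI
```

In practice, schemes like this are fragile, since zero-width characters are stripped by copy-paste normalization; production watermarks for AI text tend to work statistically over the model’s token choices instead.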

The Ethikon team has already begun writing a second – more technical – academic article, while closely monitoring technological developments internationally. “In 2026, we believe that we will be much more concerned with the energy and environmental footprint of artificial intelligence,” says Gatirdakis. “Training and operating models require enormous computing power, resulting in excessively high energy and water consumption for cooling data centers. The concern is not only technical or academic – it touches the core of the ethical development of artificial intelligence. How do we balance innovation with sustainability?” At the same time, he explains, serious issues of truth management and security have already arisen. “We are entering a period where we will not be able to easily distinguish whether what we see or hear is real or fabricated,” he continues.

In some countries, the adoption of the technology is happening at breakneck speed. In the United Arab Emirates, an artificial intelligence system has been developed that drafts laws and monitors their implementation. At the same time, OpenAI has announced a partnership with Jony Ive, the designer of the iPhone, to launch in late 2026 a new device that integrates artificial intelligence with voice, visual and personal interaction. “A new era seems to be approaching, in which artificial intelligence will be present not only on our screens but also in the natural environment.”






Not just giving the answers :: WRAL.com

When most people think of AI, they think of chatbots like ChatGPT and Gemini.

On Monday night, tech leaders were trying to get the word out about a new form of AI called agentic AI. Some say we’ll end up engaging with this technology the most.

Duke professor Jon Reifschneider spoke with WRAL News about the rise of the technology and what may lie ahead for its use in daily life. He and cofounder Pramod Singh have built a new AI product called Inquisite, which they believe could be a game-changer for researchers.

“Our ultimate goal with this is to speed up discovery and translation so we can do things like bring new drugs to market,” Reifschneider said. “In, let’s say, 3-to-5 years rather than 10-to-20 years … We need it.”

Before showing how it works, let’s have a quick vocabulary lesson.

Popular chatbots like ChatGPT or Gemini are mainly considered generative AI. That means you give it a question or prompt, and it gives you a response based on the massive amounts of data it has access to.

Inquisite is something different. It’s referred to as agentic AI.

Agentic AI doesn’t just give you answers; it performs tasks for you.

“Agents are particularly exciting because they can actually sort of do work, very much like a human might,” Reifschneider said.

Inquisite’s agents play the role of research assistant – scouring through its massive database of research and medical journals to find, read and summarize the relevant papers scientists need to do their jobs.

“We can see here it found 119 papers that were potentially relevant using those queries,” Reifschneider said. “It then went through a process where it reviewed all the metadata – the titles, authors and abstracts – and it filtered those 119 papers down to just 17 papers that it determined were highly relevant to answer my question.”
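Inquisite’s internals aren’t public, but the filtering step described above can be sketched in miniature: score each candidate paper’s metadata against the query and keep only those above a relevance threshold. Everything below (the `Paper` type, the keyword-overlap score, the threshold) is invented for illustration; a real system would use an LLM or embedding similarity rather than simple keyword matching.

```python
# Hypothetical sketch of the metadata-filtering step: rank candidate
# papers by how many of the query's key terms appear in their title or
# abstract, and keep only those above a relevance threshold.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str

def relevance(paper: Paper, query_terms: set[str]) -> float:
    """Fraction of query terms appearing in the title or abstract."""
    text = f"{paper.title} {paper.abstract}".lower()
    hits = sum(1 for term in query_terms if term in text)
    return hits / len(query_terms)

def filter_papers(papers: list[Paper], query_terms: set[str],
                  threshold: float = 0.5) -> list[Paper]:
    """Keep only papers scored at or above the threshold."""
    return [p for p in papers if relevance(p, query_terms) >= threshold]

candidates = [
    Paper("Gene therapy in Parkinson's disease", "A review of AAV vectors."),
    Paper("Crop yields under drought", "Field trials across two seasons."),
]
terms = {"gene", "parkinson"}
print(len(filter_papers(candidates, terms)))  # 1
```

The interesting design point is the two-stage funnel: a cheap metadata pass narrows hundreds of hits to a handful before any expensive full-text reading or summarization happens.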

Asked whether saving time means discoveries come faster, Reifschneider said: “We believe so. That’s our ultimate goal with Inquisite.”

That could mean a faster path to a cure for certain cancers – or a new gene therapy for Parkinson’s.

Inquisite is ahead of the curve – with the top minds in tech this summer proclaiming agentic AI is the future.

Tech leaders have acknowledged agentic AI’s capabilities and the likelihood of future use.

“Agentic AI is real,” said Nvidia CEO and President Jensen Huang. “Agentic AI is a giant step function from one-shot AI.”

“I think every business in the future will have an AI agent that their customers can talk to,” said Meta CEO Mark Zuckerberg.

But will these agents replace jobs? 

“They’re really designed to augment human research teams, not try to replace the scientists and researchers,” Reifschneider said. “That’s kind of key.” Told that the tool is meant to help researchers rather than replace them, he agreed: “That’s right, research is a highly creative task.”

When asked about AI agents potentially taking jobs, he said he thinks those fears are overblown.

In fact, he’s teaching his graduate-level students that they have a quality AI can’t replace.

“I don’t think AI will have the creativity we need to do really novel research, I think we very much still need human scientists in the loop,” Reifschneider said.




How AI Is Upending Politics, Tech, the Media, and More

In an increasingly divided world, one thing that everyone seems to agree on is that artificial intelligence is a hugely disruptive—and sometimes downright destructive—phenomenon.

At WIRED’s AI Power Summit in New York on Monday, leaders from the worlds of tech, politics, and the media came together to discuss how AI is transforming their intertwined worlds. The Summit included voices from the AI industry, a current US senator and a former Trump administration official, and publishers including WIRED’s parent company, Condé Nast.


“In journalism, many of us have been excited and worried about AI in equal measure,” said Anna Wintour, Condé Nast’s chief content officer and the global editorial director of Vogue, in her opening remarks. “We worry about it replacing our work, and the work of those we write about.”

Leaders from the world of politics offered contrasting visions for ensuring AI has a positive impact overall. Richard Blumenthal, the Democratic senator from Connecticut, said policymakers should learn from social media and figure out suitable guardrails around copyright infringement and other key issues before AI causes too much damage. “We want to deal with the perfect storm that is engulfing journalism,” he said in conversation with WIRED global editorial director Katie Drummond.

In a separate conversation, Dean Ball, a senior fellow at the Foundation for American Innovation and one of the authors of the Trump Administration’s AI Action Plan, defended that policy blueprint’s vision for AI regulation. He claimed that it introduced more rules around AI risks than any other government has produced.

Figures from within the AI industry painted a rosy picture of AI’s impact, too, arguing that it would be a boon for economic growth and would not be deployed unchecked.




Data analytics, AI in workers’ compensation insurance

Rob Evans, director of claim process technology at Broadspire, spoke recently on the DigIn podcast about emerging technology within workers’ compensation insurance. He highlighted data analytics and artificial intelligence (AI).

Data analytics have shown value in loss prevention as well as pre-loss and post-loss considerations, Evans said. The harnessing of big data has also allowed for more benchmarking and comparison to industry averages and best-in-class programs. There has been an evolution in the visualization of data in various forms, he said.

He added that applying AI to the claim process can help reimagine client claim reviews, while not overwhelming claim operations staff with notification fatigue.

“Even the best in class programs we’ve seen will inevitably have some room for additional improvement. The only constant is change. So even if you’ve got things optimized, you got to really stay on top of things. And this is where bringing in the AI component is super helpful when it comes to any improvement opportunities.”

With the evolution of data visualization and analytics, there is also an ability to drill down and uncover opportunities, which can allow for more targeted investment.

“When we talk about AI, I like to think of the claims process like cooking where AI provides some of the ingredients for the various recipes. … Now there’s lots of other AI ingredients too, but predictive models and LLMs are providing a couple of the key ingredients that we use to serve up quality claim outcomes. Continuing my corny food metaphor here, people at a restaurant like to order up different dishes or want some customizations made to their order. So if we think of data analytics as a menu, AI lets us think about ways to create the most delicious dish we desire, like finding litigation or closure opportunities that align with achieving the executive’s concept of success,” Evans said.
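Broadspire’s actual models are not described in the podcast; as a toy illustration of the predictive-model “ingredient” Evans mentions – something that surfaces litigation or closure opportunities for a closer look – a hand-weighted score might look like the sketch below. The features, weights and threshold are invented for illustration only.

```python
# Toy score that flags workers' compensation claims for review, in the
# spirit of the predictive-model "ingredient" described above. A real
# model would be fitted to historical claim outcomes rather than using
# hand-picked weights like these.

def litigation_risk(claim: dict) -> float:
    """Crude weighted score in [0, 1]: higher means review sooner."""
    score = 0.0
    score += 0.4 if claim["days_open"] > 180 else 0.0       # long-tail claim
    score += 0.3 if claim["attorney_involved"] else 0.0      # litigation signal
    score += 0.3 if claim["reserve_increases"] >= 2 else 0.0 # worsening outlook
    return score

def flag_for_review(claims: list[dict], threshold: float = 0.5) -> list[str]:
    """Return the IDs of claims scoring at or above the threshold."""
    return [c["id"] for c in claims if litigation_risk(c) >= threshold]

claims = [
    {"id": "C-101", "days_open": 200, "attorney_involved": True,
     "reserve_increases": 0},
    {"id": "C-102", "days_open": 30, "attorney_involved": False,
     "reserve_increases": 1},
]
print(flag_for_review(claims))  # ['C-101']
```

In Evans’ metaphor, a score like this is one ingredient; the “dish” is the workflow built around it, such as routing flagged claims into the next client claim review without flooding adjusters with notifications.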



