
Tools & Platforms

Beyond the Soul of AI: Art, Ethics, and the Future of Human Expression


On June 26, the weekly Understanding Artificial Intelligence and Robotics training program held its sixth session to great enthusiasm. The session examined digital transformation through a cross-cultural lens, tracing the evolution of artificial intelligence (AI) from its early origins to its pervasive influence today, with particular attention to cross-cultural factors. It aimed to explore the impact and implementation of AI in Hollywood, the visual arts, and the media from the perspectives of industry experts.

The program is a collaboration among the International Institute for Middle Eastern and Balkan Studies (IFIMES), the scientific magazine European Perspectives, and SilkRoad 4.0, along with multiple global partners, including the D-8, THC, ICCD, LLA, Modern Diplomacy, C4P, Modern Ghana, IAF, and many others. It gathers researchers, practitioners, visionaries, and influential figures from across the globe. Each session features three distinguished guest experts whose presentations lead into discussion. The sixth session hosted Igor Ridanovic, a Hollywood post-production specialist; Ceca Bukvich, CEO of New York Music Studios; and Eva Petric, a conceptual artist working between New York, Vienna, and Ljubljana.

A key takeaway of the session is that cultural dynamics influence AI development and implementation strategies. The media plays a crucial role in shaping technological narratives across societies, and soft-power dimensions such as culture, media, and the arts can enhance technological leadership. The session also discussed how to build effective cross-cultural dialogue in the digital sphere, given its profound effect on the trajectory of modern technology, above all AI, in the 21st century. Case studies were presented to give a concrete picture of how cultural factors affect technology transfer and adoption. The dialogue then turned to AI’s role in research, professional correspondence, and academic writing, as well as in the global entertainment and infotainment industry, including the use of AI in the visual and performing arts.

The session began with Igor Ridanovic, who opened with a presentation titled “Behind the Scenes: How AI Is Changing Hollywood,” covering the people and culture themes. Working in Hollywood, he laid out where the industry stands in AI adoption around three points: how and where AI is used in Hollywood, what is working and what is not, and the ethical and legal crossroads. Filmmaking is a global business. Post-production is the final stage of the filmmaking process, taking place after a film or television show has been shot; during this phase, all the individual components are assembled and refined into the completed work. The software tools Ridanovic has created for media and entertainment connect him to the world of AI.

In Hollywood, AI is widely but quietly used as a tool for translation, visual effects, content analysis, and many other tasks, yet it has not matured to the point of generating full films. Several factors make Hollywood especially interesting in discussions of AI. First is the scale of its economic output, which rivals that of a small national economy: revenue drivers such as theatrical exhibition, linear TV, streaming, video-on-demand, and location-based entertainment combine to generate enormous economic activity. Second, Hollywood is an early adopter of technology; it has embraced new technologies throughout its history, and the shift to AI is a significant one that will profoundly affect this major global industry. Lastly, Hollywood movies carry enormous cultural influence: they shape how people think and serve as a source of soft power.

The use of AI in Hollywood today is widespread; nearly everyone is using it, though few admit to it. AI tools are commonly applied to object removal, speed changes, facial isolation, and more. However, post-production cannot yet deliver fully AI-generated films that meet Hollywood standards. Studios are experimenting with AI, but the results show no major deployment breakthroughs. Another obstacle is that AI is divisive: legal and ethical concerns, including copyright, remain a major debate in the sector. This raises questions about authorship and the creative process, and about whether AI is a fundamentally different kind of tool. The ethical question permeates every discussion in Hollywood, and it has become a legal question that may shape business decisions. AI could potentially alter the nature of screenwriting and other creative roles in filmmaking.

The production process in filmmaking generally consists of four main stages: Development, Pre-production, Production, and Post-production. The Development stage involves story creation and script writing. Pre-production encompasses the initial planning and coordination necessary to prepare for filming. While content distribution is not a formal stage of production, it plays a critical role in ensuring that content reaches its intended audience. Likewise, visual effects (VFX), though not classified as a standalone production stage, are integrated throughout various phases and contribute significantly to the final product. Among the four stages, Pre-production, Production, and Post-production are particularly driven by technological advancement, and AI efforts have increasingly focused on enhancing these areas.

Many tools used in the film industry have long incorporated machine learning, though they were not originally labeled as AI. Machine-learning technologies have been in use across industries since the 1980s and 1990s, when creators employed these techniques without anticipating the transformative impact AI would later have. Although early applications were not always recognized as AI, they laid the groundwork for its current rapid development and integration. Those earlier technologies lacked the sophistication and accessibility required to produce high-quality, end-to-end AI video content; fully AI-generated video was limited or virtually nonexistent before the emergence of advanced models such as ChatGPT.

AI is increasingly being integrated into visual effects workflows, enabling more efficient and cost-effective creation of complex effects. One notable application is AI-powered lip-syncing, which streamlines localization for global audiences. For example, a proof-of-concept demo developed by Igor Ridanovic showcased the use of AI to translate dialogue and synchronize lip movements, enhancing realism in multilingual content. Similarly, the Swedish film Watch the Skies employed AI to align actors’ lip movements with dubbed English dialogue, demonstrating how AI can improve the quality and accessibility of international film releases.

AI tools such as DeepMind’s Veo 3 and short films like Ancestra have become central to ongoing experiments in the film industry. However, concerns have arisen regarding the data used to train these systems, as much of it reportedly includes unlicensed materials—a deliberate yet controversial approach. According to the current stance of the U.S. Copyright Office, content generated by AI cannot be copyrighted, underscoring legal and ethical challenges in the creative sector. There is a growing public perception that AI poses a threat to traditional jobs, including those in filmmaking. Nonetheless, the industry is clearly undergoing a transitional phase. Many of these AI advancements are supported by substantial subsidies and global research efforts. Despite the hype, creating a fully AI-generated short film still demands extensive research and development, along with iterative refinement. While the tools continue to evolve, a strong foundational understanding of traditional filmmaking remains essential for leveraging AI effectively in this transitional era.

The next speaker was Eva Petric, an artist who frequently works with technology; her talk covered artistic explorations of AI. Ms. Petric is a transmedia artist living and working between New York, Vienna, and Ljubljana, whose practice spans photography, video, sound, performance, scent, and installation. She is interested in how collective views are formed. She finds it compelling to work with AI images, but also sees a destructive side, because art is a connection to what it is to be human. Every human is a creative being, and since the beginning of humankind, art has been incorporated into rituals and traditions.

The creation of art holds profound significance for humanity, rooted in emotional depth and human sensitivity. Relying excessively on AI in artistic processes risks diminishing that sensitivity and may lead to what Ms. Petric called a ‘collective dementia.’ Nonetheless, the advancement of technology is both inevitable and intrinsic to human progress, driven by our innate curiosity and capacity for innovation. Rather than resisting technological change, it is essential to engage with it thoughtfully and responsibly.

Much of Ms. Petric’s art embodies human perspectives, such as emotional awareness of love and hate, two forces that interact in her installations. She uses materials that connect and catalyze awareness of patterns that run through the universe. Her lace fabrics, when combined, serve as a reminder of the profound interconnectivity of which the universe is composed; the pattern is deeply mathematical. Ms. Petric creates installations and performances that integrate science, space imagery, and human presence, using technology such as satellite data while emphasizing human creativity and vision. Her recent work, inaugurated in Kenya, combines sculptural art with environmental concerns.

Art possesses the unique capacity to evoke emotional responses and foster a sense of connection—something intangible yet deeply felt. It moves individuals, often transcending language and cultural boundaries, and invites exploration into shared human experiences. For the artist, the process of creating becomes a form of inquiry. The only way to truly understand art’s impact is to produce it and observe how it resonates across different cultures. Her work frequently engages with universal themes, reflecting a commitment to uncovering common threads that unite diverse audiences through artistic expression. She integrates art into various aspects of life. It remains a natural and essential process, even in an age increasingly shaped by technology and science. AI, in this view, should serve as a tool—one that enhances and expands human creativity rather than replaces it.

Art continues to offer space for human connection, ritual, and presence; elements that provoke fundamental questions: Is this for the good of humanity? For the good of mankind? Ms. Petric expresses hope that art will continue to reach and resonate with a broader audience, emphasizing the importance of coming together around shared human values. Despite our perceived differences, we are inherently connected. She cautions against allowing technological advancement to dilute the essence of our humanity. Imagination, she argues, is something we do not yet fully understand, and it requires the freedom to create beyond the confines of existing material. While AI can generate and mimic nature, it risks overlooking the humility we owe to the natural world—an intelligence far older and deeper than our own. Pushing too far without reflection may lead us to what she called a ‘dead-end street,’ a form of singularity in which something vital is lost. In the end, as long as art exists, it affirms and preserves what it means to be human.

Dr. Harvey Cary Dzodin was in dialogue with Dr. Philipe Reinisch. Dzodin was a political appointee in the Carter administration and a longtime vice president of ABC in New York, and is currently an author and commentator in global media. He observed that truth, although never easy to discern, was much simpler to establish when he worked for Jimmy Carter. A global sea change began with the World Wide Web in the 1990s: before then, knowledge was relatively difficult to obtain, and then, almost overnight, people were drowning in information, and the contest became capturing people’s attention, even momentarily.

Matters have since progressed from bad to worse to catastrophic: generative AI intertwines information, misinformation, and disinformation so thoroughly that it is nearly impossible to determine which is which. Moreover, algorithms are splintering society into separate tribes that do not communicate. At the same time, the malevolent use of AI is becoming technically easier by the day, affecting elections and interfering with societal decision-making. Governments are failing to regulate advanced AI before it matures to the point where it is too entrenched to regulate, as has happened with other mature technologies such as cryptocurrencies and fintech. This is known as the Collingridge dilemma.

Considering the discussion above, it falls to us as intellectually minded individuals to formulate next steps in response to ever-advancing global technology. By continuing to explore ethical frameworks and public education around AI, we can minimize its most serious harms. It is essential to develop better systems for detecting AI-generated content, especially in elections and the media. There is also a need to foster interdisciplinary dialogue among technologists, artists, and ethicists on AI development, uniting differing perspectives on the dynamics of technology; this includes further sessions that dive deeper into specific AI applications and impacts. It is imperative that we collaborate and collectively develop effective strategies to address the negative aspects of AI. While these have not yet produced severe consequences, there is shared hope that, through cooperation, constructive solutions can be achieved.




NSU expands cybersecurity, AI programs to meet growing job demand



As cybersecurity threats and artificial intelligence continue reshaping the job market, Northeastern State University is stepping up its efforts to prepare students for these in-demand fields.

With programs targeting both K-12 engagement and college-level degrees, NSU is positioning itself as a key player in Oklahoma’s tech talent pipeline.

Cybersecurity: Training the Next Generation

NSU is working to meet the rising need for cybersecurity professionals by launching educational initiatives for students at multiple levels. Dr. Stacey White, the university’s cybersecurity program coordinator, says young people are especially suited for these roles because of their comfort with technology.

That’s why NSU is hosting cybersecurity camps and has built hands-on facilities like a cybersecurity lab to introduce students to real-world applications.

“When I first started in technology and the cyber world, it was usernames and passwords,” Dr. White said. “Today, it’s much more intricate than that.”

The Scope of the Problem

Cybercrime is a growing threat that shows no signs of slowing down. According to Dr. White, everyone should have a basic understanding of cybersecurity, but the greatest need lies in training new professionals who can keep up with evolving threats.

Currently, there are nearly 450,000 open cybersecurity jobs nationwide — including almost 4,200 in Oklahoma alone.

New AI Degree Launching This Fall

This fall, NSU is introducing a new degree in Artificial Intelligence and Data Analytics. Dr. Janet Buzzard, dean of the College of Business and Technology, says the program combines technical knowledge with business insight — a skill set that employers across many industries are seeking.

“All of our graduates in our College of Business and Technology need that skill set of artificial intelligence,” Dr. Buzzard said. “Not just the one major and degree that we’re promoting here.”

The new degree is designed to respond to student interest and market demand, offering versatile career paths in fields such as finance, logistics, and technology development.

Encouraging Early Engagement

Dr. Buzzard adds that exposing students to artificial intelligence and cybersecurity early in their academic careers helps them see these paths as viable and exciting career options.

This is one of the reasons NSU Broken Arrow is hosting a cybersecurity camp for middle school-aged students today and June 8. Campers will learn from industry professionals and experienced educators about the importance of cybersecurity, effective communication in a rapidly evolving digital world and foundational concepts in coding and encoding. 

NSU’s efforts to modernize its programs come at a crucial time, with both AI and cybersecurity jobs seeing major growth. For students and professionals alike, the university is building opportunities that align with the future of work.






Lecturer Says AI Has Made Her Workload Skyrocket, Fears Cheating



This as-told-to essay is based on a transcribed conversation with Risa Morimoto, a senior lecturer in economics at SOAS University of London, in England. The following has been edited for length and clarity.

Students always cheat.

I’ve been a lecturer for 18 years, and I’ve dealt with cheating throughout that time, but with AI tools becoming widely available in recent years, I’ve experienced a significant change.

There are definitely positive aspects to AI. It’s much easier to get access to information and students can use these tools to improve their writing, spelling, and grammar, so there are fewer badly written essays.

However, I believe some of my students have been using AI to generate essay content that pulls information from the internet, instead of using material from my classes to complete their assignments.

AI is supposed to help us work efficiently, but my workload has skyrocketed because of it. I have to spend lots of time figuring out whether the work students are handing in was really written by them.

I’ve decided to take dramatic action, changing the way I assess students to encourage them to be more creative and rely less on AI. The world is changing, so universities can’t stand still.

Cheating has become harder to detect because of AI

I’ve worked at SOAS University of London since 2012. My teaching focus is ecological economics.

Initially, my teaching style was exam-based, but I found that students were anxious about one-off exams, and their results wouldn’t always correspond to their performance.

I eventually pivoted to a focus on essays. Students chose their topic and consolidated theories into an essay. It worked well — until AI came along.

Cheating used to be easier to spot. I’d maybe catch one or two students cheating by copying huge chunks of text from internet sources, leading to a plagiarism case. Even two or three years ago, detecting inappropriate AI use was easier due to signs like robotic writing styles.

Now, with more sophisticated AI technologies, it’s harder to detect, and I believe the scale of cheating has increased.

I’ll read 100 essays, and some of them will be very similar, using identical case examples that I’ve never taught.

These examples are typically referenced on the internet, which makes me think the students are using an AI tool that is incorporating them. Some of the essays will cite 20 pieces of literature, but not a single one will be something from the reading list I set.

While students can use examples from internet sources in their work, I’m concerned that some students have just used AI to generate the essay content without reading or engaging with the original source.
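To make that kind of screening concrete, here is a minimal sketch; it is an illustration only, not SOAS’s process or any commercial detector. It ranks pairs of submitted essays by TF-IDF cosine similarity over word 3-grams, so shared phrasing and identical case examples stand out; the `essays/` folder and the 0.30 cutoff are invented for the example.

```python
# Minimal sketch: flag pairs of essays with unusually similar wording.
# Illustrative only -- not any institution's actual misconduct process.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical folder of plain-text submissions.
essays = {p.name: p.read_text(encoding="utf-8")
          for p in sorted(Path("essays").glob("*.txt"))}
names = list(essays)

# Compare word 3-grams rather than single words, so ordinary topic
# vocabulary alone does not trigger a flag; identical case examples
# tend to share whole phrases.
vectors = TfidfVectorizer(analyzer="word", ngram_range=(3, 3)).fit_transform(essays.values())
scores = cosine_similarity(vectors)

THRESHOLD = 0.30  # invented cutoff; tune against essays known to be independent
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if scores[i, j] > THRESHOLD:
            print(f"{names[i]} <-> {names[j]}: similarity {scores[i, j]:.2f}")
```

A screen like this only surfaces candidate pairs for human review; it cannot establish whether the similarity came from AI use, collusion, or coincidence.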

I started using AI detection tools to assess work, but I’m aware this technology has limitations.

AI tools are easy to access for students who feel pressured by the amount of work they have to do. University fees are increasing, and a lot of students work part-time jobs, so it makes sense to me that they want to use these tools to complete work more quickly.

There’s no obvious way to judge misconduct

During the first lecture of my module, I’ll tell students they can use AI to check grammar or summarize the literature to better understand it, but they can’t use it to generate responses to their assignments.

SOAS has guidance for AI use among students, which sets similar principles about not using AI to generate essays.

Over the past year, I’ve sat on an academic misconduct panel at the university, dealing with students who’ve been flagged for inappropriate AI use across departments.

I’ve seen students refer to these guidelines and say that they only used AI to support their learning and not to write their responses.

It can be hard to make decisions because you can’t be 100% sure from reading the essay whether it’s AI-generated or not. It’s also hard to draw a line between cheating and using AI to support learning.

Next year, I’m going to dramatically change my assignment format

My colleagues and I speak about the negative and positive aspects of AI, and we’re aware that we still have a lot to learn about the technology ourselves.

The university is encouraging lecturers to change their teaching and assessment practices. At the department level, we often discuss how to improve things.

I send my two young children to a school with an alternative, progressive education system, rather than a mainstream British state school. Seeing how my kids are educated has inspired me to try two alternative assessment methods this coming academic year. I had to go through a formal process with the university to get them approved.

I’ll ask my students to choose a topic and produce a summary of what they learned in the class about it. Second, they’ll create a blog, so they can translate what they’ve understood of the highly technical terms into a more communicable format.

My aim is to make sure the assignments are directly tied to what we’ve learned in class and make assessments more personal and creative.

The old assessment model, which involves memorizing facts and regurgitating them in exams, isn’t useful anymore. ChatGPT can easily give you a beautiful summary of information like this. Instead, educators need to help students with soft skills, communication, and out-of-the-box thinking.

In a statement to BI, a SOAS spokesperson said students are guided to use AI in ways that “uphold academic integrity.” They said the university encouraged students to pursue work that is harder for AI to replicate and have “robust mechanisms” in place for investigating AI misuse. “The use of AI is constantly evolving, and we are regularly reviewing and updating our policies to respond to these changes,” the spokesperson added.

Do you have a story to share about AI in education? Contact this reporter at ccheong@businessinsider.com.






Searching for boundaries in the AI jungle



Stamatis Gatirdakis, co-founder and president of the Ethikon Institute, still remembers the first time he used ChatGPT. It was the fall of 2022 and a fellow student in the Netherlands sent him the link to try it out. “It made a lot of mistakes back then, but I saw how it was improving at an incredible rate. From the very first tests, I felt that it would change the world,” he tells Kathimerini. Of course, he also identified some issues, mainly legal and ethical, that could arise early on, and last year, realizing that there was no private entity that dealt exclusively with the ethical dimension of artificial intelligence, he decided to take the initiative.

He initially turned to his friends, young lawyers like him, engineers and programmers with similar concerns. “In the early days, we would meet after work, discussing ideas about what we could do,” recalls Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters. Her master’s degree, which she earned in the Netherlands in 2019, was on the ethics and regulatory aspects of new technologies. “At that time, the European Union’s white paper on artificial intelligence had just been released, which was a first, hesitant step. But even though technology is changing rapidly, the basic ethical dilemmas and how we legislate remain constant. The issue is managing to balance innovation with citizen protection,” she explains.

Together with three other Greeks (Apostolos Spanos, Michael Manis and Nikos Vadivoulis), they made up the institute’s founding team, and sought out colleagues abroad with experience in these issues. Thus, Ethikon was created – a nonprofit company that does not provide legal services, but implements educational, research and social awareness actions on artificial intelligence.

Stamatis Gatirdakis, co-founder and president of the Ethikon Institute.

Copyrights

One of the first issues they addressed was copyright. “In order not to stop the progress of technology, exceptions were initially made so that these generative artificial intelligence models could use online content for educational purposes, without citing the source or compensating the creators,” explains Gatirdakis, adding that this resulted in copyright being sidelined. “The battle between creators and the big tech giants has been lost. But because the companies do not want creators against them, they have started making commercial agreements, whereby every time creators’ data is used to produce answers, the creators receive percentages under an agreed formula.”

Beyond compensation, another key question arises: Who is ultimately the creator of a work produced through artificial intelligence? “There are already conflicting court decisions. In the US, they argue that artificial intelligence cannot produce an ‘original’ work and that the work belongs to the search engine companies,” says Voukelatou. A typical example is the comic book ‘Zarya of the Dawn,’ authored by artist and AI consultant Kris Kashtanova, with images generated through the AI platform Midjourney. The US Copyright Office rejected the copyright application for the images in her book when it learned that they were created exclusively by artificial intelligence. In China, by contrast, courts in corresponding cases have ruled that because the user gives the exact instructions, he or she is the creator.

Personal data

Another crucial issue is the protection of personal data. “When we upload notes or files, what happens to all this content? Does the algorithm learn from them? Does it use them elsewhere? Presumably not, but there are still no safeguards. There is no case law, nor a clear regulatory framework,” says Voukelatou, who mentions the loopholes that companies exploit to overcome obstacles with personal data. “Like the application that transforms your image into a cartoon by the famous Studio Ghibli. Millions of users gave consent for their image to be processed and so this data entered the systems and trained the models. If a similar image is subsequently produced, it no longer belongs to the person who first uploaded it. And this part is legally unregulated.”

The problem, they explain, is that the development of these technologies is taking place mainly in the United States and China, which means that Europe remains on the sidelines of a meaningful discussion. The EU regulation on artificial intelligence (the AI Act), which entered into force in the summer of 2024, is the first serious attempt to set a regulatory framework. Members of Ethikon participated in the consultation on the regulation and focused specifically on the categorization of artificial intelligence applications based on their level of risk. “We supported with examples the prohibition of practices such as the ‘social scoring’ adopted by China, where citizens are evaluated in real time through surveillance cameras. This approach was incorporated, and the regulation explicitly prohibits such practices,” says Gatirdakis, who took part in the consultation.

“The final text sets obligations and rules. It also provides for strict fines depending on turnover. However, we are in a transition period and we are all waiting for further guidelines from the European Union. It is assumed that it will be fully implemented in the summer of 2026. However, there are already delays in the timetable and in the establishment of the supervisory authorities,” the two experts said.

Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters.

The team’s activities

Beyond consultation, the Ethikon team is already developing a series of actions to raise awareness among users, whether business executives or students growing up with artificial intelligence. The team’s members created a comic inspired by the Antikythera Mechanism that explains in simple terms the possibilities and the dangers of this new technology. They also developed a generative AI engine based exclusively on sources from scientific libraries; however, its use is expensive, and they are currently limiting it to pilot educational actions. They recently organized a conference in collaboration with the Laskaridis Foundation and, on March 29, published an academic article exploring the legal framework for the strengthening of copyright.

In the article, titled “Who Owns the Output? Bridging Law and Technology in LLMs Attribution,” they analyze, among other things, the specific tools and techniques that allow the detection of content generated by artificial intelligence and its connection to the data used to train the model or the user who created it. “For example, a digital signature can be embedded in texts, images or videos generated by AI, invisible to the user, but recognizable with specific tools,” they explain.
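The underlying idea is easy to demonstrate in miniature. The sketch below is a toy zero-width-character scheme, not the watermarking technique analyzed in the Ethikon paper: it hides a short provenance tag among invisible Unicode characters in ordinary text, so the text looks unchanged to a reader but carries a machine-readable mark. The tag value is an invented placeholder.

```python
# Toy invisible text watermark using zero-width Unicode characters.
# Illustrative only -- production systems use statistical or
# cryptographic schemes that are far more robust.

ZERO = "\u200b"  # zero-width space encodes bit 0
ONE = "\u200c"   # zero-width non-joiner encodes bit 1

def embed(text: str, tag: str) -> str:
    """Hide `tag` as invisible characters inserted after the first space."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    payload = "".join(ONE if b == "1" else ZERO for b in bits)
    head, sep, tail = text.partition(" ")
    return head + sep + payload + tail

def extract(text: str) -> str:
    """Recover a hidden tag; returns an empty string if none is present."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="replace")

marked = embed("This paragraph was produced by a model.", "gen-ai:v1")  # invented tag
print(marked)           # renders identically to the unmarked sentence
print(extract(marked))  # -> gen-ai:v1
```

A mark this naive disappears as soon as the text is re-typed or normalized; robust attribution of the kind the article surveys requires watermarks that survive editing, along with dedicated tools to verify them.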

The Ethikon team has already begun writing a second, more technical academic article, while closely monitoring technological developments internationally. “In 2026, we believe that we will be much more concerned with the energy and environmental footprint of artificial intelligence,” says Gatirdakis. “Training and operating models requires enormous computing power, resulting in excessively high energy and water consumption for cooling data centers. The concern is not only technical or academic; it touches the core of the ethical development of artificial intelligence. How do we balance innovation with sustainability?” At the same time, he explains, serious issues of truth management and security have already arisen. “We are entering a period where we will not be able to easily distinguish whether what we see or hear is real or fabricated,” he continues.

In some countries, the adoption of the technology is happening at breakneck speed. In the United Arab Emirates, an artificial intelligence system has been developed that drafts laws and monitors their implementation. At the same time, OpenAI announced a partnership with iPhone designer Jony Ive to launch, in late 2026, a new device that integrates artificial intelligence with voice, visual and personal interaction. “A new era seems to be approaching, in which artificial intelligence will be present not only on our screens but also in the natural environment.”




