

Reflections on the Ethical Use of Artificial Intelligence



Introduction

Within the framework of its mandate in the social and human sciences, UNESCO leads ethical reflections on the development and use of artificial intelligence (AI). In 2021, Member States adopted by consensus the Recommendation on the Ethics of Artificial Intelligence, the first global normative instrument in this field. This document establishes principles and values to ensure that AI contributes to sustainable development, social justice, equity and human rights.

In this context, we present an interview with Dr. Carlos García Torres, who holds a Bachelor’s degree in Social, Political and Economic Sciences, a law degree, and a doctorate in Jurisprudence from the National University of Loja. He completed postgraduate studies in gender, equity and sustainable development, and earned a Doctorate in Law and Social Sciences with honors (Cum Laude) from the National University of Distance Education (UNED, Spain). His doctoral thesis was entitled Tolerance and Liberalism in the Constitutional History of Ecuador.

How should we understand artificial intelligence as a cultural instrument?
Let us begin by reflecting on AI as a cultural instrument—that is, an artificial product created through the collection and processing of the objects generated by human culture. From this perspective, we can understand that it is fundamentally distinct from the technological tools humanity has used until now, and that its use, while lawful and necessary, requires a careful ethical approach.

Another element to consider in developing this ethical awareness is the set of social, political, and economic implications of science, along with the fact that no technology is ever truly neutral. The development of artificial intelligence has always been tied to very specific cultural and financial environments. Today, its core revolves around certain companies based in Silicon Valley and those companies’ global interests. Added to this is the global geopolitical landscape, which includes two key players: China and the European Union.

Finally, a not insignificant element is the environmental impact of the technological infrastructure necessary for AI to function, as reflected in its effects on energy consumption, greenhouse gas emissions, and water usage.

What aspects of justice must be considered in the use of artificial intelligence?
We must consider the social, political, economic, and environmental consequences of the frequent and indiscriminate use of any application, model, or system involving AI. First, we must take into account the data economy and the existing national regulations governing the use of third-party data. It is important to weigh the cost of texts uploaded to language models, as well as the caution that must be exercised when uploading texts protected by third-party intellectual property rights.

Second, as responsible citizens, we must reflect on the future benefits that AI can bring to scientific research and the economic development of our country. We should strive to ensure that our role is not merely that of consumers, but also of facilitators of innovative and imaginative uses, and eventually, developers of new applications in our respective fields of study.

What ontological challenges does AI pose regarding humanity?
A distinguishing feature of large language models and other AI technologies is their ability to convincingly imitate human qualities, including artistic expression. This capacity gives rise to the compelling illusion of interacting with an entity seemingly endowed with the distinct dignity unique to human beings, which raises the challenge of keeping sight of what is genuinely human.

Several studies have shown that the everyday use of, and growing dependence on, various large language models limit intellectual potential. Responsibility regarding the use of AI is twofold: it not only affects individuals personally but also impacts future generations.

What must we take into account regarding the reliability and rationality of AI systems?
One of the most widely promoted claims by AI system developers is the idea that, with a “prompt,” one can obtain the required information at any given moment, under the assumption that this information will always be correct. This assertion is only partially true, as large language models have documented error rates of up to 60 percent.

For this reason, it is crucial that we act as content reviewers when dealing with AI-generated outputs. Only an expert in each field can detect errors and biases.

This ties in with the general perception of large language models as being rational. We must avoid treating AI responses as the ultimate models of scientific and social rationality.

The use of material obtained from large language models must be subject to continuous scrutiny and refinement.

How should we approach the issue of writing and attribution in AI-generated texts?
In general, a transparent way to produce texts is to acknowledge the use of AI and explain the generation process employed. This underscores the importance of transparency, specifically in identifying educational, outreach, or research texts that have been created with AI.

It is important to remember that what large language models offer us are generally ideas and resources from other authors. For this reason, it is essential to distinguish between one’s own work and content generated by a large language model. It should be noted that some models and systems already embed watermarks in generated texts, making them easily identifiable as AI-produced.

What environmental impact does the use of artificial intelligence have?
It must be considered that the enormous amounts of energy and water required by AI models and systems make them a latent environmental threat, one to which every “prompt” contributes. For this reason, their use must always be deliberate and limited, with environmental concerns in mind.

It is therefore necessary to constantly inform oneself about the contribution of AI models and systems to greenhouse gas emissions and climate change in general.

Attribution Note: The entire content of the responses belongs to Dr. Carlos García Torres. The final document has been edited with the support of artificial intelligence tools (Copilot). This text is published within the framework of UNESCO’s approach to promoting critical thinking on the ethical use of AI.

The terms used in this publication and the presentation of the data contained herein do not imply any position on the part of UNESCO concerning the legal status of countries, territories, cities or regions, or of their authorities, or concerning the delimitation of their frontiers or boundaries. The ideas and opinions expressed in this work are those of the authors and do not necessarily reflect the views of UNESCO or commit the Organization.
 




School Cheating: Research Shows AI Has Not Increased Its Scale



Changes in Learning: Cheating and Artificial Intelligence

When reading the news, one gets the impression that all students use artificial intelligence to cheat in their studies. Headlines in newspapers such as The Wall Street Journal or The New York Times often pair ‘cheating’ with ‘AI’. Many stories, like a piece in New York Magazine, feature students who openly describe using generative AI to complete their assignments.

With the rise of such headlines, it seems that education is under threat: traditional exams, reading assignments, and essays are being undermined by AI-enabled cheating. In the worst cases, students use tools like ChatGPT to write entire assignments.

This picture is frustrating, but it is only part of the story.

Cheating has always existed. As an education researcher studying cheating and AI, I can say that our early data suggest AI has changed the methods of cheating, but not necessarily the scale of cheating that was already taking place.

This does not mean that cheating with AI is not a serious problem. It raises important questions: Will cheating increase in the future because of AI? Is the use of AI in education itself cheating? How should parents and schools respond to prepare children for a life that will differ significantly from our own experience?

The Pervasiveness of Cheating

Cheating has existed for a very long time, probably for as long as educational institutions themselves. In the 1990s and 2000s, Don McCabe, a business school professor at Rutgers University, recorded high levels of cheating among students. One of his studies found that up to 96% of business students admitted to engaging in ‘cheating behavior’.

McCabe used anonymous surveys in which students indicated how often they engaged in cheating. These surveys consistently found high cheating rates, ranging from 61.3% to 82.7% before the pandemic.

Cheating in the AI Era

Has cheating with AI increased? Analyzing data from more than 1,900 students at three schools, collected before and after the introduction of ChatGPT, we found no significant change in cheating behavior. In those surveys, 11% of students reported using AI to write their papers.

Our ongoing work shows that AI is becoming a popular tool for cheating, but many questions remain to be explored. For example, in 2024 and 2025 we surveyed another 28,000 to 39,000 students, of whom 15% admitted to using AI to produce their work.

Challenges of Using AI

Students are accustomed to using AI but understand that there are boundaries between acceptable and unacceptable use. Reports indicate that many use AI to avoid doing homework or to generate ideas for creative work.

Students see that their teachers use AI too, and many consider it unfair to be punished for using it in their own work.

What Will AI Use Mean for Schools?

The modern education system was not designed with generative AI in mind. Traditionally, completed assignments have been taken as evidence of a student’s own effort, but that link is now increasingly blurred.

It is important to understand the main reasons students cheat and how cheating relates to stress, time management, and the curriculum. Discouraging cheating matters, but teaching methods and the use of AI in classrooms also need to be rethought.

Four Future Questions

AI did not create cheating in educational institutions; it has only opened new avenues for it. Here are four questions worth considering:

  • Why do students resort to cheating? Academic stress may push them toward easier solutions.
  • Do teachers follow their own rules? Demanding from students what teachers do not practice themselves can distort perceptions of acceptable AI use in education.
  • Are the rules concerning AI clearly stated? Policies on acceptable AI use in education are often vague.
  • What do students need to know in an AI-rich future? Educational methods must be adapted to this new reality in good time.

The future of education in the age of AI requires an open dialogue between teachers and students. This will allow for the development of new skills and knowledge necessary for successful learning.





Artificial intelligence helps break barriers for Hispanic homeownership | National News

The full article on ottumwacourier.com is not accessible from the European Economic Area, where the site blocks access under the General Data Protection Regulation (GDPR).





Billionaire Ken Griffin Is Loading Up on These 2 Artificial Intelligence (AI) Stocks That Have Increased 88,780% or More



These longtime market leaders still have something left in the tank.

Billionaire Ken Griffin, CEO of hedge fund Citadel Advisors, was busy during the second quarter. He and his team went shopping and substantially increased the firm’s stake in some stocks, while also buying new ones.

Some of the biggest names on Wall Street, including Microsoft (MSFT) and Apple (AAPL), were among the companies whose shares Griffin bought during the period.

These are two of the largest companies in the world by market cap, and both have generated life-changing returns over the long run. Both have also made moves in the fast-growing artificial intelligence (AI) market. But with market caps above $3 trillion, are these tech leaders still attractive to long-term investors?

Let’s find out.

MSFT Total Return Level data by YCharts

1. Microsoft

During the second quarter, Citadel Advisors bought 1.87 million shares of Microsoft, increasing its stake in the company by 1,635.75%.

Griffin and his team aren’t the only ones who have been loading up on the tech leader. There is a reason Microsoft has crushed broader equities this year, rising 32% since January: the company’s financial results. Its revenue and earnings have been growing at a good clip.

In the fourth quarter of its fiscal year 2025, ended on June 30, Microsoft’s revenue jumped by 18% year over year to $76.4 billion. Operating income grew even faster, reaching $34.3 billion, a 23% increase compared to the year-ago period. Net income climbed 24% year over year to $27.2 billion. In other words, Microsoft is capitalizing on growth opportunities while keeping costs under control.


The tech giant’s most important business is currently its cloud unit, a segment that also offers a host of AI-related services and is growing sales faster than the rest of its business. Microsoft is gaining ground on Amazon, the leader in cloud computing. Although Amazon was first to market, Microsoft has been offering its Office 365 productivity tools (and other services) to businesses for a long time. It’s hardly a leap for these same companies to opt for a provider they already know and trust for their cloud needs.

And the best news is that we are still in the early innings of cloud adoption and, for that matter, the AI revolution. As Andy Jassy, Amazon’s CEO, said, “85% of the global IT spend is still on-premises.”

Despite its massive size, Microsoft is poised for excellent long-term opportunities in cloud computing and AI. Add that to the company’s moat from switching costs, its excellent dividend program, and significant cash flow, and Microsoft looks like a no-brainer stock to buy right now.

2. Apple

Citadel Advisors’ stake in Apple increased by a whopping 10,715.95% during the second quarter. That seems like an odd move at first glance.

Apple has faced significant challenges this year, particularly the threat of tariffs. The company manufactures its products abroad, especially in China. With the Trump administration seeking to impose heavy tariffs on imported goods, the market has been concerned about what this will mean for Apple’s business.

Apple recently announced that it would increase its domestic investment in manufacturing to $600 billion over the next decade, in an attempt to appease the current administration and avoid tariffs.

However, Apple has other issues beyond that. The company’s Apple Intelligence — a suite of AI features and services it has released for its latest devices — has failed to impress consumers and investors. So, the iPhone maker is behind in this promising industry.

It’s due to all these factors (and others) that Apple’s shares have declined by 5% this year. However, Griffin and his team clearly saw this as an opportunity to load up on the company’s shares.

In my view, although Apple may struggle for the next few years, the stock remains a solid long-term option. For one, the company’s business is still highly profitable. Apple’s revenue in the third quarter of its fiscal year 2025, ended June 28, increased by 10% year over year to $94 billion. The company’s earnings per share came in at $1.57, representing a 12% increase compared to the year-ago period.

Notably, Apple generates a substantial amount of cash. The company’s trailing-12-month free cash flow may be down 11.6% year over year, but it remains a considerable $96.2 billion.

AAPL Free Cash Flow data by YCharts

Apple can invest a substantial amount of money in R&D efforts that will ultimately yield results, including advancements in AI. The company has been late to market several times, only to create an innovative version of an already existing product and find massive success. That’s what it did with the iPhone and several products after that, including its AirPods. The difference is that Apple now has a more valuable brand name than it did then.

Apple has an army of loyal customers, an installed base of billions of devices, and a services segment with more than 1 billion paid subscriptions. Even a single highly successful device can have a significant impact on the company’s results.

Lastly, Apple could find ways to fend off the tariff threat. CEO Tim Cook did so during President Donald Trump’s first term. And there is no guarantee that Trump’s aggressive trade plans will survive his administration.

For all these reasons, the stock remains attractive, particularly for investors willing to hold it over the long term.


