
AI Insights

Dark personality traits linked to generative AI use among art students

A new study published in BMC Psychology sheds light on the psychological and behavioral factors that may be influencing how university art students in China use generative artificial intelligence tools. The research found that students who scored higher on personality traits like narcissism, Machiavellianism, psychopathy, and materialism were more likely to engage in academic misconduct, experience academic anxiety, procrastinate, and ultimately rely more heavily on tools like ChatGPT and Midjourney. These behaviors were also associated with increased frustration and negative thinking.

The study was grounded in social cognitive theory, a psychological framework that emphasizes how personal characteristics, behaviors, and environmental factors interact. The researchers focused on a group of university art students in Sichuan province, a population that faces a unique set of challenges. These include high levels of competition, expectations to produce both technically strong and original creative work, and the increasing influence of generative artificial intelligence in their fields.

The researchers began with an interest in whether certain negative personality traits—commonly referred to as “dark traits”—could help explain patterns of academic misconduct and psychological stress. These traits include narcissism (a heightened sense of self-importance), Machiavellianism (manipulativeness and strategic exploitation of others), psychopathy (a lack of empathy and impulsivity), and materialism (a strong focus on acquiring wealth or status symbols).

Prior studies have linked these traits to dishonest behavior, but the research team wanted to explore these dynamics within the specific context of art education, where creativity is often difficult to evaluate and originality is highly prized.

To conduct the study, researchers surveyed 504 students from six major art-focused universities in Sichuan. The sample was diverse in terms of artistic discipline, including students from visual arts, music, dance, and drama programs. Participants were recruited using a stratified sampling method to ensure representative coverage across schools and artistic specialties. Data collection occurred through both in-person and online surveys. Before the main survey, a pilot test with 30 students was conducted to refine the wording and structure of the questionnaire.

Students completed standardized self-report measures assessing their personality traits, experiences of academic anxiety, frequency of procrastination, levels of frustration and negative thinking, and generative AI usage habits. The researchers used translated and validated versions of existing psychological scales to ensure the accuracy and cultural relevance of the survey. They then applied a statistical technique called structural equation modeling to examine how the variables were related to one another.
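To make the method concrete, here is a minimal sketch of how such a structural equation model can be specified in Python with the semopy package. The variable names, path structure, and data file are illustrative assumptions, not the authors' actual specification.

```python
# Minimal SEM sketch using semopy (pip install semopy).
# Variable names, paths, and the data file are illustrative
# assumptions, not the study's actual model.
import pandas as pd
from semopy import Model

# Hypothetical data: one row per student, one column per scale score.
data = pd.read_csv("survey_scores.csv")

# lavaan-style description: dark traits predict misconduct, anxiety,
# and procrastination, which in turn predict frustration, negative
# thinking, and reliance on generative AI tools.
desc = """
misconduct ~ narcissism + machiavellianism + psychopathy + materialism
anxiety ~ narcissism + machiavellianism + psychopathy + materialism
procrastination ~ narcissism + machiavellianism + psychopathy + materialism
frustration ~ misconduct + anxiety + procrastination
negative_thinking ~ misconduct + anxiety + procrastination
ai_use ~ frustration + negative_thinking + anxiety + procrastination
"""

model = Model(desc)
model.fit(data)         # estimate path coefficients
print(model.inspect())  # estimates, standard errors, p-values
```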

The results showed clear patterns. Students who scored higher on dark personality traits were significantly more likely to engage in academic misconduct. This misconduct included behaviors such as plagiarism and misrepresenting AI-generated work as their own. These students also reported higher levels of anxiety about their academic performance and a greater tendency to put off assignments. These behaviors, in turn, were linked to increased feelings of frustration, persistent negative thinking, and a stronger reliance on generative AI tools to complete academic tasks.

The researchers found that of the four personality traits measured, narcissism, Machiavellianism, and psychopathy had the strongest associations with misconduct-related behaviors. For example, students high in narcissism may cheat to maintain their self-image or achieve recognition. Those high in Machiavellianism may view academic dishonesty as a strategic way to gain an advantage. Psychopathy was associated with impulsive behavior and a lack of remorse, which may explain its link to dishonest practices.

Materialism also played a role. Students who strongly valued material success were more likely to cut corners to achieve high grades or awards, suggesting that external rewards can be a strong motivator for dishonest behavior.

Academic anxiety and procrastination emerged as important mediating factors in the model. Students who were anxious about their performance were more prone to negative thinking and reported more frustration with their academic experience. Procrastination added to these problems by creating time pressure and reinforcing avoidance behaviors. These psychological pressures appeared to increase the likelihood that students would turn to generative AI tools for assistance.

The researchers highlighted that reliance on AI tools was not limited to students seeking help for legitimate reasons. Rather, it often reflected a broader pattern of behavior driven by personality traits, stress, and a lack of self-regulation. Students who were already engaging in misconduct or experiencing academic distress were more likely to depend on AI technologies as a coping mechanism.

One strength of the study is its focus on art students, a population often overlooked in discussions of academic misconduct. These students face unique challenges, particularly when new technologies like generative AI blur the boundaries between original creation and automated production. The findings may help inform institutional policies in other creative disciplines facing similar issues.

However, the study also has some limitations. It relied entirely on self-report measures, which can be subject to bias. Students may have underreported dishonest behaviors or overestimated their use of AI tools. The cross-sectional design of the research also means that the observed associations cannot be interpreted as direct evidence of causation. Longitudinal studies following students over time would help clarify how these relationships evolve and whether early personality traits predict later behaviors.

While the study does not establish direct cause-and-effect relationships, it does suggest a network of associations that educators and administrators may want to consider. The use of generative AI in academic settings is growing rapidly, and the researchers argue that it is important to understand not only how students are using these tools but also why.

The study, “Dark personality traits are associated with academic misconduct, frustration, negative thinking, and generative AI use habits: the case of Sichuan art universities,” was authored by Jingyi Song and Shuyan Liu.




AI Insights

Regulatory Policy and Practice on AI’s Frontier

Adaptive, expert-led regulation can unlock the promise of artificial intelligence.

Technological breakthroughs have historically played a distinctive role in accelerating economic growth, expanding opportunity, and enhancing standards of living. Technology enables us to get more out of existing knowledge and prior scientific discoveries, in addition to generating new insights that enable new inventions. It is associated with new jobs, higher incomes, greater wealth, better health, educational improvements, time-saving devices, and many other concrete gains that improve people's day-to-day lives. The benefits of technology, however, are not evenly distributed, even when an economy is more productive and growing overall. When technology is disruptive, costs and dislocations are shouldered by some more than others, and periods of transition can be difficult.

Theory and experience teach that innovative technology does not automatically improve people’s station and situation merely by virtue of its development. The way technology is deployed and the degree to which gains are shared—in other words, turning technology’s promise into reality without overlooking valid concerns—depends, in meaningful part, on the policy, regulatory, and ethical decisions we make as a society.

Today, these decisions are front and center for artificial intelligence (AI).

AI’s capabilities are remarkable, with profound implications spanning health care, agriculture, financial services, manufacturing, education, energy, and beyond. The latest research is demonstrably pushing AI’s frontier, advancing AI-based reasoning and AI’s performance of complex multistep tasks, and bringing us closer to artificial general intelligence (high-level intelligence and reasoning that allows AI systems to autonomously perform highly complex tasks at or beyond human capacity in many diverse instances and settings). Advanced AI systems, such as AI agents (AI systems that autonomously complete tasks toward identified objectives), are leading to fundamentally new opportunities and ways of doing things, which can unsettle the status quo, possibly leading to major transformations.

In our view, AI should be embraced while preparing for the change it brings. This includes recognizing that AI breakthroughs have arrived faster, and with greater impact, than anticipated. A terrific indication of AI's promise is the 2024 Nobel Prize in chemistry, whose winners used AI to "crack the code" of protein structures, "life's ingenious chemical tools." At the same time, as AI becomes widely used, guardrails, governance, and oversight should manage risks, safeguard values, and look out for those disadvantaged by disruption.

Government can help fuel the beneficial development and deployment of AI in the United States by shaping a regulatory environment conducive to AI that fosters the adoption of goods, services, practices, processes, and tools leveraging AI, in addition to encouraging AI research.

It starts with a pro-innovation policy agenda. Once the goal of promoting AI is set, the game plan to achieve it must be architected and implemented. Operationalizing policy into concrete progress can be difficult and more challenging when new technology raises novel questions infused with subtleties.

Regulatory agencies that determine specific regulatory requirements and enforce compliance play a significant part in adapting and administering regulatory regimes that encourage rather than stifle technology. Pragmatic regulation compatible with AI is instrumental so that regulation is workable as applied to AI-led innovation, further unlocking AI’s potential. Regulators should be willing to allow businesses flexibility to deploy AI-centered uses that challenge traditional approaches and conventions. That said, regulators’ critical mission of detecting and preventing harmful behavior should not be cast aside. Properly calibrated governance, guardrails, and oversight that prudently handle misuse and misconduct can support technological advancement and adoption over time.

Regulators can achieve core regulatory objectives, including, among other things, consumer protection, investor protection, and health and safety, without being anchored to specific regulatory requirements if the requirements—fashioned when agentic and other advanced AI was not contemplated—are inapt in the context of current and emerging AI.

We are not implying that vital governmental interests that are foundational to many regulatory regimes should be jettisoned. Rather, it is about how those interests are best achieved as technology changes, perhaps dramatically. It is about regulating in a way that allows AI to reach its promise and ensuring that essential safeguards are in place to protect persons from wrongdoing, abuses, and harms that could frustrate AI’s real-world potential by undercutting trust in—and acceptance of—AI. It is about fostering a regulatory environment that allows for constructive AI-human collaboration—including using AI agents to help monitor other AI agents while humans remain actively involved addressing nuances, responding to an AI agent’s unanticipated performance, engaging matters of greatest agentic AI uncertainty, and resolving tough calls that people can uniquely evaluate given all that human judgment embodies.

This requires modernizing regulation—in its design, its detail, its application, and its clarity—so that it works, very practically, in the context of AI by accommodating AI's capabilities.

Accomplishing this type of regulatory modernity is not easy. It benefits from combining technological expertise with regulatory expertise. When integrated, these dual perspectives assist regulatory agencies in determining how best to update regulatory frameworks and specific regulatory requirements to accommodate expected and unexpected uses of advanced AI. Even when underpinning regulatory goals do not change, certain decades-old—or newer—regulations may not fit with today’s technology, let alone future technological breakthroughs. In addition, regulatory updates may be justified in light of regulators’ own use of AI to improve regulatory processes and practices, such as using AI agents to streamline permitting, licensing, registration, and other types of approvals.

Regulatory agencies are filled with people who bring to bear valuable experience, knowledge, and skill concerning agency-specific regulatory domains, such as financial services, antitrust, food, pharmaceuticals, agriculture, land use, energy, the environment, and consumer products. That should not change.

But the commissions, boards, departments, and other agencies that regulate so much of the economy and day-to-day life—the administrative state—should have more technological expertise in-house relevant to AI. AI's capabilities are materially increasing at a rapid clip, so staying on top of what AI can do and how it does it—including understanding leading AI system architecture and imagining how AI might be deployed as it advances toward its frontier—is difficult. Without question, there are individuals across government with impressive technological chops, and regulators have made commendable strides keeping apprised of technological innovation. Indeed, certain parts of government are inherently technology-focused. Many regulatory agencies are not, however, and even at those agencies an in-depth understanding of AI is increasingly important.

Regulatory agencies should bring on board more individuals with technology backgrounds from the private sector, academia, research institutions, think tanks, and elsewhere—including computer scientists, physicists, software engineers, AI researchers, cryptographers, and the like.

For example, we envision a regulatory agency’s lawyers working closely with its AI engineers to ensure that regulatory requirements contemplate and factor in AI. Lawyers with specific regulatory knowledge can prompt large language models to measure a model’s interpretation of legal and regulatory obligations. Doing this systematically and with a large enough sample size requires close collaboration with AI engineers to automate the analysis and benchmark a model’s results. AI engineers could partner with an agency’s regulatory experts in discerning the technological capabilities of frontier AI systems to comport with identified regulatory objectives in order to craft regulatory requirements that account for and accommodate the use of AI in consequential contexts. AI could accelerate various regulatory functions that typically have taken considerable time for regulators to perform because they have demanded significant human involvement. To illustrate, regulators could use AI agents to assist the review of permitting, licensing, and registration applications that individuals and businesses must obtain before engaging in certain activities, closing certain transactions, or marketing and selling certain products. Regulatory agencies could augment humans by using AI systems to conduct an initial assessment of applications and other requests against regulatory requirements.
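As a rough sketch of the lawyer-engineer collaboration described above, the snippet below systematically queries a large language model with regulatory questions and collects its answers for benchmarking against lawyer-written reference interpretations. It uses the OpenAI Python client; the model name, the questions, and the rule names in them are assumptions for illustration, not any agency's actual practice.

```python
# Sketch: querying an LLM about regulatory obligations at scale so that
# lawyers can benchmark its interpretations. Model name, questions, and
# rule names are illustrative assumptions. Requires: pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical questions a regulatory lawyer might pose; in practice,
# a large curated sample would be used.
questions = [
    "Under hypothetical Rule X, must a broker disclose AI-assisted order routing?",
    "Does hypothetical Rule Y's recordkeeping duty cover AI agent activity logs?",
]

answers = []
for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are assisting a regulatory compliance review. "
                        "Answer precisely and name the obligation you rely on."},
            {"role": "user", "content": q},
        ],
        temperature=0,  # deterministic output makes benchmarking repeatable
    )
    answers.append({"question": q, "answer": resp.choices[0].message.content})

# Lawyers would then score each answer against reference interpretations,
# turning the comparison into a benchmark of the model's legal reading.
for a in answers:
    print(a["question"], "->", a["answer"][:200])
```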

The more regulatory agencies have the knowledge and experience of technologists in-house, the more understanding regulatory agencies will gain of cutting-edge AI. When that enriched technological insight is combined with the breadth of subject-matter expertise agencies already possess, regulatory agencies will be well-positioned to modernize regulation that fosters innovation while preserving fundamental safeguards. Sophisticated technological know-how can help guide regulators’ decisions concerning how best to revise specific regulatory features so that they are workable with AI and conducive to technological progress. The technical elements of regulation should be informed by the technical elements of AI to ensure practicable alignment between regulation and AI, allowing AI innovation to flourish without incurring undue risks.

With more in-house technological expertise, we think regulatory agencies will grow increasingly comfortable making the regulatory changes needed to accommodate, if not accelerate, the development and adoption of advanced AI.

There is more to technological progress that propels economic growth than technological capability in and of itself. An administrative state that is responsive to the capabilities of AI—including those on AI’s expanding frontier—could make a big difference converting AI’s promise into reality, continuing the history of technological breakthroughs that have improved people’s lives for centuries.

Troy A. Paredes




AI Insights

In the ever-changing AI world, DeepL gains unrivaled status as an AI-based language service

In the ever-changing world of artificial intelligence (AI), one company is gaining unrivaled status as a provider of AI-based language services. DeepL, founded in Germany in 2017, now counts 200,000 companies around the world as customers.

DeepL Chief Revenue Officer David Parry-Jones, who recently spoke with Maeil Business Newspaper by video, is in charge of all customer management and support.

DeepL has focused on winning customers by rolling out services tailored to their needs, such as "DeepL for Enterprise," a corporate product, and "DeepL Voice," a voice translation solution, both launched last year.

"We are focusing on our translator, which is our key product, and DeepL Voice is gaining popularity as it runs in the Teams environment," Parry-Jones said. "We are also considering bringing it to Zoom, the video conferencing platform."

DeepL's voice translation solution is currently integrated into Microsoft Teams. When meeting participants speak in their own languages, the other participants see subtitles translated in real time. Because Zoom and MS Teams together account for nearly 90% of the global video conferencing market, bringing DeepL's solution to Zoom as well would go a long way toward removing the language barrier in video conferences.

DeepL's solutions all focus on cutting the time and resources spent on translation while delivering accurate results. "According to a Forrester Research study commissioned last year, companies' internal document translation time fell by 90% when they used DeepL solutions," Parry-Jones said, explaining that the products help break down language barriers and strengthen efficiency.

The Asian market, including Korea, a non-English-speaking country, is considered a key market for DeepL. CEO Jarek Kutylowski also visits Korea almost every year to meet domestic customers.

"The Asia-Pacific region and Japan are behind DeepL's rapid growth," Parry-Jones said, noting that the region accounts for 45% of translation service sales. "In particular, Japan is the second-largest market, and Korea is closely following it." He explained that Korea and Japan have similar levels of English proficiency and are home to many large corporations operating across multiple countries, so demand for high-quality translation is high.

In Japan, Daiwa Securities uses DeepL solutions when disclosing performance-related data to the world, and Fujifilm and NEC are also representative DeepL customers. In Korea, Yanolja, Lotte Innovate, and Lightning Market use DeepL.

Among Asian countries, DeepL currently has an office only in Japan; it is also weighing a Korean office, though the exact timing has not been set.

"DeepL continues to invest in improving translation quality while pursuing growth in Korea," Parry-Jones said. "We need a local team for that growth. We can't promise an exact schedule, but (a Korean office) will be a natural development."

Meanwhile, as generative AI services such as ChatGPT become more common, they too can translate at a serviceable level, even though translation is not their main function, posing a threat to dedicated translation services.

DeepL sees these services as competitors. "DeepL is a translation company, so the difference is that we strive for accuracy and innovative language services," Parry-Jones said. "The gap in translation quality with ChatGPT has narrowed slightly. We will continue to improve quality while testing regularly."

[Reporter Jeong Hojun]




AI Insights

There is No Such Thing as Artificial Intelligence – Nathan Beacom

One man tried to kill a cop with a butcher knife because OpenAI killed his lover. A 29-year-old mother became violent toward her husband when he suggested that her relationship with ChatGPT was not real. A 41-year-old, now-single mom split with her husband after he became consumed with chatbot communication, developing bizarre paranoia and conspiracy theories.

These stories, reported by the New York Times and Rolling Stone, represent the frightening, far end of the spectrum of chatbot-induced madness. How many people, we might wonder, are quietly losing their minds because they’ve turned to chatbots as a salve for loneliness or frustrated romantic desire?


