Enhancing art creation through AI-based generative adversarial networks in educational auxiliary system
Quantitative performance evaluation

To evaluate the generative quality and diversity of outputs produced by the proposed GAN-based educational system, we compared our model against four baseline architectures commonly used in image-to-image translation: Baseline-GAN, StyleGAN, Pix2Pix, and CycleGAN. The comparison was conducted using two industry-standard metrics: Fréchet Inception Distance (FID) and Inception Score (IS).
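FID compares the Gaussian statistics (mean and covariance) of Inception features extracted from real and generated image sets. The paper does not publish its evaluation code; as a hedged sketch that assumes feature statistics have already been extracted, the metric can be computed as:

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(mu1, sigma1, mu2, sigma2):
    """FID between two Gaussians (mu, sigma) fitted to Inception features.

    Lower is better; identical distributions give 0.
    """
    diff = mu1 - mu2
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerics
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

In practice the means and covariances come from Inception-v3 pool features of several thousand real and generated images; the closed form above is standard, while the feature-extraction step is omitted here.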

Fig. 4

Quantitative Evaluation using FID and IS. The proposed model achieves the lowest FID of 34.2, indicating high visual fidelity, and the highest Inception Score (IS) of 3.9, reflecting both diversity and recognizability of generated artworks.

As shown in Fig. 4, the proposed model achieved the lowest FID score of 34.2, significantly outperforming CycleGAN (53.4) and Pix2Pix (58.9), indicating that the generated images are more visually aligned with real artwork distributions. In terms of Inception Score, our model reached 3.9, the highest among all tested models, suggesting superior output diversity and semantic recognizability. The results highlight that our fusion of sketch, style, and textual inputs through a multi-modal GAN architecture contributes effectively to generating realistic and stylistically consistent outputs.
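Inception Score rewards outputs whose class posteriors are confident (recognizable) while the marginal over all samples stays broad (diverse). A minimal sketch, assuming an `(N, C)` array of classifier probabilities is already available (the classifier itself is not shown):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """IS = exp(mean KL(p(y|x) || p(y))) over N generated samples.

    probs: (N, C) array of per-image class probabilities p(y|x).
    """
    p_y = probs.mean(axis=0, keepdims=True)  # marginal label distribution p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```

Uniform posteriors give an IS of 1 (no recognizability); perfectly confident predictions spread evenly over C classes approach an IS of C, which is why higher values indicate both diversity and recognizability.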

Qualitative visual outcomes and user feedback

Beyond quantitative metrics, we evaluated the perceptual quality and user satisfaction of the generated artworks through qualitative analysis. Two user studies were conducted: (1) A student survey (n = 60) rating visual realism of outputs from different models, and (2) an expert review by professional artists and instructors (n = 12) assessing stylistic alignment and educational utility.

Fig. 5

Subjective Evaluation of Visual Outcomes. Ratings show that the proposed GAN-based system significantly outperforms baseline models in both perceived realism and stylistic alignment, according to students and domain experts.

As shown in Fig. 5, participants rated the outputs of the proposed model highest for both visual realism (mean = 4.7/5) and style consistency (mean = 4.8/5), surpassing all baselines. In comparison, CycleGAN and Pix2Pix scored 4.0 and 3.8 for realism, and 3.9 and 3.6 for style, respectively. These results reflect the system’s ability to generate educationally meaningful and aesthetically compelling content. Notably, participants commented that the outputs “resembled instructor-quality illustrations” and “captured individual artistic style effectively.”

Ablation study and component impact

To understand the relative contribution of each input modality and architectural component, we conducted an ablation study by systematically removing key modules from the proposed system and evaluating the resulting performance using FID and SSIM metrics. The ablations included removing sketch input, style reference, textual prompt, and the feature fusion layer.

Fig. 6

Ablation Study Results. Removing key components (sketch, style, text, fusion) results in a significant drop in performance. The full model achieves the best scores, confirming the complementary nature of each input modality.

As shown in Fig. 6, the full model achieved the best performance with a FID of 34.2 and SSIM of 0.81. When the sketch input was removed, FID increased to 46.8 and SSIM dropped to 0.72, indicating that sketches are essential for structural guidance. Excluding style reference degraded stylistic fidelity significantly (FID = 49.5, SSIM = 0.69), confirming its importance for visual coherence. Omitting text input resulted in FID = 44.7 and SSIM = 0.75, suggesting textual prompts aid in theme alignment and abstraction. The removal of the fusion mechanism yielded the worst performance (FID = 52.1, SSIM = 0.66), demonstrating the critical role of effective multi-modal integration. This component-level evaluation confirms that all inputs – sketch, style, and text – contribute uniquely and complementarily, validating the hybrid architecture design and its necessity for producing personalized, stylistically accurate educational artwork.
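The reported ablation numbers can be tabulated to make the component ranking explicit. This snippet only re-uses the figures quoted above (it is not the evaluation pipeline itself):

```python
# Reported ablation results (FID: lower is better; SSIM: higher is better).
ablations = {
    "full model":         {"FID": 34.2, "SSIM": 0.81},
    "no sketch input":    {"FID": 46.8, "SSIM": 0.72},
    "no style reference": {"FID": 49.5, "SSIM": 0.69},
    "no text prompt":     {"FID": 44.7, "SSIM": 0.75},
    "no fusion layer":    {"FID": 52.1, "SSIM": 0.66},
}

best_fid = min(ablations, key=lambda k: ablations[k]["FID"])
best_ssim = max(ablations, key=lambda k: ablations[k]["SSIM"])

# Per-component degradation relative to the full model.
fid_cost = {k: round(v["FID"] - ablations["full model"]["FID"], 1)
            for k, v in ablations.items() if k != "full model"}
```

Sorting `fid_cost` reproduces the ordering in the text: removing the fusion layer is costliest, followed by style, sketch, and text.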

Latency and real-time responsiveness

To assess the suitability of the proposed GAN-based educational system for real-time classroom environments, we measured inference latency and scalability under increasing user loads. Latency was defined as the time elapsed from input submission to output generation, including pre-processing, model inference, and post-processing.
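End-to-end latency as defined here (pre-processing through post-processing) is typically measured by timing each request with a high-resolution clock. The following is a generic sketch rather than the authors' harness; `pipeline` is a placeholder for the full submission-to-output path:

```python
import statistics
import time

def measure_latency(pipeline, inputs, repeats=5):
    """Wall-clock latency per request in milliseconds.

    pipeline: callable covering pre-processing, inference, post-processing.
    inputs:   iterable of request payloads.
    """
    samples = []
    for _ in range(repeats):
        for x in inputs:
            t0 = time.perf_counter()
            pipeline(x)
            samples.append((time.perf_counter() - t0) * 1000.0)
    ordered = sorted(samples)
    return {
        "mean_ms": statistics.mean(samples),
        "p95_ms": ordered[int(0.95 * (len(ordered) - 1))],
    }
```

Reporting a tail percentile alongside the mean matters for classroom use: a 278 ms average with a long tail would still feel unresponsive to some students.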

Fig. 7

Real-Time Responsiveness. (a) The proposed model achieves the lowest latency among all baselines, supporting real-time educational use. (b) Scalability test shows the model maintains sub-300ms responsiveness up to 200 concurrent users.

As illustrated in Fig. 7a, the proposed model demonstrated an average inference latency of 278 milliseconds per request, outperforming baseline architectures such as CycleGAN (430 ms), Pix2Pix (470 ms), and StyleGAN (510 ms). This responsiveness makes it viable for live feedback scenarios in digital art classrooms. Figure 7b shows the system’s scalability when deployed in a server-based architecture. Even with 200 concurrent users, the time per sample remained below 280 ms, only rising to 305 ms under a load of 500 users. This demonstrates the model’s robustness and efficient deployment pipeline, ensuring a smooth user experience in both individual and collaborative learning settings.

User study and engagement metrics

To assess the system’s impact on learner experience and behavior, we conducted a 4-week user study with 60 undergraduate students from three institutions: Nanjing University of the Arts (30), Jiangsu Academy of Fine Arts (15), and Shanghai Institute of Visual Arts (15). Participants (aged 18–24, M = 20.3, SD = 1.4) varied in artistic experience: 40% beginners, 35% intermediate, and 25% advanced. Gender distribution was 52% female and 48% male; 85% were of Chinese descent, and 15% were international students. Over the 4-week period, participants used the system and responded to a structured survey measuring five key indicators: confidence, creativity, engagement, motivation, and overall satisfaction. Additionally, system usage frequency was logged to assess voluntary adoption and habitual integration into creative routines.

Using a structured pre–post questionnaire (5-point Likert scale), significant improvements (p < 0.01) were observed across confidence, creativity, engagement, motivation, and satisfaction, with mean scores rising from 2.1–2.3 to 4.1–4.5. Engagement increased by 42.7%, and expert evaluation showed a 35.4% improvement in artwork quality. System usage was high: 43% of participants used it daily, and 31% used it 3–4 times per week. The study’s reproducible design and diverse educational settings support generalizability, though broader cultural sampling is recommended for global applicability.
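A pre–post comparison of this kind is typically tested with a paired t statistic on per-participant score differences. A stdlib-only sketch with illustrative (not actual) Likert scores:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic for matched pre/post scores (post minus pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of differences
    return statistics.mean(diffs) / (sd / math.sqrt(n))

# Hypothetical per-student engagement ratings before and after system use.
pre = [2, 2, 3, 2, 2, 3]
post = [4, 5, 4, 4, 5, 4]
t_stat = paired_t(pre, post)
```

With n participants, the statistic is compared against a t distribution with n − 1 degrees of freedom; the large improvements reported here (roughly two Likert points) yield t values far beyond the p < 0.01 threshold.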

Fig. 8

Learner Experience and Engagement. The proposed system improved confidence, motivation, and creativity. A majority of students used the system more than three times a week, indicating strong adoption.

As shown in Fig. 8a, there was a marked increase across all five metrics after using the system. Average scores rose from approximately 2.1–2.3 before use to 4.1–4.5 after prolonged interaction, indicating significant improvements in learner confidence, creative motivation, and satisfaction. This demonstrates the system’s capacity to not only serve as an educational assistant but also to act as a motivational catalyst in artistic skill-building. Figure 8b presents the distribution of user engagement frequency. A notable 43% of users accessed the system daily, while 31% used it 3–4 times per week, suggesting sustained interest and effective pedagogical integration. Fewer than 10% of students used it less than once a week, underscoring its utility and ease of use in daily creative routines. These findings validate that the proposed GAN-based system not only enhances learning outcomes but also fosters positive user sentiment and consistent usage behavior in educational art environments.

Comparative analysis

To quantify the effectiveness of the proposed GAN-based educational auxiliary system, we conducted a rigorous comparative analysis against state-of-the-art image-to-image translation models, including Pix2Pix, CycleGAN, and StyleGAN. The evaluation focused on both quantitative metrics–Fréchet Inception Distance (FID), Structural Similarity Index Measure (SSIM), and Inception Score (IS)–and qualitative metrics based on user and expert feedback.

Table 2 summarizes the overall performance comparison. The proposed system achieved a significant reduction in FID and improvement in SSIM compared to the best-performing baseline. On average, our model reduced FID by over 35% and increased SSIM by approximately 17%, illustrating higher fidelity and perceptual quality in generated outputs. These gains are expressed formally as:

$$\begin{aligned} \Delta \text {FID} = \text {FID}_{\text {baseline}} - \text {FID}_{\text {proposed}} = 53.4 - 34.2 = 19.2 \end{aligned}$$

(28)

$$\begin{aligned} \text {SSIM Gain (\%)} = \frac{\text {SSIM}_{\text {proposed}} - \text {SSIM}_{\text {baseline}}}{\text {SSIM}_{\text {baseline}}} \times 100 = \frac{0.81 - 0.69}{0.69} \times 100 \approx 17.4\% \end{aligned}$$

(29)
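The two gains in Eqs. (28) and (29) are easy to verify numerically from the reported scores:

```python
# Reported best-baseline and proposed-model scores.
fid_baseline, fid_proposed = 53.4, 34.2
ssim_baseline, ssim_proposed = 0.69, 0.81

delta_fid = fid_baseline - fid_proposed                             # Eq. (28)
ssim_gain = (ssim_proposed - ssim_baseline) / ssim_baseline * 100   # Eq. (29)
```

This gives ΔFID = 19.2 and an SSIM gain of about 17.4%, matching the equations above.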

These improvements stem from several architectural and data-centric innovations. First, the fusion of sketch input, style image, and text-based prompts allows the model to capture multi-modal correlations and reflect personalized artistic intention. Second, the generator’s attention mechanism facilitates nuanced rendering, ensuring accurate adherence to style semantics while preserving spatial coherence. Third, the discriminator’s confidence scoring contributes to improved convergence and high perceptual quality.

To assess the effectiveness of our GAN-based educational system, we compared it against leading models (Pix2Pix, CycleGAN, and StyleGAN) using FID, SSIM, and IS metrics, alongside expert and user feedback. Baseline rationale: Pix2Pix [20] is a supervised sketch-to-image model, strong for structured tasks; CycleGAN [15] performs unpaired translation, ideal for style transfer; StyleGAN [22] is known for high-quality generation and style control via its latent space. We excluded SPADE GAN [20] (high FID of 38.7, slow inference, low flexibility) and DALL-E-inspired models [12] (latency above 600 ms, limited interpretability). Our model reduced FID by 35% and improved SSIM by approximately 17% over the baselines. Users rated its realism (mean = 4.75) and style alignment (4.85) higher than Pix2Pix (4.0, 3.9), CycleGAN (3.8, 3.8), and StyleGAN (4.2, 4.1). It also outperformed the baselines in latency (278 ms vs. 450–510 ms), confirming its suitability for real-time, interpretable educational applications.

Moreover, the system demonstrates superior real-time responsiveness. While the baseline models average 430–510 ms of inference time, our model maintains latency under 280 ms, ensuring applicability in live educational settings. From a user experience perspective, feedback scores for creativity, engagement, and motivation increased by over 80% after system use, with 74% of users adopting the tool more than three times a week. Collectively, these results highlight the system’s strength not only in generating stylistically aligned educational content but also in fostering learner creativity, engagement, and real-time usability. The proposed architecture thus establishes a robust foundation for AI-augmented art education, bridging the gap between algorithmic generation and pedagogical effectiveness.

Potential limitations

Despite its demonstrated performance across generative quality, latency, and user engagement, the proposed GAN-based educational system presents several inherent limitations. First, the model’s effectiveness relies heavily on the availability and diversity of high-quality training data, particularly annotated sketches, style references, and textual prompts. The current datasets–QuickDraw, Sketchy, BAM!, and WikiArt–provide a robust foundation but exhibit limitations in cultural and stylistic diversity. Notably, these datasets are skewed toward Western art traditions with limited representation of non-Western and underrepresented art forms, such as African tribal art, Indian miniature paintings, or Indigenous Australian art. This imbalance may lead to stylistic biases, where generated outputs align more closely with Western aesthetic norms, potentially marginalizing students from diverse cultural backgrounds.

Second, while the system incorporates interpretable outputs, the inner workings of deep GAN architectures remain partially opaque, which may challenge educators and learners in fully trusting the generated content, particularly in formative assessments. Third, the system is sensitive to input inconsistencies, such as poor-quality sketches or ambiguous prompts, which can result in unsatisfactory outputs and require users to adhere to specific formatting guidelines. Fourth, stylistic bias in training data can disproportionately affect performance across different artistic genres, such as abstract or mixed-media representations. Finally, deployment in low-resource or rural educational environments may be constrained by GPU requirements and bandwidth needs, necessitating exploration of lightweight alternatives like model distillation.

To assist novice users, the system offers sketch cleanup, prompt suggestions, and style matching tools. Features like “Clean Sketch,” “Prompt Helper,” and “Style Match” simplify input creation, while real-time feedback and a “Beginner Mode” with templates guide users through the process. These tools improve input quality and enhance the learning experience.

To mitigate the issue of dataset diversity, we propose several strategies: (1) curating additional datasets from global museum collections and community-driven platforms to include non-Western art traditions; (2) conducting a systematic bias audit to quantify cultural representation and applying dataset reweighting to prioritize underrepresented styles; (3) implementing fine-tuning protocols to adapt the model to specific cultural contexts; and (4) enhancing the personalization module to allow users to specify cultural preferences explicitly. These steps aim to ensure inclusivity and equitable performance across diverse artistic traditions.
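One concrete form of dataset reweighting for underrepresented styles is inverse-frequency sampling, where each sample's weight is inversely proportional to its style's frequency. A minimal sketch with hypothetical style labels:

```python
from collections import Counter

def inverse_frequency_weights(style_labels):
    """Per-sample sampling weights that up-weight underrepresented styles.

    Each sample's weight is 1 / count(its style), normalized to sum to 1,
    so every style contributes equal total probability mass.
    """
    counts = Counter(style_labels)
    raw = [1.0 / counts[s] for s in style_labels]
    total = sum(raw)
    return [w / total for w in raw]

# Hypothetical skew: 8 Western-style samples vs. 2 miniature-painting samples.
labels = ["western"] * 8 + ["miniature"] * 2
weights = inverse_frequency_weights(labels)
```

Each miniature sample then receives four times the sampling weight of each Western sample, giving both styles equal expected representation per training batch.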



Global Artificial Intelligence in Education Market to Reach USD

Artificial Intelligence in the Education Sector Market

The global Artificial Intelligence (AI) in the Education Sector market, valued at approximately USD 5.9 billion in 2024, is projected to grow to nearly USD 38.2 billion by 2034, registering a strong CAGR of 20.8%. This growth is fueled by the rising need for personalized learning experiences, faster adoption of virtual classrooms, ongoing teacher shortages, and increasing use of real-time data analytics to improve learning outcomes.
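The reported CAGR can be sanity-checked from the endpoint figures. Using the rounded USD 5.9 billion and USD 38.2 billion values over the ten years from 2024 to 2034 gives roughly 20.5%, close to the stated 20.8%; the small gap is plausibly due to rounding of the endpoint values:

```python
# Compound annual growth rate from the reported endpoints (USD billion).
start_value, end_value = 5.9, 38.2
years = 2034 - 2024

cagr = (end_value / start_value) ** (1 / years) - 1
```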

In 2023, over 350 million learners worldwide engaged with AI-powered platforms, with EdTech leaders like Duolingo, BYJU’S, and Pearson AI recording double-digit growth in users. Duolingo’s adaptive engine handled over 1.4 billion daily practices, while BYJU’S facilitated 2.1 million daily learning experiences in India alone. AI tools are transforming teaching methods, streamlining administrative work, and enabling personalized student support.

To Receive A PDF Sample Of The Report, Visit @https://www.emergenresearch.com/request-sample/484

Virtual teaching tools and intelligent tutoring systems are now used by more than 42% of higher education institutions in the U.S. and UK. AI-powered proctoring services such as ProctorU and Examity oversaw over 25 million online exams in 2023, ensuring integrity in digital assessments. Platforms like Google Classroom, Microsoft Education Insights, and ClassDojo provide educators with real-time insights to adapt lessons and support struggling students early.

AI technologies are also helping bridge the education gap in remote areas. In 2023, adaptive learning applications reached around 42 million students in underserved regions across Africa, Asia, and Latin America. Generative AI tools like ChatGPT and Google Gemini are increasingly used for lesson planning, content creation, and reducing teacher preparation time by up to 40%.

Regional Insights

North America leads the market in 2024, backed by strong EdTech investments and early AI adoption in K-12 and higher education. Asia-Pacific is the fastest-growing region due to large-scale government programs in China, India, and South Korea, with China investing over USD 1.2 billion in AI-powered classrooms in 2023. Europe is advancing AI integration through EU funding and partnerships between tech firms and universities.

Market Drivers

Key drivers include the demand for personalized learning, the shift to digital classrooms, and the need for data-driven decision-making. For instance, BYJU’S AI tools in India boosted test scores by 19%, while Khan Academy’s AI tutor “Khanmigo” reached over 200,000 students in just six months. AI analytics tools now assist over 12 million teachers worldwide in tracking student performance.

Browse Detailed Research Report @https://www.emergenresearch.com/industry-report/artificial-intelligence-in-the-education-sector-market

Trends and Innovations

AI is enabling personalized learning, intelligent tutoring systems, and real-time feedback. Accessibility tools such as speech-to-text and translation are supporting over 180 million students annually. AI integration with extended reality (XR) is enhancing immersive learning, while nonprofit programs are expanding access to digital education in underserved areas. Growing attention to AI ethics and student data privacy is shaping industry practices.

Market Restraints

Challenges include data privacy concerns, the digital divide, educator resistance, and policy uncertainty. Limited internet access and infrastructure in rural regions restrict AI adoption, and inconsistent regulations between countries increase compliance costs. Teacher training and clear ethical standards are essential for overcoming these barriers.

Segment Highlights

Technology: Machine Learning & Deep Learning held the largest share (39%) in 2024, followed by Natural Language Processing (24%) and Computer Vision (15%).

Platform: Cloud-based solutions dominated with 61% market share due to scalability and cost efficiency.

Applications: Virtual Learning Environments led with 33% share, while Intelligent Tutoring Systems and Language Learning tools showed the fastest growth.

End Use: K-12 schools accounted for 41% of deployments in 2024, with strong growth also seen in higher education and corporate training sectors.

Buy Now: @https://www.emergenresearch.com/select-license/484

Some major players operating in the artificial intelligence in the education sector market are:

Google LLC

Microsoft Corporation

Amazon Web Services, Inc.

International Business Machines Corporation

Cognizant Technology Solutions Corp.

Pearson PLC

Nuance Communications Inc.

Blackboard Inc.

Carnegie Learning, Inc.

Cognii, Inc.

Artificial Intelligence in the Education Sector Market Segmentation Analysis

By Component Outlook (Revenue, USD Billion, 2021-2034)

Solutions (Intelligent Tutoring Systems, Learning Management Systems, AI-Powered Content Creation, Adaptive Assessments, Analytics Platforms)

Services (Professional Services, Managed Services, AI Training and Consulting)

By Deployment Mode Outlook (Revenue, USD Billion, 2021-2034)

Cloud-Based

On-Premises

By Technology Outlook (Revenue, USD Billion, 2021-2034)

Machine Learning & Deep Learning

Natural Language Processing (NLP)

Computer Vision

Speech & Voice Recognition

Others

By Application Outlook (Revenue, USD Billion, 2021-2034)

Virtual Learning Environments

Intelligent Tutoring Systems

Student Information Systems

Classroom Management

Language Learning

Accessibility Tools

Others

By End-User Outlook (Revenue, USD Billion, 2021-2034)

K-12 Schools

Higher Education Institutions

Vocational & Technical Training

Corporate Training & Workforce Development

Government & Nonprofit Organizations

By Regional Outlook (Revenue, USD Billion, 2021-2034)

North America

U.S.

Canada

Mexico

Europe

Germany

United Kingdom

France

Italy

Spain

Nordics

Asia Pacific

China

India

Japan

South Korea

Australia

Latin America

Brazil

Argentina

Middle East & Africa

Saudi Arabia

UAE

South Africa

Nigeria

Browse More Report By Emergen Research:

Automated Suturing Devices Market

https://www.emergenresearch.com/industry-report/automated-suturing-devices-market

Burial Insurance Market

https://www.emergenresearch.com/industry-report/burial-insurance-market

Empty Capsules Market

https://www.emergenresearch.com/industry-report/empty-capsules-market

Forensic Imaging Market

https://www.emergenresearch.com/industry-report/forensic-imaging-market

Contact Us:

Eric Lee

Corporate Sales Specialist

Emergen Research | Web: www.emergenresearch.com

Direct Line: +1 (604) 757-9756

E-mail: sales@emergenresearch.com

Visit for More Insights: https://www.emergenresearch.com/insights

About Us:

Emergen Research is a market research and consulting company that provides syndicated research reports, customized research reports, and consulting services. Our solutions purely focus on your purpose to locate, target, and analyse consumer behavior shifts across demographics, across industries, and help clients make smarter business decisions. We offer market intelligence studies ensuring relevant and fact-based research across multiple industries, including Healthcare, Touch Points, Chemicals, Types, and Energy. We consistently update our research offerings to ensure our clients are aware of the latest trends existent in the market. Emergen Research has a strong base of experienced analysts from varied areas of expertise. Our industry experience and ability to develop a concrete solution to any research problems provides our clients with the ability to secure an edge over their respective competitors.

This release was published on openPR.



The role of AI in Purdue University’s academic future – Indianapolis News | Indiana Weather | Indiana Traffic

This is the second entry in WISH-TV’s deeper dive into artificial intelligence in education, this time examining how AI is being used in colleges and universities.

WEST LAFAYETTE, Ind. (WISH) — Students start the fall semester at Purdue University Aug. 25.

Leaders there say artificial intelligence will likely be a part of all college students’ education. However, how much or how little depends on their major, their professors, and the students themselves.

Jamil Mansouri just graduated from Purdue in May and double majored in Agricultural Economics and Political Science. He will soon start graduate school for Business Analytics and Data Management. He has become familiar with AI as a student.

“I think there are fields where AI can be your biggest tool or not help you that much,” Mansouri said.

He is also a member of the Student Pedagogy Advocates program and is a student voice while the university develops frameworks for how to use and teach AI.

Jamil Mansouri demonstrates AI use via Chat GPT on his phone.
(WISH Photo)

When asked what people should know about AI in the university setting, Mansouri said, “The student body is not a monolith, it is very major specific. The second thing is students want consistent frameworks and guidance when using it.

“A lot of faculty have very different perspectives on AI and it translates into their coursework. Some faculty actively support it and give you these resources. Others say, ‘If I even find a hint of it, I’m going to give you an F in the class.’ Students feel confused; they don’t know exactly where to go with it or where to engage with these tools.”

That’s where David Nelson comes into play. He is the associate director for Purdue Center for Instructional Excellence and a courtesy faculty in the John Martinson Honors College.

He helps the 2,400 faculty members stay updated on educational trends, such as AI.

“AI disrupts a lot of processes that we’ve come to rely on in education,” Nelson explained. “AI is creating a lot more freedom of choice in cognitive work. That’s a big part of what it is doing right now, and we haven’t had to worry about that choice. So, now we are.”

He’s helping implement Purdue University’s AI policy.

“Rather than institute one kind of broad AI policy, the university has encouraged different instructors and different departments to really investigate it,” Nelson said.

Professors will set the AI policy for their own classes; the level of use and implementation will depend on the course, the subject and the faculty member.

“We’re very much encouraging transparency for faculty instructors in their AI policies, and those can be very different from class to class. We’re also encouraging faculty and instructors to make sure that there is a human in the loop when trying to give feedback or assessment to students by using any AI,” he said. “Everything else has been kind of, ‘We’d like you to experiment; we want you to be aware of what existing rules are about academic integrity and research’ — but it’s a lot of trust in full-time professionals to do their jobs.”

Pedestrians walk across the street in front of Purdue’s main gate.
(WISH Photo)

Purdue is trusting students, too.

Nelson says there are pros and cons to AI: A chat bot can act as a study buddy, or something to bounce ideas off of when brainstorming. It can even act as a language translator; however, it should not do the thinking for students.

Nelson encourages students to learn how algorithms work and determine what they really want to accomplish with their higher education experience, and he encourages staff to have bold and upfront conversations with students about AI.

“How can we incentivize change the way that we are encouraging students engaging with them so that they’re making the proactive choice to learn and realize this is something that could be harmful or helpful to me and if they don’t know what guidance can they get?” Nelson said.

The university says there will not be a freshman orientation focused on AI. This will be a subject individual professors will approach at the start of each semester.

While the university is embracing the technology, faculty at Purdue also hope it will encourage students to really think about what they want to get out of their education and perhaps promote face to face interactions with professors.

Mansouri said this has prompted a lot of professors to change their coursework.

“It is no secret that in computer science you can get up through your junior year by just using Chat GPT to guide you through the code. So, professors are adapting so now you have to come in and have one-on-one conversations and explain your coding and how you got there. You’re going to see different projects I think. You’re going to see a lot more presentations – a lot more of the social side of it and a lot more, kind of, showing and talking your way through something rather than writing on paper how you got to a solution,” Mansouri said.

When it comes to the issue of cheating, the university says AI detectors don’t work well anymore.

The school is relying on professors getting to know their students and feeling when something seems off. Nelson says he has had to do this with students, and generally, they admit to the generative AI use in circumstances that are not allowed.

“Identifying that is a feeling as well as a discussion. But when students do admit to it it is a violation of academic integrity and so it does violate the university’s honesty policy and there are the same kind of consequences from directly copying from one previous static document to your own work and saying, ‘This is what I did.’” Nelson said.

The university is also encouraging students to take Purdue’s Honor Pledge.

According to its website, students developed the honor pledge to advance a supportive environment that promotes academic integrity and excellence. “It is intended that this pledge inspires Boilermakers of all generations to stay ‘on track’ to themselves and their University,” according to Purdue.

As for Chat GPT’s advice to college students?  

The chat bot’s bottom line is that it’s a tool: it can be abused, but it can also challenge you to think critically and help shape your education and future career.

“I think, fundamentally, I agree with its perspective on that,” Mansouri said. “It can give you an edge but it can also harm your education. So use it as the tool that it is and it can be a double-edged sword.”

Nelson said the best analogy for AI is when radio started. It was a paradigm shift in technology that was suddenly everywhere…

He says AI is also a paradigm shift for education and potentially careers. Some students tell News 8 they are re-thinking or worried about jobs after school because of AI’s impact.


Next in this series, WISH-TV explores how AI is changing what students choose to do in higher education and the trades, and its impact on careers.

That story will air Aug. 18 on Daybreak.



This week in ETIH EdTech News – AI literacy gap, Anthropic’s education push, robotics programs – EdTech Innovation Hub


