
Tools & Platforms

AI is running rampant on college campuses as professors and students lean on artificial intelligence



AI use is continuing to cause trouble on college campuses, but this time it’s professors who are in the firing line. While it was once faculty at higher institutions who were up in arms about students’ use of AI, now some students are getting increasingly irked about their professors’ reliance on it.

On forums like Rate My Professors, students have complained about lecturers’ overreliance on AI.

Some students argue that instructors’ use of AI diminishes the value of their education, especially when they’re paying high tuition fees to learn from human experts.

The average cost of yearly tuition at a four-year institution in the U.S. is $17,709. If students study at an out-of-state public four-year institution, this average cost jumps to $28,445 per year, according to the research group Education Data.

However, others say it’s unfair that students can be penalized for AI use while professors fly largely under the radar.

One student at Northeastern University even filed a formal complaint and demanded a tuition refund after discovering her professor was secretly using AI tools to generate notes.

College professors told Fortune the use of AI for things like class preparation and grading has become “pervasive.”

However, they say the problem lies not in the use of AI itself but in faculty members’ tendency to conceal why and how they are using the technology.

Automated Grading

One of the most contentious uses of AI has been grading students’ work.

Rob Anthony, part of the global faculty at Hult International Business School, told Fortune that automating grading was becoming “more and more pervasive” among professors.

“Nobody really likes to grade. There’s a lot of it. It takes a long time. You’re not rewarded for it,” he said. “Students really care a lot about grades. Faculty don’t care very much.”

That disconnect, combined with relatively loose institutional oversight of grading, has led faculty members to seek out faster ways to process student assessments.

“Faculty, with or without AI, often just want to find a really fast way out of grades,” he said. “And there’s very little oversight…of how you grade.”

However, Anthony worries that if more and more professors simply let AI tools pass judgment on their students’ work, grading will become homogenized, with students increasingly receiving the same feedback.

“I’m seeing a lot of automated grading where every student is essentially getting the same feedback. It’s not tailored, it’s the same script,” he said.

One college teaching assistant and full-time student, who asked to remain anonymous, told Fortune they were using ChatGPT to help grade dozens of student papers.

The TA said the pressure of managing full-time studies, a job, and a mountain of student assignments forced them to look for a more efficient way to get through their workload.

“I had to grade something between 70 to 90 papers. And that was a lot as a full-time student and as a full-time worker,” they said. “What I would do is go to ChatGPT…give it the grading rubric and what I consider to be a good example of a paper.”

While they said they reviewed and edited the bot’s output, they added the process did feel morally murky.

“In the moment when I’m feeling overworked and underslept… I’m just going to use artificial intelligence grading so I don’t read through 90 papers,” they said. “But after the fact, I did feel a little bad about it… it still had this sort of icky feeling.”

They were particularly uneasy about how AI was making decisions that could impact a student’s academic future.

“I am using artificial intelligence to grade someone’s paper,” they said. “And we don’t really know… how it comes up with these ratings or what it is basing itself off of.”
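The workflow the TA describes can be reproduced programmatically rather than by pasting into the ChatGPT interface. Below is a minimal sketch, assuming the OpenAI Python SDK; the rubric, exemplar, model name, and prompt wording are illustrative placeholders, not what the TA actually used.

```python
# Minimal sketch of the rubric-based grading workflow the TA describes,
# done via the OpenAI Python SDK instead of the ChatGPT web interface.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# rubric, exemplar, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Thesis clarity (0-5); use of cited evidence (0-5); "
    "organization (0-5); grammar and style (0-5)."
)
EXEMPLAR = "..."    # a paper the grader considers a good example
SUBMISSION = "..."  # the student paper to be graded

prompt = (
    f"Rubric:\n{RUBRIC}\n\n"
    f"Example of a good paper:\n{EXEMPLAR}\n\n"
    "Grade the following submission against the rubric. Give a score per "
    f"criterion and two sentences of feedback.\n\n{SUBMISSION}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a teaching assistant grading undergraduate papers."},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```

A script like this also makes the TA’s unease concrete: it returns scores and feedback, but offers no visibility into how the model weighted the rubric to arrive at them.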

‘Bots Talking to Bots’

Some of the frustration cuts the other way, professors say: it stems from students’ own use of AI.

“The voice that’s going through your head is a faculty member that says: ‘If they’re using it to write it, I’m not going to waste my time reading.’ I’ve seen a lot of just bots talking to bots,” Anthony said.

A recent survey suggests that almost all students now use AI to help with assignments to some degree. According to a survey conducted earlier this year by the UK’s Higher Education Policy Institute, 92% of students use AI in some form, up from 66% in 2024.

Students were some of the earliest adopters of the technology after ChatGPT’s release in late 2022, quickly finding they could complete essays and assignments in seconds.

In response, many schools either banned AI outright or put restrictions on its use.

The widespread use of the tech created distrust between students and teachers as professors struggled to identify and punish AI-assisted work.

Now, many colleges are encouraging students to use the tech, albeit in an “appropriate way.” Some students still appear to be confused—or uninterested—about where that line is.

The TA, who primarily taught and graded intro classes, told Fortune “about 20 to 30% of the students were using AI blatantly in terms of writing papers.”

Some of the signs were obvious, like papers that had nothing to do with the assigned topic. Other submissions read more like unsourced opinion pieces than research.

Rather than penalizing students directly for using AI, the TA said they docked marks for failing to include evidence or citations.

They added that AI-written papers tended to fare well under automated grading: when they submitted an obviously AI-written student paper to ChatGPT for grading, the bot scored it “really, really well.”

Lack of Transparency

For Ron Martinez, the problem with professors’ use of AI is the lack of transparency.

The former UC Berkeley lecturer, now an assistant professor of English at the Federal University of Paraná (UFPR), told Fortune he’s upfront with his students about how, when, and why he’s using the tech.

“I think it’s really important for professors to have an honest conversation with students at the very beginning. For example, telling them I’m using AI to help me generate images for slides. But believe me, everything on here is my thoughts,” he said.

He suggests being upfront about AI use, explaining how it benefits students, such as allowing more time for grading or helping create fairer assessments.

In one recent example of helpful AI use, Martinez began using large language models like ChatGPT as a kind of “double marker” to cross-reference his grading decisions.

“I started to think, I wonder what the large language model would say about this work if I fed it the exact same criteria that I’m using,” he said. “And a few times, it flagged up students’ work that actually got… a higher mark than I had given.”

In some cases, AI feedback forced Martinez to reflect on how unconscious bias may have shaped his original assessment.

“For example, I noticed that one student who never talks about their ideas in class… I hadn’t given the student their due credit, simply because I was biased,” he said. Martinez added that the AI feedback led to him adjusting a number of grades, typically in the student’s favor.
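Martinez’s “double marker” idea amounts to a simple discrepancy check: grade independently, then flag large gaps between the human and LLM marks for a second human look. The sketch below illustrates the idea; the scores, scale, and threshold are invented for the example, and the LLM score would come from a rubric prompt like the one sketched earlier.

```python
# Minimal sketch of a "double marker" discrepancy check in the spirit
# of what Martinez describes: compare independent human and LLM marks
# and flag large gaps for a second human look. Scores, scale, and
# threshold are invented for illustration.
THRESHOLD = 10  # flag gaps of 10+ points on a 100-point scale

grades = [
    # (student, human_score, llm_score)
    ("student_a", 72, 74),
    ("student_b", 64, 81),  # large gap: candidate for unconscious bias?
    ("student_c", 88, 85),
]

for student, human, llm in grades:
    if abs(human - llm) >= THRESHOLD:
        direction = "higher" if llm > human else "lower"
        print(f"{student}: LLM marked {direction} ({llm} vs {human}); re-review")
```

The model never overrules the grader; a flagged gap simply prompts the kind of reflection that led Martinez to revisit, and sometimes raise, his original marks.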

While some despair that widespread AI use could upend the entire concept of higher education, some professors are already starting to see students’ use of the tech as a positive.

Anthony told Fortune he had gone from feeling “this whole class was a waste of time” in early 2023 to “on balance, this is helping more than hurting.”

“I was beginning to think this is just going to ruin education, we are just going to dumb down,” he said.

“Now it seems to be on balance, helping more than hurting… It’s certainly a time saver, but it’s also helping students express themselves and come up with more interesting ideas, they’re tailoring it, and applying it.”

“There’s still a temptation [to cheat]…but I think these students might realize that they really need the skills we’re teaching for later life,” he said.




Tools & Platforms

Harnessing AI And Technology To Deliver The FCA’s 2025 Strategic Priorities – New Technology



Lewis Silkin




Jessica Rusu, chief data, information and intelligence officer at the FCA, recently gave a speech on using AI and tech to deliver the FCA’s strategic priorities.

The FCA’s strategic priorities are:

  • Innovation will help firms attract new customers and serve their existing ones better.

  • Innovation will help fight financial crime, allowing the FCA and firms to stay one step ahead of the criminals who seek to disrupt markets.

  • Innovation will help the FCA be a smarter regulator, improving its processes and allowing it to become more efficient and effective. For example, it will stop asking firms for data that it does not need.

  • Innovation will help support growth.

Industry and innovators, entrepreneurs and explorers want a practical, pro-growth and proportionate regulatory environment. The FCA is starting a new Supercharged Sandbox in October, which is likely to cover topics such as financial inclusion, financial wellbeing, and financial crime and fraud.

The FCA has carried out joint surveys with the Bank of England which found that 75% of firms have already adopted some form of AI. However, most are using it internally rather than in ways that could benefit customers and markets. The FCA understands from its own experience of tech adoption that it is often internal processes that are easier to develop. It is testing large language models to analyse text and deliver efficiencies in its authorisations and supervisory processes. It wants to respond, make decisions and raise concerns faster, without compromising quality.

The FCA’s synthetic data expert group is about to publish its second report offering industry-led insight into navigating the use of synthetic data.

Firms have also expressed concerns to the FCA that potentially ambiguous governance frameworks are stopping them from innovating with AI. The FCA believes that its existing frameworks, such as the Senior Managers Regime and the Consumer Duty, give it oversight of AI in financial services and mean that it does not need new rules. In fact, it says that avoiding new regulation keeps it nimble and responsive as technology and markets change, since formal rulemaking processes aren’t fast enough to keep up with AI developments.

The speech follows a consultation by the FCA on AI live testing, which ended on 13 June 2025. The FCA plans to launch AI Live Testing, as part of the existing AI Lab, to support the safe and responsible deployment of AI by firms and achieve positive outcomes for UK consumers and markets.





Tools & Platforms

The Future of Emerging AI Solutions



AI has captivated industries with promises to redefine efficiency, innovation and decision-making. Some of the nation’s biggest companies, including Microsoft, Meta and Amazon, are projected to pour an astonishing $320 billion into AI in 2025. As remarkable as these developments are, the technology’s swift evolution has exposed some significant challenges. Though these issues aren’t insurmountable, navigating them requires careful consideration and a smart strategy. Take data depletion, for example, one of the more pressing concerns fueled by AI’s rapid rise.


AI systems are trained on enormous datasets, but they’re now consuming high-quality, human-generated data faster than it can be created. A shortage of diverse, reliable content could hinder the long-term sustainability of model training. Synthetic data offers one potential solution, but it comes with its own set of risks, including quality degradation and bias reinforcement. Another emerging path is agentic AI, which learns more like humans and adapts in real time without relying solely on static datasets.

Given all the options, high-tech companies’ eagerness to explore these emerging technologies is understandable, but it’s critical to avoid the bandwagon effect when considering new solutions. Before jumping headfirst into the AI race, organizations need to understand not just what’s possible, but what’s sustainable.

Develop a Clear AI Strategy to Pursue Right-Fit Solutions

It’s not just AI itself but the diverse potential of its applications that has enticed countless companies to jump on board; however, tales of instant success are rare. A baby-steps approach seems to be the rule rather than the exception: a recent Deloitte survey found that only 4% of enterprises pursuing AI are actively piloting or implementing agentic AI systems. Organizations that adopt various forms of AI for trendiness rather than with intention often find themselves stuck in the trial phase with little to show for their efforts. Scattered approaches lead to wasted resources, siloed projects and negligible ROI.

Businesses that align their initiatives with core objectives are better positioned to unlock AI’s potential. A successful strategy focuses on solving tangible problems, not indulging in alluring technology for appearance’s sake. Comprehensive plans should include solutions that automate routine tasks, such as document processing or repetitive workflows, and tools that enhance decision-making by leveraging advanced data models to predict outcomes.

AI strategies should also embrace technology as a way to strengthen the workforce by augmenting human intelligence rather than replacing it. For example, agentic AI can play a pivotal role in enhancing sales operations as agents can autonomously engage with prospects, answer questions and even close deals — all while collaborating with human colleagues. This human-AI partnership delivers greater efficiency and personalization. Unlike reactive bots, agentic models facilitate meaningful, refined outcomes while retaining emotional intelligence.

Strategies Should Combat Data Depletion and Protect Existing High-Quality Data 

AI’s ravenous appetite for data is raising alarms across industries. Researchers predict the supply of human-generated internet data suitable for training expansive AI models will be exhausted between 2026 and 2032, creating an innovation bottleneck with big potential implications.

AI strategies must recognize that the technology’s value lies in its ability to interpret complex scenarios and conditions. Without the right training data, AI’s outputs risk becoming narrow, biased or obsolete. High-quality, diverse datasets are essential to building reliable models that reflect real-world diversity and nuance.

Amid the looming data drought, synthetic data offers a glimmer of hope. Companies can generate artificial data that mirrors real-world situations, potentially offsetting proprietary content limitations and enabling task-specific datasets. While promising, synthetic data comes with its own drawbacks, such as quality decay, also known as model collapse: continuously training AI on AI-generated content degrades performance over time, much as repeatedly photocopying a photocopy erodes the original image.
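The photocopy analogy can be made concrete with a toy simulation. The sketch below is a deliberately simplified stand-in, not how production models are trained: it fits a Gaussian to data, samples from the fit, refits on those samples, and repeats. With small samples per generation, the fitted spread tends to drift toward zero, losing the tails of the original distribution.

```python
# Toy illustration of model collapse ("photocopying a photocopy"):
# each generation trains only on samples produced by the previous
# generation's fitted model. With small training sets, the fitted
# standard deviation tends to drift toward zero, so the tails of the
# original distribution are progressively lost. Purely illustrative;
# collapse in real generative models is messier.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # the original "human-generated" distribution
n = 20                 # small synthetic training set per generation

for generation in range(1, 61):
    samples = rng.normal(mu, sigma, n)         # synthetic training data
    mu, sigma = samples.mean(), samples.std()  # refit the "model" on it
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```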


Beyond exploring options to generate new data, high-tech businesses must also ensure their strategies prioritize the security of existing datasets. Poor data hygiene, errors and accidental deletions can derail AI operations and lead to costly setbacks. For example, Samsung Securities once issued $100 billion worth of phantom shares due to an input error. By the time the issue was caught, employees had already sold approximately $300 million in nonexistent stock, triggering a major financial and reputational fallout for Samsung.

Protecting data assets means building a sturdy governance framework that includes regular backups, fail-safe protocols and continuous data audits to create an operational safety net. Additionally, investing in advanced cybersecurity mitigates risks like data breaches or external attacks, safeguarding a company’s most valued digital assets.
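As one concrete illustration of the “continuous data audits” piece of such a framework, the sketch below records SHA-256 checksums for dataset files and flags anything that has changed or gone missing since the last run. The directory and manifest paths are hypothetical.

```python
# Minimal sketch of a recurring data audit: record SHA-256 checksums
# for every file under a dataset directory and compare against the
# manifest from the previous run, flagging changed or missing files.
# DATA_DIR and MANIFEST are hypothetical paths; DATA_DIR is assumed
# to exist. A real pipeline would pair this with backups and alerts.
import hashlib
import json
from pathlib import Path

DATA_DIR = Path("datasets")
MANIFEST = Path("audit_manifest.json")

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

current = {str(p): checksum(p) for p in DATA_DIR.rglob("*") if p.is_file()}
previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}

for name, digest in previous.items():
    if name not in current:
        print(f"MISSING: {name}")
    elif current[name] != digest:
        print(f"CHANGED: {name}")

MANIFEST.write_text(json.dumps(current, indent=2))  # baseline for next run
```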

Preparing for an AI-Driven Future 

The incoming wave of AI success belongs to organizations that blend innovation with intentionality. Businesses that resist hype and take a grounded approach to sustainable transformation stand the best chance of maximizing emerging technology’s potential.

The development of a true, proactive AI strategy hinges on the successful alignment of innovation with clear business objectives and measurable goals. Prioritizing high-quality, diverse datasets ensures accurate, unbiased AI decision-making, while exploring solutions like synthetic data can combat various risks, such as data depletion. AI is reshaping industries with unprecedented momentum. By acting deliberately and ethically, high-tech businesses can turn this technological watershed moment into a long-term competitive advantage.



