Is there a place for AI in research assessment?

AI is reshaping research, from drafting proposals and academic CVs to automating parts of peer review and assessment. With efforts to reform research assessment already in motion, Elizabeth Gadd and Nick Jennings explore how AI is both exacerbating the need for reform and offering potential for delivering reformed assessment mechanisms. They suggest that AI-augmented assessment models, in which technology supports but never replaces human judgement, might offer a way forward.

As the world rushes to apply AI to its work practices, its use is becoming apparent both in the production of research “products” for assessment (outputs, proposals, CVs) and in the actual assessment of those products and their producers. This comes at a time when the research sector is seeking to reform the way it assesses research, both to mitigate some of the problematic outcomes of publication-dominant forms of assessment (such as the rise of paper mills, authorship sales, and citation cartels, and the lack of incentives to engage with open research practices) and to prioritise peer review over purely quantitative forms of assessment.
Where assessment reform and AI tools meet
There are two main issues at the intersection of assessment reform and AI. The first is the extent to which our current assessment regime is driving the misuse of generative AI to produce highly prized outputs that look scholarly but aren’t. The second is the extent to which AI might legitimately be used in research assessment going forward.
On the first issue, we are on well-trodden ground. The narrow, publication-dominant methods used to evaluate research and researchers drive many poor behaviours. One is the pursuit of questionable research practices, such as publication and citation bias. Worse still is research misconduct: fabrication, falsification and plagiarism. The system rewards publication in and of itself above the content and quality of the research, to the point that it now rewards mere approximations of publications. It should therefore come as no surprise that bad actors are financially motivated to use any means at their disposal to produce publications, including AI.
In this case, our main problem is not AI, but rather publication-dominant research assessment. We can address this problem by broadening the range of contributions we value and taking a more qualitative approach to assessment. By doing this, we will at least disincentivise polluting the so-called “scholarly record” (curated, peer-reviewed content) with fakes and frauds.
AI in research outputs versus assessment
Assuming a reformed assessment regime succeeds in disincentivising the use of AI to generate valueless publications, the question remains whether AI might legitimately be incentivised for other aspects of assessment. Broadening the range of contributions we value and moving to more qualitative (read “narrative”) forms of assessment will create more work, not less, for both assessors and the assessed. And if there is one thing we know GenAI is good at, it’s generating narratives at speed. GenAI might even help to level the playing field for those for whom the assessment language is not their first language, making papers clearer and easier to read. Most guidelines state that if the right precautions are followed (the human retains editorial control, is transparent about their use of AI, and doesn’t enter sensitive information into a Large Language Model) it is perfectly legitimate to submit the resulting content for assessment.
Where the guidelines are more cautious is around the use of AI to do the assessing. The European Research Area guidelines on the responsible use of AI in research are clear that we should “refrain from using GenAI tools in peer reviews and evaluations”. But that’s not to say researchers aren’t experimenting. Mike Thelwall’s team has shown weak success in using ChatGPT to replicate human peer review scores, and many researchers believe they’ve been on the receiving end of a new, over-thorough, less aggressive Reviewer Two that is probably an AI.
But given that human peer review is already a highly contested exercise (how often does Reviewer One agree with Reviewer Two?), we must ask: if ChatGPT can’t replicate human peer review scores, does that say more about the AI or about the humans? We have to question whether the human scores are the correct ones, and whether we do machine learning a disservice by expecting it simply to replicate human judgements, only faster. One might argue that the real power of AI lies in seeing what we can’t: finding patterns and identifying potential that we would otherwise miss.
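As a concrete illustration of how slippery “agreement” is here, one can compute a chance-corrected agreement statistic such as Cohen’s kappa between any two raters, human or machine. The sketch below is ours, not Thelwall’s method, and the scores are invented purely for illustration:

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters scoring the same items."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Agreement expected by chance, given each rater's own base rates.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
        return (observed - expected) / (1 - expected)

    # Hypothetical 1-5 quality scores for ten submissions (illustrative only).
    human_one = [4, 3, 5, 2, 4, 3, 3, 5, 2, 4]
    human_two = [3, 3, 4, 2, 5, 3, 2, 5, 3, 4]
    ai_scores = [4, 3, 4, 3, 4, 3, 3, 4, 3, 4]

    print(f"human vs human kappa: {cohens_kappa(human_one, human_two):.2f}")  # ~0.32
    print(f"human vs AI kappa:    {cohens_kappa(human_one, ai_scores):.2f}")  # ~0.43

If two human reviewers agree only modestly on these terms, then “replicating the human score” is an ill-defined target for any model, which is precisely the question raised above.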
The dual value of peer review
Perhaps we must first ask, is the scholarly process itself purely about generating and (through research assessment) verifying new discoveries? Or is there something valuable in the act of discovery and verification: the acquisition and deployment of skills, knowledge, and understanding, which is fundamental to being human?
We have to ask whether the process of collaborating with other humans in the pursuit of new knowledge is just about that new knowledge, or whether the business of building connections and interfacing with others is itself essential to human wellbeing, to civil society, and to geopolitical security.
The recognition of fellow humans, through peer review and assessment, is more than just a verification of our results and our contributions; it is critical to our welfare and motivation: an acknowledgement that, human to human, I see you and I value you. Would any researcher be happy knowing their contribution had been assessed by automation alone?
It comes down to whether we value only the outcome or the process. And if we continuously outsource that process to technology, and generate outcomes that might provide answers, but that we don’t actually understand or trust, we risk losing all human connection to the research process. The skills, knowledge, and understanding we accumulate through performing assessments are surely critical to research and researcher development.
Proceeding with the right amount of caution
There is no justification for condemning AI outright. It is being used (and its accuracy then verified by humans) to solve many of society’s previously unsolved problems. However, when it comes to matters of judgement, where humans may not agree on the “right answer” – or even that there is a right answer – we need to be far more cautious about the role of AI. Research assessment is in this category.
There are many parallels between the role of metrics and the role of AI in research assessment. There is significant agreement that metrics shouldn’t be making our assessments for us without human oversight. And assessment reformers are clear that referring to appropriate indicators can often lead to a better assessment, but human judgement should take priority. This logic offers us a blueprint for approaching AI: human judgement first, and technology in support; or AI-augmented human assessment.
In forbidding the use of AI in assessment altogether, the ERA guidelines took an understandably cautious initial position. However, properly contained, the judicious involvement of AI in assessment can be our friend, not our enemy. Much depends on the type of research assessment in question and the role we allow AI to play. Using AI to provide a first draft of written submissions, or to summarise, identify inconsistencies in, or offer a view on the content of those submissions, could lead to fairer, more robust qualitative evaluations. However, we should not rely on AI to do the imaginative work of assessment reform and rethink what “quality” looks like, nor should we outsource human decision-making to AI altogether. As we look to reform research assessment, we should simply be open to the possibilities new technologies offer in support of human judgements.
OpenAI business to burn $115 billion through 2029 – The Information

OpenAI has sharply raised its projected cash burn through 2029 to $115 billion as it ramps up spending to power the artificial intelligence behind its popular ChatGPT chatbot, The Information reported on Friday.
The new forecast is $80 billion higher than the company previously expected, the news outlet said, without citing a source for the report.
OpenAI, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, some $1.5 billion higher than its projection from earlier this year, the report said.
The company did not immediately respond to Reuters’ request for comment.
To control its soaring costs, OpenAI will seek to develop its own data center server chips and facilities to power its technology, The Information said.
OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom, the Financial Times reported on Thursday, saying OpenAI plans to use the chip internally rather than make it available to customers.
The company deepened its tie-up with Oracle in July with a planned 4.5 gigawatts of data center capacity, building on its Stargate initiative, a project of up to $500 billion and 10 gigawatts that includes Japanese technology investor SoftBank. OpenAI has also added Alphabet’s Google Cloud among its suppliers of computing capacity.
The company’s cash burn will more than double to over $17 billion next year, $10 billion higher than OpenAI’s earlier projection, with a burn of $35 billion in 2027 and $45 billion in 2028, The Information said.
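As a back-of-the-envelope check (ours, not The Information’s), the disclosed yearly figures account for most, but not all, of the $115 billion total; treating the gap as the implied 2029 burn is an assumption, since the report quoted here gives no 2029 breakdown:

    # Summing the burn figures quoted above ($ billions); illustrative only.
    reported_burn = {
        2025: 8,    # "more than $8 billion this year"
        2026: 17,   # "more than double to over $17 billion next year"
        2027: 35,
        2028: 45,
    }
    cumulative_through_2029 = 115  # reported total burn through 2029

    disclosed = sum(reported_burn.values())
    print(f"Disclosed 2025-2028 burn: ${disclosed}B")                            # $105B
    print(f"Implied 2029 remainder:   ${cumulative_through_2029 - disclosed}B")  # $10B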
Who is Shawn Shen? The Cambridge alumnus and ex-Meta scientist offering $2M to poach AI researchers

Shawn Shen, co-founder and Chief Executive Officer of the artificial intelligence (AI) startup Memories.ai, has made headlines for offering compensation packages worth up to $2 million to attract researchers from top technology companies. In a recent interview with Business Insider, Shen explained that many scientists are leaving Meta, the parent company of Facebook, due to constant reorganisations and shifting priorities.

“Meta is constantly doing reorganizations. Your manager and your goals can change every few months. For some researchers, it can be really frustrating and feel like a waste of time,” Shen told Business Insider, adding that this is a key reason why researchers are seeking roles at startups. He also cited Meta Chief Executive Officer Mark Zuckerberg’s philosophy that “the biggest risk is not taking any risks” as a motivation for his own move into entrepreneurship.

With Memories.ai, a company developing AI capable of understanding and remembering visual data, Shen is aiming to build a niche team of elite researchers. His company has already recruited Chi-Hao Wu, a former Meta research scientist, as Chief AI Officer, and is in talks with other researchers from Meta’s Superintelligence Lab as well as Google DeepMind.
From full scholarships to Cambridge classrooms
Shen’s academic journey is rooted in engineering, supported consistently by merit-based scholarships. He studied at Dulwich College from 2013 to 2016 on a full scholarship, completing his A-Level qualifications.

He then pursued higher education at the University of Cambridge, where he was awarded full scholarships throughout. Shen earned a Bachelor of Arts (BA) in Engineering (2016–2019), followed by a Master of Engineering (MEng) at Trinity College (2019–2020). He later continued at Cambridge as a Meta PhD Fellow, completing his Doctor of Philosophy (PhD) in Engineering between 2020 and 2023.
Early career: Internships in finance and research
Alongside his academic pursuits, Shen gained early experience through internships and analyst roles in finance. He worked as a Quantitative Research Summer Analyst at Killik & Co in London (2017) and as an Investment Banking Summer Analyst at Morgan Stanley in Shanghai (2018).

Shen also interned as a Research Scientist at the Computational and Biological Learning Lab at the University of Cambridge (2019), building the foundations for his transition into advanced AI research.
From Meta’s Reality Labs to academia
After completing his PhD, Shen joined Meta (Reality Labs Research) in Redmond, Washington, as a Research Scientist (2022–2024). His time at Meta exposed him to cutting-edge work in generative AI, but also to the frustrations of frequent corporate restructuring. This experience eventually drove him toward building his own company.

In April 2024, Shen began his academic career as an Assistant Professor at the University of Bristol, before launching Memories.ai in October 2024.
Betting on talent with $2M offers
Explaining his company’s aggressive hiring packages, Shen told Business Insider: “It’s because of the talent war that was started by Mark Zuckerberg. I used to work at Meta, and I speak with my former colleagues often about this. When I heard about their compensation packages, I was shocked — it’s really in the tens of millions range. But it shows that in this age, AI researchers who make the best models and stand at the frontier of technology are really worth this amount of money.”

Shen noted that Memories.ai is looking to recruit three to five researchers in the next six months, followed by up to ten more within a year. The company is prioritising individuals willing to take a mix of equity and cash, with Shen emphasising that these recruits would be treated as founding members rather than employees.

By betting heavily on talent, Shen believes Memories.ai will be in a strong position to secure additional funding and establish itself in the competitive AI landscape. His bold $2 million offers may raise eyebrows, but they also underline a larger truth: in today’s technology race, the fiercest competition is not for customers or capital, it’s for talent.