In the space of a decade, the public perception of artificial intelligence has shifted from a set of parameters governing the behavior of video game characters to a catch-all solution for almost every workplace problem. While home use has largely stalled at smart speakers, governments and institutions are embracing AI, and one of the areas where its impact on daily life is most visible is higher education.
It is in universities that AI has begun to fundamentally redefine both studies and research.
The ways in which knowledge is explored and disseminated have changed thanks to AI. Large Language Models (LLMs) and generative AI chatbots are now a primary tool in research, study, and assessment. But AI has played a far greater role in university research over the years, to such an extent that its presence in higher education is almost intrinsic.
The benefits on offer to both study and research are considerable, although challenges remain.
AI is supercharging research and productivity
Research has traditionally been slow. AI tools have changed this, making it possible to automate processes and streamline time-intensive tasks. Data collection, mining, and analysis have all been “supercharged,” with literature reviews also benefiting from AI productivity enhancements.
Previously tedious tasks are now handled almost wholly by AI. Platforms designed for academic study and research, such as Elicit, SciSpace, Jenni, and Inciteful, use Natural Language Processing (NLP) to simplify texts and identify related content.
Vast volumes of data can be processed far more quickly by AI than by humans. Problems once considered out of reach due to human cognitive limits or computational cost can now be solved relatively quickly. AI algorithms can also use rapid data processing and analysis to detect new patterns, interpreting datasets beyond human capacity.
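As a small illustration of what “detecting new patterns” can mean, here is a minimal, pure-Python sketch of k-means clustering on invented data. The dataset, cluster count, and initialization scheme are all assumptions for the example; real research platforms use far more sophisticated algorithms and optimized libraries.

```python
import random
import statistics

def dist2(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k, iterations=20):
    """Minimal k-means: group similar points into k clusters."""
    # Deterministic farthest-point initialization (an assumption for the demo).
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iterations):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            (statistics.fmean(p[0] for p in cl), statistics.fmean(p[1] for p in cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# Two synthetic populations that would be hard to spot in a raw table of numbers.
rng = random.Random(1)
data = [(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(100)]
data += [(rng.gauss(5, 0.5), rng.gauss(5, 0.5)) for _ in range(100)]
centers = kmeans(data, k=2)
print(sorted(round(x) for x, _ in centers))  # recovers centers near (0, 0) and (5, 5)
```

The point of the sketch is the shape of the workflow: an algorithm, not a human, partitions hundreds of records into groups in milliseconds, and the same loop scales to millions of records with appropriate tooling.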
AI has also redefined the research and innovation cycle, with hypothesis generation significantly accelerated. Research and innovation is now faster, enabling greater output.
The shotgun approach to hypotheses
One of the most striking ways in which AI has impacted university research is with the “shotgun approach” to evaluating and selecting hypotheses. Rather than working through a small but carefully selected collection of candidate hypotheses, AI can generate multiple hypotheses simultaneously, and test them.
This represents an opportunity to find gaps in knowledge, explore previously ignored or overlooked hypotheses, and develop new theories.
It’s an exciting change to how research is conducted: different theoretical approaches can be modeled, and the method is equally useful in economics, climate science, and cosmology. In any field of study that relies on numerous interdependent variables, the shotgun approach offers a way to test as many as possible, quickly validating theories – or refuting them.
While traditionally considered to be an inefficient approach to researching hypotheses, coupling the shotgun approach with AI has led to some significant developments in genomics and drug discovery. In particular, AI tools have been used to generate datasets that can then be analyzed to discover patterns, and help build new understanding of biological functions, developing new treatments (including some new drugs) as a result.
Other areas of study may also benefit from this method.
How data analysis and visualization have been revolutionized
Data analysis is one of the most important aspects of university research, and AI has played a crucial role in revolutionizing and expanding the scope of data research.
Using NLP tools, complex datasets and documents have been deciphered, and relationships identified. This has been achieved at a scale far surpassing any traditional methods, enabling an upscaling of data visualization, too. Here, AI-based software has shed new light on existing work, unlocking a “bird’s-eye view” of research. Complex and advanced data visualization has revealed intricate patterns and relationships, perhaps beyond the scope of human understanding. New insights and directions of research have also been uncovered.
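To make the “bird’s-eye view” idea concrete, here is a standard-library-only sketch of principal component analysis via power iteration: invented high-dimensional records are projected onto two axes so that hidden structure becomes visible on a flat plot. The dataset and the power-iteration implementation are assumptions for the example; production tools use numpy, scikit-learn, or dedicated visualization software.

```python
import random

def pca_2d(rows, iters=100, seed=0):
    """Project high-dimensional rows onto their top two principal
    axes using power iteration on the covariance matrix."""
    d = len(rows[0])
    means = [sum(r[j] for r in rows) / len(rows) for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]

    def cov_times(v):
        # Covariance matrix times v, without materializing the matrix.
        proj = [sum(x[j] * v[j] for j in range(d)) for x in X]
        return [sum(p * x[j] for p, x in zip(proj, X)) / len(X)
                for j in range(d)]

    def normalize(v):
        s = sum(c * c for c in v) ** 0.5
        return [c / s for c in v]

    rng = random.Random(seed)
    axes = []
    for _ in range(2):
        v = normalize([rng.random() for _ in range(d)])
        for _ in range(iters):
            w = cov_times(v)
            for a in axes:  # deflate: stay orthogonal to axes already found
                dot = sum(wi * ai for wi, ai in zip(w, a))
                w = [wi - dot * ai for wi, ai in zip(w, a)]
            v = normalize(w)
        axes.append(v)
    return [[sum(x[j] * a[j] for j in range(d)) for a in axes] for x in X]

# Invented 5-D records: two hidden groups separated along one dimension.
rng = random.Random(3)
group_a = [[rng.gauss(0, 0.5) for _ in range(5)] for _ in range(50)]
group_b = [[10 + rng.gauss(0, 0.5)] + [rng.gauss(0, 0.5) for _ in range(4)]
           for _ in range(50)]
coords = pca_2d(group_a + group_b)
# The first 2-D coordinate cleanly separates the two groups.
gap = abs(sum(c[0] for c in coords[:50]) / 50 - sum(c[0] for c in coords[50:]) / 50)
print(round(gap))
```

Once records live in two dimensions, a simple scatter plot exposes clusters and outliers that no one would spot in a five-column spreadsheet – which is exactly the upscaling of visualization described above.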
Materials science (where material libraries are sifted through) and drug discovery in particular have benefited from AI-assisted data analysis. Business research, too, has been revolutionized, with psychological analysis employed to replicate human behavior.
Writing assistance, but it’s not what you think
One of the most concerning aspects of the proliferation of AI tools is the misuse of generative AI. While it is rightly considered a vector for plagiarism, Natural Language Processing can also be used to provide feedback, improving the clarity of a paper and even reducing language barriers for non-native English speakers.
Literature reviews can be automated with AI, while research papers can be summarized, significantly reducing the effort required to read and digest them. Tools like Grammarly provide constructive feedback, while other AI tools prepare citations and bibliographies. Streamlining referencing helps avoid inadvertent plagiarism, maintaining the integrity of research papers.
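As a hedged illustration of how summarization can work under the hood, here is a classic frequency-based extractive summarizer in pure Python: sentences whose words appear most often across the text are kept, the rest dropped. Modern tools use LLMs rather than this technique, and the abstract, stop-word list, and scoring rule below are all invented for the example.

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "and", "in", "is", "are", "that", "it"}

def summarize(text, n_sentences=2):
    """Pick the sentences whose (non-trivial) words appear most often overall."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOP)

    def score(s):
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks if t not in STOP) / max(len(toks), 1)

    best = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Keep the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in best)

abstract = (
    "Large language models can summarize research papers. "
    "Summaries reduce reading time for busy researchers. "
    "The weather was pleasant during the conference. "
    "Researchers use summaries to screen papers quickly."
)
print(summarize(abstract))  # the off-topic "weather" sentence is dropped
```

Even this crude scorer captures the core trade-off of summarization: what gets kept depends entirely on how the tool measures importance, which is why AI-generated summaries still need a human sanity check.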
AI tools can even accelerate the peer review process, whether by finding potential reviewers or detecting citation issues. Integrating AI into this stage of research speeds up how quickly work can be completed and shared, benefiting entire communities across the various disciplines.
Who is the researcher in an AI world?
When you input a prompt into an AI chatbot, you’re initiating a query that returns results derived from LLMs or a set of sources that you have already defined. Where the sources are already selected, the results are based on that self-curated selection. But where a tool like ChatGPT has been used, the results are wider, and are sourced by the AI.
This raises questions as to who the researcher is. Is it the AI?
The heavy lifting of manual research is now being done by AI, enabling researchers to focus on more analytical tasks, strategy, and creative work. Here, new research can be explored, hypotheses innovated, and sophisticated experiments conceived. Of course, there is also an AI element to these tasks, but the researcher remains in control, defining tasks and rules. As with many other areas, the AI acts as an agent for research.
Monitoring processes and overseeing AI is vital to ensuring the end results meet the demands of the research. AI isn’t yet capable of independent decision-making, for example, and has not been trained on every scenario. Despite the ways in which AI is redefining university research, scientists, engineers, and other researchers need to maintain some healthy skepticism.
Like humans, AI has potential biases (perhaps as a result of training material) and is prone to inaccuracies. However, it can also act as a bridge between different and disparate academic disciplines, with tools developed for one field adapted for use in other areas of study and research.
A developing focus on ethics, authorship, and accuracy
Of similar concern are the questions of authorship, accuracy, and ethics.
One of the biggest concerns for academia is AI-generated material being submitted as original work. This has a number of potential consequences, although the most significant challenge is reliability. Inconsistency in grammatical accuracy and voice should be taken as the norm for AI output. There are other issues too, such as hallucinations, where incorrect output and conclusions can undermine students and researchers.
Over-dependence also presents a challenge, where students and researchers put too much weight on the results. This can lead to what an observer might call “laziness” – a reduced inclination towards critical thinking (“the AI can do it for me”), outsourcing creativity, and less effective problem-solving in humans. This over-reliance can also result in students becoming passive learners – they have the information, they can display learning through AI-produced output, but practical assessment is less convincing.
Unsurprisingly, AI systems in universities have data privacy and security implications, too, with access to student and research data being potentially misused. AI is redefining policy development as much as it is impact research.
AI is changing student perspectives
Over 57% of students surveyed have stated they use AI on a weekly basis, with 95.6% using it for academic purposes. The reasons for this include a belief that AI is helpful, and can save time on research, aiding with more efficient learning.
I have spoken off the record to both students and lecturers, and both have expressed frustration with the way in which AI technologies are being used. Can a medical student really learn from submitting an AI-generated assessment paper? Will a faculty head be satisfied with students apparently graduating with the knowledge they need to progress when it has been collated and written by an AI?
Is a research grant justified when the heavy lifting is being done not with a pile of publications, but with a single prompt in an AI chatbot?
AI has various perceived benefits for students. It is believed to be helpful for optimizing study time, and over 80% of students believe it contributes positively to exams, projects, and other aspects of academic performance. AI’s ability to help simplify complex concepts is clearly an advantage, and perhaps its most reliable strength.
Students have also reported issues with accuracy, with 48% having found false or inaccurate responses from AI. Meanwhile, studies into AI use by students have uncovered what is referred to as an “agency dilemma.” Research indicates that removing AI support from students improves their academic performance considerably.
Whatever the outcome of AI integration with study, it is certainly changing our reliance on machines.
Redefining responsible use and guidelines
The importance of academic research to the progress of everything from languages to the sciences means that new methods of learning, whether underpinned by science or not, require considerable thought about responsible use and ethical guidelines.
It starts with spreading awareness of these issues, and continues with encouraging academics to familiarize themselves with AI tools and data science skills. Can AI be a collaborative partner? Not in the same way other academics can, but its contribution can certainly seem that way.
Responsible use must be governed, and that is where guidelines are required. Not only has AI literacy become an important aspect of curricula, but policies and rules with clear and transparent ethical foundations are also being developed. AI’s use in university research is more advanced than its role in freshman study, so many of the challenges have already been overcome. But there is a need for ongoing dialogue among policymakers, scholars, and technologists to ensure that AI remains an advanced tool, rather than a replacement for original thought and decision-making.
One important way in which AI is redefining research at university level is in research into AI itself, and its long-term effects.
A continued reliance on humans
With all of this redefining, AI is causing a positive disruption. But university research still relies on humans, and their intuition. After all, these are places of learning primarily for humans, not machines.
Various challenges are impacting how universities integrate AI. While the impact of artificial intelligence is largely positive, downsides have emerged, and budgets are under pressure. Funding AI to keep pace with both innovation and competing universities is one problem that will take time to overcome. As with all AI governance, it will require a human decision that puts both the establishment and its students first.
But integration is only one area where human intuition is supreme. Day to day, AI results must be balanced with human judgment, insight, and creativity. It is essential for AI’s development that humans continue to oversee results and make decisions.
Rebalancing the quest for knowledge
If we consider the new academic landscape as a natural home for AI, much of what we’ve seen so far makes perfect sense. From innovation and efficiency benefits to learning support, artificial intelligence is as much at home in a university as a library, a lecture theater, or a common room. But unlike those stalwarts of academia, AI brings some striking challenges that must be addressed, relating not only to ethics and accuracy, but also to human agency.
Taking a balanced approach to successfully integrating AI solutions means centering on humans. University use of AI means prioritizing ethical considerations and underlining the importance of critical thinking. AI results must be verified and challenged, whether the errors stem from the technology’s weaknesses (such as hallucinations) or plain factual inaccuracies. Improperly implemented AI policies can undermine learning, and this should clearly be avoided.
The AI revolution is changing university research, and will continue to do so as the technology evolves and complements the quest for knowledge.