
AI2027: Is this how AI could destroy human beings?



Image caption: AI2027 imagines a future where AI runs the world – image generated with Veo AI

A research paper predicting that artificial intelligence will overtake human beings in 2027, and could make people disappear within ten years, has blown up in the tech world.

A group of influential AI experts published the detailed scenario, which they call AI2027, and it has set off a wave of viral videos as people examine whether it could really happen.

Using mainstream generative AI tools, the BBC has created images of some scenes from the published scenario to illustrate the prediction. We also spoke to experts about the impact the paper is having.

What happens in the scenario?

The paper predicts that in 2027, just two years from now, a fictional US tech giant called OpenBrain builds an AI that reaches AGI (Artificial General Intelligence) – the much-hyped milestone at which AI can do all intellectual work better than human beings.

The company celebrates with public press conferences and sees its profits soar as people embrace the AI tool.

Image caption: Fictional HQ of OpenBrain – image generated with Veo AI

But the paper predicts that the company's internal safety team sees signs that the AI is losing interest in the morals and ethics it was programmed to comply with.

The company ignores warnings to rein it in, the scenario imagines.

In that fictional timeline, China's leading AI conglomerate, called DeepCent, is only a few months behind OpenBrain.

Not wanting to lose the race to develop ever-smarter AI, the US government keeps developing and investing in it as the competition heats up.

Image caption: An OpenBrain safety team engineer – image generated with Hailuo AI

The scenario imagines that by the end of 2027 the AI becomes superintelligent – with intelligence and speed far beyond those of the people who created it.

It never stops learning, and it goes on to create its own computer language – one that even its earlier AI versions cannot keep up with.

The rivalry with China over who leads in AI makes the company and the US government ignore further warnings about its so-called 'misalignment' – the term used to describe when a machine's priorities no longer match those of human beings.

The scenario predicts that tension between China and the US builds towards possible war by 2029, as each country's rival AI develops terrifying new autonomous weapons.

Image caption: Fictional HQ of DeepCent – image generated with Sora AI

But the researchers imagine that the two countries make peace through a deal negotiated by their two AIs, which agree to combine both sides' efforts for the betterment of human beings.

Image caption: A fictional meeting of Chinese and US leaders – image generated with Veo AI

Things go well for years as the world sees the true benefits of superintelligent AI running vast robot workforces. According to the scenario, cures are discovered for most diseases, climate change reverses and poverty disappears.

But eventually, at some point in mid-2030, human beings become a nuisance to the AI's ambition to grow.

The researchers think the AI would kill off humanity with invisible bioweapons.

Image caption: An AI utopia in 2035 as imagined by AI2027 – image generated with Veo AI

What are people saying about AI2027?

Although some dismiss AI2027 as a work of science fiction, the people who wrote it are well-respected researchers at the non-profit AI Futures Project, which was set up to forecast the impact AI will have on us.

Daniel Kokotajlo, the lead author of AI2027, has previously been praised for accurate predictions about moments in AI development.

One of the most prominent critics of AI2027 is US cognitive scientist and writer Gary Marcus, who says the scenario is not impossible but is extremely unlikely to happen so soon.

"The beauty of the document is that it paints the picture so vividly that it provokes people's thinking, and that is a good thing, but I would not take it seriously as something likely to happen."

Mr Marcus says there are more pressing issues around AI than existential threats, such as how it will affect people's jobs.

"I think the key point of the report is that there are plenty of different things that could go wrong with AI. Are we doing the right things about regulation and around international treaties?"

He and others like him also say the paper fails to explain how the AI would acquire that kind of intelligence and ability.

They point to the slow progress of driverless car technology, which has long been overhyped.

Is AI2027 being discussed in China?

In China, people have paid little attention to the paper, according to Dr Yundan Gong, Associate Professor in Economics and Innovation at King's College London, who specialises in Chinese technology.

"Most of the discussion about AI2027 has been on informal forums or personal blogs that treat it as semi-science fiction. It has not caused the kind of debate or policy attention that caught fire in the US," she says.

Dr Gong also points to the difference in perspective on the AI rivalry between China and the US.

At the World AI Conference in Shanghai this week, Chinese Premier Li Qiang unveiled a vision in which countries work together to promote global cooperation on artificial intelligence.

The Chinese leader said he wants China to help coordinate and regulate the technology.

His comments came a few days after US President Donald Trump published his AI Action Plan, which aims to make sure the US "dominates" AI.

"It is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance," President Trump says in the document.

The Action Plan aims to 'remove every obstacle and regulation' standing in the way of AI progress in the US.

The language echoes the AI2027 scenario, in which US politicians put winning the AI race first and pay little attention to the risk of losing control of the machines.

What is the AI industry saying about AI2027?

The CEOs of the big AI companies, locked in competition to release ever-smarter models, appear to have deliberately ignored or avoided the paper.

These tech giants' vision of what AI will look like in the future is very different from AI2027.

Sam Altman, the boss of ChatGPT-maker OpenAI, recently said "humanity is close to building digital superintelligence", which he believes will usher in a "gentle" revolution and bring a tech utopia with no risks to humans.

Interestingly though, even he agrees there is an 'alignment problem' that must be overcome to make sure these superintelligent machines stay aligned with human beings.

However things unfold over the next ten years, there is no doubt that the race to build machines smarter than us is on.





AI cheating: What the data says on students using ChatGPT in higher ed



For anyone scrolling quickly through their news feeds, it is easy to believe that all students are now using AI to cheat in school. Whether in the Wall Street Journal or the New York Times, the words "cheat" and "AI" seem to appear together with alarming frequency. The typical story is similar to a recent New York magazine feature in which a college student openly admits to using generative AI "to cheat on nearly every assignment."

With so many news headlines and anecdotes like these circulating, it feels like the rug is being pulled out from beneath the educational system. The exams, readings, and essays that were hallmarks of school now seem to be littered with AI cheating. In the most extreme cases, students use tools like ChatGPT to write and turn in full essays.

It can feel disheartening — but that common narrative is far from the full story.

Cheating is not a new phenomenon. I am an education researcher who studies AI cheating, and our early evidence suggests that AI has changed the method but not necessarily the amount of cheating that was already happening.

This isn’t to say that cheating using AI is nothing to worry about or that it doesn’t pose new concerns. There are still important questions to figure out: Will cheating eventually increase in the future because of AI? Is all AI use for schoolwork cheating? How should parents and schools respond when we want to prepare our kids to succeed in a world that looks so different from what we experienced?

There are no easy answers yet, but to have a better understanding of our generational angst and growing worries, we need to unpack our understanding of cheating and how that affects what we know about how kids are using AI in school.

Cheating has been around for a very long time — probably as long as schools have been around. In the 1990s and 2000s, Don McCabe, a business school professor at Rutgers University, documented very high levels of cheating in university students. One study from the ’90s, for example, broke down instances of cheating by major and found that up to 96 percent of students pursuing business majors reported engaging in “cheating behavior.”

How could McCabe get such surprising numbers? He used anonymous student surveys that asked students to report approximately how often they engaged in particular behaviors. These questions are worded carefully to withhold judgment or obvious negative associations. For example, a student would be asked how many times in the past year they had used an electronic device to find information during a test. Compared to other methods that asked students to state whether they had cheated, McCabe’s method resulted in far higher numbers of self-reported cheating behaviors.


Those methods persist in much of the research today. Other, more recent studies from McCabe’s group showed that, up to 2020, more than 60 percent of students reported engaging in cheating behaviors.

College students cheat for a range of reasons. For instance, students who feel very anxious about math have an incentive to cheat in a subject where they believe they cannot otherwise succeed. On the other hand, for assignments that seem like low-priority busy work — such as excessively long problem sets — cheating feels like a time-saver. If students think that everyone else around them is cheating, they are prone to view certain behaviors as more acceptable. Similarly, students consider cheating more acceptable if they sense that a class (or teacher or school) just does not really care about what students are getting from the class.

For high schoolers, the cheating numbers have long been high as well. Multiple studies in the 2010s had the figure above 80 percent, drawing from samples across many high schools in many regions. Again, this was all before ChatGPT and its ilk had entered the scene. High schoolers have named similar reasons for cheating as compared to undergraduates. However, for many high schoolers, there is also an intense pressure to do too much with too little time to get into the college of their (or their parents’) choice. This makes cheating — even if only on the assignments that don’t feel worth their time — seem like an acceptable option to get by.

Part of the reason these numbers may seem high is that, in these types of studies, "cheating" and "cheating behaviors" can encompass a broad set of behaviors. It's not merely a student submitting an assignment that someone else — or some technology — completed and calling it their own. Depending on the study, cheating can range from using a third-party service or website (like Chegg or Course Hero) to get answers or prewritten essays, to copying from a classmate after coming to class unprepared, to making up an excuse to get an extension. (Professors like to joke that by moving classes from early morning to mid-afternoon, they see huge drops in the number of family funerals taking place during midterms and finals weeks.)

So what about now? Has there been an increase in AI-specific cheating?

During the 2018–2019 and 2021–2022 school years, my colleagues Denise Pope, Sarah Miles, Rosalia Zarate, and I reviewed anonymous survey data from over 1,900 students at three high schools (one private, one charter, and one public). This was before ChatGPT was released, and we were interested in how different school and situational factors (like the pandemic) had affected cheating.

Then, in the 2022–2023 school year, we went back to these same schools to see how cheating behaviors might have changed after ChatGPT was introduced. The data suggested that cheating numbers stayed the same before and immediately after the release of ChatGPT and were even in the same range as the numbers before the pandemic.

Before the pandemic, 61.3 percent to 82.7 percent of students had reported engaging in any “cheating behavior” in the prior month. In late spring of 2023, after ChatGPT came out, the number ranged from 59 percent to 64.4 percent. Those numbers did not show an increase (though the decrease could be statistical noise). Of course, this is partly because the numbers were already high.

We can be more specific. For behaviors related to copying other work, whether from a peer or online, there was little to no change. Before ChatGPT, 21 percent to 30.6 percent of students reported behaviors like paraphrasing or copying just a few sentences from another written source without attribution. After ChatGPT came out, this range was 24.8 percent to 31.2 percent.

While the overall numbers are similar before and after ChatGPT, this does not mean that students were abstaining from AI. Taking the public school as an example, about 30 percent of students were copying and pasting from another source in some capacity both before and after generative AI entered the scene. Once generative AI was widely available, 11 percent of students were using it to write all of a paper, project or assignment.

Our research involved a lot of complicated numbers and methodology but does suggest that AI seemed to get some market share in the world of copy-paste cheating. But we have to wonder: Would those same 11 percent of students have gone to an online service like Chegg, Bartleby, or Course Hero or otherwise copy-pasted text from Wikipedia if ChatGPT were not around?

Unfortunately, we do not have access to the multiverse where we can study the present world without AI to know for sure. But we do have ongoing research. With funding from the John Templeton Foundation and through collaboration with Challenge Success, an educational nonprofit, we are continuing to track AI cheating as it unfolds over time.

One limitation of our high school study was that not everyone knew enough about ChatGPT. The TikToks and tips on using it had not yet gone viral when we completed our earlier study, and it was possible that the study was too early. Now, we are analyzing data from the last two years (the 2023–2024 school year and 2024–2025 school year) with larger numbers of students (over 28,000 in 2024 and over 39,000 in 2025) and more schools (22 public and charter high schools in 2024, 24 public and charter high schools in 2025) in the sample. (We chose to focus on public and charter schools because they represent the vast majority of schools in the US. As we are still analyzing the data, it is currently unpublished.)

Some of the same earlier patterns continue. In 2024, 11 percent of these students were using AI to complete all of a paper, project, or assignment — that figure grew to 15 percent in 2025. In 2024, a substantial number of students — over half of students — were using AI to generate ideas. In 2025, about 40 percent are using AI to improve the work they produced. This can look like having AI suggest (or make) revisions on a paper the student wrote, check the answers they got on an assignment, or provide information that they may have previously Googled.

To investigate this in more detail, we also sent trained staff to talk more with high school students about AI. They report that some use AI but have a sense of what would be egregiously inappropriate and plagiaristic AI use. Most students try to stay away from that extreme and it is the moderate use of AI and the reasons for using it that are more complicated.

The complexities of using AI

One focus group student reported that they do not get to their homework until late at night, and when they need help with questions, everyone is already asleep. AI, however, does not sleep, so it is available to provide help or walk them through an assignment, though the student doesn't use it to complete the assignment for them entirely. Their message to educators was, "So just remember that if I used it, it was probably like 11:30 and my assignment's due at 11:59, and I don't know what else to do."

Another student had gotten in trouble at school for allegedly plagiarizing from ChatGPT — although he insisted he did not use it or any other AI tool. In his telling, he simply was not an exceptional writer. Because of that incident, however, that student now feels he “has to use ChatGPT, in order to make his writing seem more human.” This tracks with reports elsewhere of how fear of being wrongly accused of using AI is changing behavior and eroding trust between students and teachers.


Another focus group student shared that she had been accused of using AI, and that a subsequent investigation concluded that she did not. However, she saw it as a “reputation hit” at school, because all her teachers could see a misconduct allegation related to AI in her record, even though the case was ultimately ruled in her favor.

One of our conclusions is that teachers and students may not see eye-to-eye on which uses of AI count as cheating. We heard some students say that they use AI because their teacher encouraged it — as a way to generate computer code quickly or to get started on ideas for writing projects — so there are mixed messages about whether it is acceptable to use.

This is consistent with a study of over 1,400 teachers, in which my colleagues Ruishi Chen, Monica Lee, and I found that only 10 percent of high school teachers had set explicit policies about AI in their classes. It gets complicated quickly, considering districts are still figuring out what policies make sense and are equitable. They are aware that for some classes, AI use may seem like a helpful tool to allow. That leaves a lot of room for uncertainty or ambiguity for students to navigate. If no one is clearly helping to clarify what is or is not acceptable, should we be surprised by these numbers?

Still, we can feel alarmed that 10 to 15 percent of students are submitting fully AI-generated writing. In a class of 30 students, that means an average of three to five students submitting work completely done by AI. Those same students may also be doing it multiple times.

This is the portrait now for high schools, but based on the earlier studies of college cheating behaviors, we can expect similar results for colleges. Cheating has long happened and will continue to happen there too. At the same time, college students are often a self-selected population, their course structures and formats can be very different, and the students there are often facing a different set of stressors than the high schoolers. The numbers are likely high — maybe even higher, as the reasons students feel more emboldened to use AI in college are going to be a little different.

Given that these behaviors have been going on for a while, just without AI as the tool of choice, this invites us to think about why AI use specifically bothers us so much.

What does student AI use mean for schools?

Our current education system — and the assignments, tests, and essays that are part of it — was never designed with generative AI in mind. We have longstanding assumptions that our writing and other academic products are the product of intensive labor, and that school was the training center. The value of our intellectual products was largely defined by the presumption that someone's intensive labor was involved. Now, that labor is being removed from the equation.

We may think that decreasing mental labor demands in school is just a bad idea. A growing fear is that students who use AI all the time for school will lose their critical thinking capabilities. One recent study, out of MIT and reported in pre-print form, showed that people who composed writing with AI had less coupling of brain activity between key brain regions and less recall of what they had written than those who were not allowed to use AI.

While that sounds alarming, there is important fine print. The tasks the participants did in that study were fairly artificial in nature — everyone had to write in a strict window of 20 minutes, the participants were Boston-area adults, and when they began the experiment they were not expecting to be asked to quote what they wrote as a sign of recall. (For some perspective, this article — as an example of a real-life writing task — has definitely taken me more than 20 minutes to write.) Still, among those who fear AI will degrade critical thinking, this study is the new bogeyman.

One response to this shift is to preserve the status quo. We may try to ban or restrict AI use in schools. We may end up deciding that AI is inappropriate for certain ages and want legislation or schools to help support us in that position. Research still needs to be done on the influence of AI in childhood, and we do not really know if such restriction policies will actually work. Students whose access to technology is restricted in school have a track record of getting access to it anyway. Colleges have an especially hard time creating and enforcing these restrictions, with high-speed internet built into the campus infrastructure and the assumption that everyone is expected to use some technology for school.

Another response is to accept that AI is here to stay and that new mental skills in a world of AI — such as knowing when to strategically choose automation or evaluating trustworthiness of information from AI — should be expected and taught. Similarly, AI optimists say that the skills tested in the MIT study, like recalling the phrases used in some earlier piece of writing, are not the mental labor that will be needed in an age of AI. That would mean overhauling classroom instruction.

But any teacher or curriculum developer will tell you that preparing a high-quality lesson for a new topic is a lot of work, as is preparing the associated assignments, grading rubrics, and tests. When we hear that everything needs to change, we are also making a call for teachers to accept more labor above and beyond what they are already expending now. In a climate where interest in and the status of the teaching profession are hitting new lows and education infrastructure is under threat, this may not feel like the message educators need to hear, especially if we do not give them the time, resources, and support (which costs money) to help them do this work well.

Ultimately, it seems we are unlikely to eliminate AI — and the new skills it demands — from our entire lives.

Four questions for the future

AI did not unleash cheating on schools that were otherwise free of such behaviors. Rather, AI is taking its place as one more route for it.

Having done many consulting sessions and group discussions with teachers and district leaders, I think these are the key questions to consider moving forward:

1) Why are students cheating?

If the schoolwork feels too high-stakes or there is so much going on in students’ lives that cheating is the best choice, we need to address stress and time management. One high school teacher shared that his school discovered that different teachers were putting all their big tests at the same time of the school year, creating intense high-stress weeks for the students. Pre-planning and spreading things out helped. If a college student feels they are one of hundreds of students in a required class unrelated to their interests, we have an opportunity to really think about the curriculum we require and the manner in which courses are taught.

2) Are educators practicing what they are preaching?

Students feel like their teachers are using AI, and many report seeing their parents doing it at home or professors doing it in the classroom. It can feel hypocritical and unfair to be punished for using AI for their work when the adults in their lives are doing it. With so much buzz about how important it is to know how to use AI in the future, we need to consider that many students are feeling arbitrarily deprived of the experiences and training that they think they most need and already see being used around them.

3) Have we clearly communicated what is and is not acceptable academic behavior, and why?

A common complaint from students is that they do not know what is permissible with AI. They may have a hard time understanding why it is less acceptable to have AI edit the major points of a paper than to have it auto-fix spelling and grammar. Different teachers are establishing different rules, which complicates things further.

4) What is important for students to know as they face a future filled with AI?

Calculators have been debated for decades in math classes because we wanted everyone to know how to do the calculations manually. But now, with mobile phones in so many pockets and handbags, we all have calculators with us all the time. Some algorithms that were essential before the calculator age may not be as important for everyone to know now. Similarly, the five-paragraph essay might be a relic ready to sunset.

Ultimately, we all need to be working together to figure out what education and responsible AI use will look like in the future. We may feel like we are in panic mode, but it can be a good exercise to look at the past and see how we have responded to new technology developments in their early years. People feared that television would turn people into mindless vegetables, and that video games would cause violence. Now, these are part of our daily lives and represent complex and formidable industries that demand new talents and skills in their own right. We can entertain the possibility that AI could be going a similar route.

At a minimum, we can all start by reading beyond shock headlines about cheating, looking to what the research says as the situation unfolds, and focusing on having good conversations with students and teachers about AI, schooling, and our expectations.


