AI Research

AI slashes African ethics review times from months to weeks


 Image: Linda Nordling for Research Professional News

Sarima 2025: Online platform RHInnO reports dramatic efficiency gains after artificial intelligence integration

Artificial intelligence is slashing the time it takes African committees to review clinical trial protocols, the Southern African Research and Innovation Management Association conference in Stellenbosch heard on 4 September.

Francis Kombe (pictured), chief executive of EthiXPERT, a South Africa-registered non-profit company that supports research ethics capacity on the continent, said it had enhanced the RHInnO Ethics online review platform, which it co-developed, with AI earlier this year to support administrators and reviewers.

Early results suggest a marked improvement in turnaround, he said. Reviews that ordinarily took six months or more are now completed in one to four weeks with AI support.

Reducing pressure

More than 30 committees in 10 countries are using RHInnO, which was developed with European partners. It replaces paper and email systems, easing document management and reducing burdens on administrators.

Kombe explained that AI technology can produce concise summaries of lengthy protocols, identify potential risks and benefits, and even suggest possible decisions for consideration. While the final call remains with human reviewers, AI offers structured support throughout the process, he noted.

“The AI also gives power to the administrator once assigned this protocol…it allows the reviewer to accept the suggestions from the AI, to edit, to reject them altogether,” he said.

Positive response

Committees piloting the platform have responded positively, Kombe said. “We have seen a very big increase in terms of speed improvement…people were very fascinated by how the AI was able to make very reasonable suggestions.”

But financial barriers remain, he said. About a dozen African research ethics committees are intermittent users of the platform because they cannot always afford the annual subscription cost.

“Some committees cannot continue once grant funding ends,” Kombe noted. Even so, he argued, the AI-enabled platform could be “pivotal” for accelerating health research while maintaining ethical standards.




As they face conflicting messages about AI, some advice for educators on how to use it responsibly


When it comes to the rapid integration of artificial intelligence into K-12 classrooms, educators are being pulled in two very different directions.

One prevailing media narrative stokes such profound fears about the emerging strengths of artificial intelligence that it could lead one to believe it will soon be “game over” for everything we know about good teaching. At the same time, a sweeping executive order from the White House and tech-forward education policymakers paint AI as “game on” for designing the educational system of the future.

I work closely with educators across the country, and as I’ve discussed AI with many of them this spring and summer, I’ve sensed a classic “approach-avoidance” dilemma — an emotional stalemate in which they’re encouraged to run toward AI’s exciting new capabilities while also made very aware of its risks.

Even as educators are optimistic about AI’s potential, they are cautious and sometimes resistant to it. These conflicting urges to approach and avoid can be paralyzing.

Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.

What should responsible educators do? As a learning scientist who has been involved in AI since the 1980s and who conducts nationally funded research on issues related to reading, math and science, I have some ideas.

First, it is essential to keep teaching students core subject matter — and to do that well. Research tells us that students cannot learn critical thinking or deep reasoning in the abstract. They have to reason and critique on the basis of deep understanding of meaningful, important content. Don’t be fooled, for example, by the notion that because AI can do math, we shouldn’t teach math anymore.

We teach students mathematics, reading, science, literature and all the core subjects not only so that they will be well equipped to get a job, but because these are among the greatest, most general and most enduring human accomplishments.

You should use AI when it deepens learning of the instructional core, but you should also ignore AI when it’s a distraction from that core.

Second, don’t limit your view of AI to a focus on either teacher productivity or student answer-getting.

Instead, focus on your school’s “portrait of a graduate” — highlighting skills like collaboration, communication and self-awareness as key attributes that we want to cultivate in students.

Much of what we know in the learning sciences can be brought to life when educators focus on those attributes, and AI holds tremendous potential to enrich those essential skills. Imagine using AI not to deliver ready-made answers, but to help students ask better, more meaningful questions — ones that are both intellectually rigorous and personally relevant.

AI can also support student teams by deepening their collaborative efforts — encouraging the active, social dimensions of learning. And rather than replacing human insight, AI can offer targeted feedback that fuels deeper problem-solving and reflection.

When used thoughtfully, AI becomes a catalyst — not a crutch — for developing the kinds of skills that matter most in today’s world.

In short, keep your focus on great teaching and learning. Ask yourself: How can AI help my students think more deeply, work together more effectively and stay more engaged in their learning?

Related: PROOF POINTS: Teens are looking to AI for information and answers, two surveys show

Third, seek out AI tools and applications that are not just incremental improvements, but let you create teaching and learning opportunities that were impossible to deliver before. And at the same time, look for education technologies that are committed to managing risks around student privacy, inappropriate or wrong content and data security.

Such opportunities for a “responsible breakthrough” will be a bit harder to find in the chaotic marketplace of AI in education, but they are there and worth pursuing. Here’s a hint: They don’t look like popular chatbots, and they may arise not from the largest commercial vendors but from research projects and small startups.

For instance, some educators are exploring screen-free AI tools designed to support early readers in real-time as they work through physical books of their choice. One such tool uses a hand-held pointer with a camera, a tiny computer and an audio speaker — not to provide answers, but to guide students as they sound out words, build comprehension and engage more deeply with the text.

I am reminded: Strong content remains central to learning, and AI, when thoughtfully applied, can enhance — not replace — the interactions between young readers and meaningful texts without introducing new safety concerns.

Thus, thoughtful educators should continue to prioritize core proficiencies like reading, math, science and writing — and use AI only when it helps to develop the skills and abilities prioritized in their desired portrait of a graduate. By adopting ed-tech tools that focus on novel learning experiences and are committed to student safety, educators will lead us to a responsible future for AI in education.

Jeremy Roschelle is the executive director of Digital Promise, a global nonprofit working to expand opportunity for every learner.

Contact the opinion editor at opinion@hechingerreport.org.

This story about AI in the classroom was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.





Now Artificial Intelligence (AI) for smarter prison surveillance in West Bengal – The CSR Journal


OpenAI business to burn $115 billion through 2029 – The Information


OpenAI CEO Sam Altman walks on the day of a meeting of the White House Task Force on Artificial Intelligence (AI) Education in the East Room at the White House in Washington, D.C., U.S., September 4, 2025.

Brian Snyder | Reuters

OpenAI has sharply raised its projected cash burn through 2029 to $115 billion as it ramps up spending to power the artificial intelligence behind its popular ChatGPT chatbot, The Information reported on Friday.

The new forecast is $80 billion higher than the company previously expected, the news outlet said, without citing a source for the report.

OpenAI, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, some $1.5 billion higher than its projection from earlier this year, the report said.

The company did not immediately respond to a Reuters request for comment.

To control its soaring costs, OpenAI will seek to develop its own data center server chips and facilities to power its technology, The Information said.

OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom, the Financial Times reported on Thursday, saying OpenAI plans to use the chip internally rather than make it available to customers.

The company deepened its tie-up with Oracle in July with a planned 4.5 gigawatts of data center capacity, building on its Stargate initiative, a project of up to $500 billion and 10 gigawatts that includes Japanese technology investor SoftBank. OpenAI has also added Alphabet’s Google Cloud among its suppliers for computing capacity.

The company’s cash burn will more than double to over $17 billion next year, $10 billion higher than OpenAI’s earlier projection, with a burn of $35 billion in 2027 and $45 billion in 2028, The Information said.
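Tallying the annual figures in the report against the cumulative total is straightforward; a minimal sketch (note that the 2025 and 2026 figures are reported as lower bounds, and the 2029 figure below is implied by subtraction rather than stated in the report):

```python
# Annual cash-burn projections as reported by The Information,
# in billions of US dollars. 2025 and 2026 are lower bounds
# ("more than $8 billion", "over $17 billion").
reported_burn = {2025: 8, 2026: 17, 2027: 35, 2028: 45}

total_through_2028 = sum(reported_burn.values())

# The report's cumulative figure through 2029.
total_through_2029 = 115

# Implied minimum burn for 2029, by subtraction.
implied_2029 = total_through_2029 - total_through_2028

print(total_through_2028, implied_2029)  # 105 10
```

On these reported figures, roughly $105 billion of the burn falls in 2025–2028, leaving about $10 billion implied for 2029.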

Read the complete report by The Information here.



