
Cloudflare Debuts Bot Blocker to Help ‘Internet Survive Age of AI’

Web infrastructure firm Cloudflare has introduced a tool to block AI crawlers from accessing web content without permission.

The new offering, announced Tuesday (July 1), lets website owners decide if they want artificial intelligence (AI) crawlers to access their content, and determine how AI firms can use it. It also lets site owners set a price for access via a “pay per crawl” model.
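The article does not describe how pay-per-crawl is enforced at the request level. As a rough illustration only, one plausible mechanism is to recognize known AI crawlers by their User-Agent header and answer them with HTTP 402 Payment Required while serving ordinary visitors normally. The bot names, the price header, and the function below are all hypothetical, not Cloudflare's actual API:

```python
# Hypothetical sketch: gate incoming requests by User-Agent.
# The crawler list and header name are illustrative assumptions.
AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot"}

def gate_request(user_agent: str, price_usd: float = 0.01):
    """Return an (HTTP status, extra headers) pair for a request."""
    if any(bot in user_agent for bot in AI_CRAWLERS):
        # Ask the crawler to pay before the content is served.
        return 402, {"X-Crawl-Price-USD": f"{price_usd:.2f}"}
    # Browsers and search-engine bots pass through untouched.
    return 200, {}

status, headers = gate_request("Mozilla/5.0 (compatible; GPTBot/1.0)")
print(status)  # 402
```

The key design point, echoed later in the article, is selectivity: only AI crawlers are challenged, so human traffic and search indexing continue unaffected.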

“For decades, the Internet has operated on a simple exchange: search engines index content and direct users back to original websites, generating traffic and ad revenue for websites of all sizes,” the company said in a news release. “This cycle rewards creators that produce quality content with money and a following, while helping users discover new and relevant information.”

But that model, Cloudflare contended, is broken, with AI crawlers collecting things like words and images to generate answers without sending visitors to the initial source, robbing creators of revenue and the satisfaction of knowing someone is viewing their work.

“If the Internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone — creators, consumers, tomorrow’s AI founders, and the future of the web itself,” said Matthew Prince, co-founder and CEO of Cloudflare.

“Original content is what makes the Internet one of the greatest inventions in the last century, and it’s essential that creators continue making it,” Prince added. “AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate.”

Writing about this issue last year, PYMNTS noted the significant financial implications of content scraping, as companies invest heavily in researching, writing and publishing website content. Experts argued that allowing bots to scrape this material freely undermines that work while producing derivative content that can outrank the original on search engines.

“Beyond content theft, scraping can have detrimental effects on website performance,” that report said. “Unchecked bot activity may overload servers, slow down websites and skew analytics data, potentially increasing operational costs. These consequences underscore the urgency of many content providers implementing robust protective measures.”

All the same, that report said, experts have been divided on the effectiveness of new anti-scraping tools, with some cautioning that their track record is still unproven and others more optimistic about their potential.

At the time, Cloudflare had just introduced another tool to fight AI data harvesting, which Pankaj Kumar, CEO of Naxisweb, discussed in an interview with PYMNTS.

“Its purposeful blockage focuses exclusively on AI bots so that people can still visit the site or search engine robots can continue to crawl it. Search engine optimization (SEO) performance is not compromised, while unauthorized scraping is prevented by selective blocking,” Kumar said.




Physicians Lose Cancer Detection Skills After Using Artificial Intelligence

Artificial intelligence shows great promise in helping physicians improve their diagnostic accuracy for important patient conditions. In the realm of gastroenterology, AI has been shown to help human physicians better detect small polyps (adenomas) during colonoscopy. Although adenomas are not yet cancerous, they are at risk of turning into cancer. Thus, early detection and removal of adenomas during routine colonoscopy can reduce a patient's risk of developing future colon cancers.

But as physicians become more accustomed to AI assistance, what happens when they no longer have access to AI support? A recent European study has shown that physicians’ skills in detecting adenomas can deteriorate significantly after they become reliant on AI.

The European researchers tracked the results of over 1,400 colonoscopies performed in four different medical centers. They measured the adenoma detection rate (ADR) for physicians working normally without AI versus those who used AI to help them detect adenomas during the procedure. They also tracked the ADR of physicians who had used AI regularly for three months and then resumed performing colonoscopies without AI assistance.

The researchers found that the ADR before AI assistance was 28% and with AI assistance was 28.4%. (This was a slight increase, but not statistically significant.) However, when physicians accustomed to AI assistance ceased using AI, their ADR fell significantly to 22.4%. Assuming the patients in the various study groups were medically similar, this suggests that physicians accustomed to AI support detect roughly a fifth fewer adenomas once computer assistance is withdrawn.
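The size of that decline can be checked with a quick back-of-the-envelope calculation, using the detection rates quoted from the study:

```python
# Adenoma detection rates (ADR) reported in the study
adr_baseline = 0.28    # before any AI assistance
adr_with_ai  = 0.284   # with AI assistance
adr_after_ai = 0.224   # after AI support was withdrawn

# Relative decline versus the pre-AI baseline
relative_drop = (adr_baseline - adr_after_ai) / adr_baseline
print(f"Relative drop in ADR: {relative_drop:.1%}")  # 20.0%
```

A drop from 28% to 22.4% is a 5.6-percentage-point decline, or a 20% relative reduction in detections compared with the pre-AI baseline.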

This is the first published example of so-called medical “deskilling” caused by routine use of AI. The study authors summarized their findings as follows: “We assume that continuous exposure to decision support systems such as AI might lead to the natural human tendency to over-rely on their recommendations, leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.”

Consider the following non-medical analogy: Suppose self-driving car technology advanced to the point that cars could safely decide when to accelerate, brake, turn, change lanes, and avoid sudden unexpected obstacles. If you relied on self-driving technology for several months, then suddenly had to drive without AI assistance, would you lose some of your driving skills?

Although this particular study took place in the field of gastroenterology, I would not be surprised if we eventually learn of similar AI-related deskilling in other branches of medicine, such as radiology. At present, radiologists do not routinely use AI while reading mammograms to detect early breast cancers. But when AI becomes approved for routine use, I can imagine that human radiologists could succumb to a similar performance loss if they were suddenly required to work without AI support.

I anticipate more studies will be performed to investigate the issue of deskilling across multiple medical specialties. Physicians, policymakers, and the general public will want to ask the following questions:

1) As AI becomes more routinely adopted, how are we tracking patient outcomes (and physician error rates) before AI, after routine AI use, and whenever AI is discontinued?

2) How long does the deskilling effect last? What methods can help physicians minimize deskilling, and/or recover lost skills most quickly?

3) Can AI be implemented in medical practice in a way that augments physician capabilities without deskilling?

Deskilling is not always bad. My 6th grade schoolteacher kept telling us that we needed to learn long division because we wouldn’t always have a calculator with us. But because of the ubiquity of smartphones and spreadsheets, I haven’t done long division with pencil and paper in decades!

I do not see AI completely replacing human physicians, at least not for several years. Thus, it will be incumbent on the technology and medical communities to discover and develop best practices that optimize patient outcomes without endangering patients through deskilling. This will be one of the many interesting and important challenges facing physicians in the era of AI.





AI in the classrooms: How Bangladeshi schools are adapting to a new digital era

The recent explosion of artificial intelligence (AI) has swept through numerous industries, turning a futuristic concept into an everyday reality. Its impact on schooling, however, has been especially striking.

From helping students complete assignments to reshaping the way teachers think about homework and exams, AI is beginning to redefine education all over the world. 

Bangladesh is no different.

Artificial intelligence isn’t just coming to Bangladeshi classrooms; it’s already here. While its promise of convenience and quick solutions is quite alluring to students, the ever-growing presence of AI in schools has raised a difficult question: Is learning actually taking place anymore, or is it being replaced by answers generated not from thought but from machines?

In schools across Bangladesh, AI tools like ChatGPT have quietly revolutionised how students complete their homework, how teachers prepare lessons, and how institutions rethink education altogether.

Is it a blessing or a bane? 

Students have quickly adapted to using advanced AI chatbots like ChatGPT, making AI an unavoidable and integral part of academic life. From essays to homework, students are increasingly finding ways to rely on AI not just to work faster, but to sidestep studying altogether.

Many schools and educators have now been forced to accept that resisting AI is no longer an option. Schools must adapt to the new reality or risk becoming redundant.

Yafa Rahman, Vice Principal and Senior Business Studies Teacher of Adroit International School, told The Business Standard, “Talks about integrating AI in the school curriculum is a global concern, and my school has had meetings with Pearson Education on how to do that in the best possible manner as well as train teachers to use AI in a beneficial way while being able to spot unethical AI use. This is an ongoing discussion, and we will see many changes soon.”

Yafa explained that her school also employs AI tools to structure assignments and class content. Rather than banning AI altogether, she believes in channelling students’ fascination with technology into meaningful learning. “Students rely on technology so much that if we incorporate any technology into the learning process, students instantly become more interested,” she said.

Rethinking the curriculum

The convenience of AI comes with a heavy cost. Teachers are reporting a surge in AI-generated assignments. Entire essays, reports, and even personal reflections are being turned in with no human touch. And it’s getting harder to spot the difference.

Educators have responded by rethinking the very structure of education in the country. Oral assessments, in-class essays, and presentations have become increasingly common, as schools seek to test students’ independent thinking rather than their ability to reproduce AI-generated answers.

“For assignments meant to show knowledge and understanding, I’ve returned to using pencil and paper to prevent AI use. For reflective assignments, I encourage students to use AI but remind them to think critically. You do not always have to agree with what AI generated, and key facts and figures must be checked with reliable sources,” said Olivier Gautheron, a Science Teacher at International School Dhaka (ISD) who has earned the “AI Essentials for Educators” certification from Edtech Teachers in the US.  

This hybrid approach reflects a wider consensus among educators that AI should not be ignored but incorporated responsibly, encouraging students to refine their critical faculties alongside their digital literacy.

It’s no longer just about stopping AI from being used. It’s about guiding how it’s used.

AI detection

Detecting AI-generated work isn’t straightforward. In universities, plagiarism software and AI detectors are standard. But in schools, teachers often rely on their personal knowledge of each student’s writing style and capability, using their instincts to identify when a student’s writing does not look like their own.

But Gautheron warns against over-reliance on intuition, preferring restraint over wrongful accusations.

“I believe it all comes down to knowing your students and their abilities,” he said. “There’s a high chance of mistakenly identifying student work as AI-generated when it’s not.”

He recalled an incident when he suspected a student of using AI, only to learn that the child had simply used software to improve grammar without altering the ideas. “This is perfectly acceptable, as the purpose of the assignment was for students to generate their own ideas,” he added.

He believes the solution lies not in advanced software but in dialogue. “Although software exists to detect AI, there are other softwares to make them undetectable. I believe that the best way to detect inappropriate use of AI is asking your students directly. If I feel that a student’s work quality is very different from previous tasks, simply asking them to clarify a few ideas of their work is enough.”

For resource-constrained schools, this approach is also pragmatic, since not every institution can afford detection software. 

AI for teachers

Just like students, teachers are also increasingly turning to AI for lesson planning and content creation.

Emran Taher, Cambridge examiner and senior English instructor at Mastermind School, sees AI as a game-changer.

“It is not just the students who use AI. Teachers and schools are using it too. I can keep my syllabus up-to-date and incorporate more relevant topics and examples instead of just relying on textbooks. This helps grab students’ interest while reducing issues like bunking classes.”

He also uses AI for personalised instruction. By feeding student data—age, class level, strengths, and weaknesses—into AI tools, he receives tailored recommendations that help him address individual needs. “There are no bad students, only bad teachers,” he said. 

Striking the right balance

AI’s presence in schools reveals a tension: the same tool that can personalise learning and spark creativity can also be used to bypass real thinking. This balancing act between embracing innovation and preserving the essence of education appears to be the defining challenge of AI use.

However, there is no turning back. AI is already embedded in how schools operate. What matters now is how educators choose to respond. As Bangladeshi schools navigate this shift, teacher training, investment in digital infrastructure, and the development of ethical guidelines will all be crucial. 

Some see AI as a threat to academic honesty. Others see it as a catalyst for overdue change in the old, rigid education system. But everyone realises that the role of teachers must evolve to address the new digital landscape. 




