
Tools & Platforms

Parliament panel seeks tech, legal solutions to check AI-based fake news



The Standing Committee on Communications and Information Technology, headed by BJP MP Nishikant Dubey, in its draft report, suggested a balanced approach for deploying AI to curb fake news, noting that the technology is being used to detect misinformation but can be a source of misinformation as well.

Last Updated : 14 September 2025, 23:46 IST





AI will be the most transformative force in human history




‘My advice to young people is to study STEM and experiment with AI because you will always be better off understanding how new technologies work and how they can be used,’ Demis Hassabis, CEO of DeepMind Technologies, tells Kathimerini’s Executive Editor Alexis Papachelas. [Nikos Kokkalias]

From the age of 4, he had already demonstrated remarkable talent in chess. By 17, he had created his first video game. In 2024, at 48, Demis Hassabis was awarded the Nobel Prize in Chemistry, alongside his colleague John M. Jumper, for their groundbreaking AI research in protein structure prediction.

Born in London to a Greek-Cypriot father and a mother of Singaporean heritage, Hassabis is now regarded as one of the leading pioneers in artificial intelligence. He is the chief executive officer and co-founder of DeepMind, acquired by Google in 2014.

Last week, Hassabis visited Athens to meet with Prime Minister Kyriakos Mitsotakis and discuss AI, ethics and democracy within the framework of the Athens Innovation 2025 conference. On the occasion of his discussion at the Odeon of Herodes Atticus on Friday, he spoke to Kathimerini about the hopes and concerns surrounding “intelligent” technologies. For the first time, he also revealed how the trauma of displacement from Famagusta in 1974 has shaped both his family and himself.

You have an amazing personal story. Based on your experience, what advice would you give to a young person growing up today?

The only certainty is that there will be incredible change over the next 10 years. My advice to young people is to study STEM [science, technology, engineering and mathematics] and experiment with AI because you will always be better off understanding how new technologies work and how they can be used. But don’t neglect meta-skills, like learning how to learn, creativity, adaptability and resilience. They will all be critical for the next generation.

You often talk about the AI revolution being much bigger than the Industrial Revolution. You are one of the few people who can describe our future 10 years down the road. What will the fundamental changes be?

There will be profound change as AI advances. Universal assistants will perform mundane tasks, freeing us up for more creative pursuits. AI tools will help personalize education and curate information for us, allowing us to protect our attention and mindspace from the bombardment of the digital world. AI will also help us design new medicines and materials faster, giving us better batteries and new sources of clean energy. All of this could lead to an era of radical abundance by eliminating the scarcity of water, food, energy and other resources, allowing for maximum human flourishing. But this amazing future depends on society stewarding AI safely and responsibly. Just as with industrialization, the transition will come with challenges. The Industrial Revolution was a net good for society and propelled the world forward. I’m hopeful AI can deliver a similar leap for humanity.

Will an AI “creature” be able to hold a Socratic dialogue on abstract ideas with a real philosopher during our lifetime?

It’s plausible and perhaps even likely. Today’s AI systems are impressive, but they lack some key capabilities for a true Socratic dialogue. They don’t have a deep, conceptual understanding to explain their reasoning, and they can’t pose their own novel questions to explore ideas. Right now, we question and they answer. In the future, they will have to be able to do both in a way that doesn’t mimic us but pushes us to be creative and takes us down new avenues of thought.

No commercially available AI app has yet proven really profitable… Do you share the concern that AI expectations have created a stock market bubble similar to the dot-com one?

In the short term, there is a lot of hype around AI – probably too much – because even though today’s systems are extremely impressive, they also have lots of flaws. Many near-term promises are being made that aren’t really scientifically grounded. But in the medium to longer term, the monumental impact AI is going to have is still underappreciated. It will be the most transformative moment in human history, so I think the investments we’re making are well-justified.

Is China really ahead of the US in terms of AI development? Do you see a new cold war emerging with competing AI platforms? Can Europe become a serious player on this frontier?

The US and the West are ahead of China on AI development currently but China’s domestic AI ecosystem is strong and catching up fast, as shown by recent model releases. Europe (including the UK) can be a serious player. It has real strengths in AI through its history of scientific discovery, incredible academic institutions and strong startup environment. There’s an important role for Europe in working with close allies like the US to shape the responsible development and governance of AI globally. But this will require it to remain innovative, dynamic and at the technical forefront.

You often paint an almost utopian picture of the future, with AI providing solutions to almost every challenge. Does your prediction run the risk of being too optimistic since AI will also create huge disruptions because of massive unemployment and energy depletion?

I’m a cautious optimist. I think artificial general intelligence (AGI) will be one of the most beneficial technologies ever invented. But there are significant risks that have to be managed and there is a high degree of uncertainty. There are technical ways to anticipate and mitigate these risks, but as a society, we should be trying to better understand and prepare for them. We need economists, social scientists, philosophers and other experts to be thinking about the implications of a post-AGI world. A technology with the potential for such profound societal impact must be handled with exceptional care and foresight.

AI is dramatically changing our business, the media business. Any thoughts on how solid news reporting and analysis can survive in the AI era? And do we run the risk of generations of “lazy minds” who will just look for ready, fully digestible answers to everything on their smartphone? Will AI-provided information be controlled by a few info-bosses?

AI can be a powerful tool for journalists, helping them handle more mundane information-gathering tasks so they can spend more time on valuable reporting. Misinformation and deepfakes are real risks but technical solutions exist, like invisible watermarking, to help people distinguish between real and fake information. Universal digital assistants will help us be more productive at work and in our personal lives, freeing up time for creativity and deep thinking. By helping to synthesize and understand information, they could enable us to learn faster. They could also enrich our lives by making better, more personalized recommendations for books, music and other ways we like to spend our time. Ensuring fair and equal access to AI requires careful management and cooperation between governments, academia, civil society, philosophers and the public.

Your father’s family had to abandon their home in Cyprus in 1974. Was this a traumatic moment and an important part of your growing up?

It was a devastating moment for my grandparents because they lost literally everything. They were working in the UK at the time but were sending all their money back to Cyprus to try to build their family home in Famagusta and then eventually go back to live there. They lost everything and I don’t think they ever really fully recovered from it. Obviously, it loomed large as a big part of my upbringing, a sort of unspoken thing always in the background.







Best Practices for Responsible Innovation



Dr. Heather Bassett, Chief Medical Officer at Xsolis

Our patients can’t afford to wait on officials in Washington, DC, to offer guidance around responsible applications of AI in the healthcare industry. The healthcare community needs to stand up and continue to put guardrails in place so we can roll out AI responsibly in order to maximize its evolving potential. 

Responsible AI, for example, should include reducing bias in access to and authorization of care, protecting patient data, and making sure that outputs are continually monitored. 

With the heightened need for industry-specific regulations to come from the bottom up — as opposed to the top down — let’s take a closer look at the AI best practices currently dominating conversations among the key stakeholders in healthcare.

Responsible AI without squashing innovation

How can healthcare institutions and their tech industry partners continue innovating for the benefit of patients? That must be the question guiding the innovators moving AI forward. At the basic level of security and legal compliance, companies developing AI technologies for payers and providers must understand HIPAA requirements. De-identifying any data that can be linked back to patients is an essential component of any protocol whenever data-sharing is involved.
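In practice, de-identification starts with stripping direct identifiers from each record before it leaves the organization. The sketch below illustrates the idea in Python; the field names and the identifier list are illustrative assumptions, not a complete implementation of HIPAA's Safe Harbor standard, which enumerates 18 categories of identifiers.

```python
# Minimal sketch of stripping direct identifiers from a patient record
# before data-sharing. Field names are hypothetical; a real pipeline
# would cover all 18 HIPAA Safe Harbor identifier categories and also
# scan free-text fields.

# Illustrative subset of direct identifiers.
DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "date_of_birth",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "medical_record_number": "MRN-001",
    "diagnosis": "Type 2 diabetes",
    "lab_results": {"a1c": 7.2},
}

shared = deidentify(patient)
print(shared)  # only clinical fields remain
```

Dropping whole fields is the bluntest instrument; real protocols also generalize quasi-identifiers (for example, reducing dates of birth to years) and audit free text for leaked names.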

Beyond the many regulations that already apply to the healthcare industry, innovators must be sensitive to the consensus forming around the definition of “responsible AI use.” Too many rules around which technologies to pursue, and how, could potentially slow innovation. Too few rules can yield ethical nightmares. 

Stakeholders on both the tech industry and healthcare industry sides will offer different perspectives on how to balance risks and benefits. Each can contribute a valuable perspective on how to reduce bias within the populations they serve, being careful to listen to concerns from any populations not represented in high-level discussions.

The most pervasive pain point being targeted by AI innovators

Rampant clinician burnout has persisted as an issue within hospitals and health systems for years. In 2024, a national survey revealed that the physician burnout rate had dipped below 50 percent for the first time since the COVID-19 pandemic. The American Medical Association’s “Joy in Medicine” program, now in its sixth year, is one of many efforts to combat the causes of physician burnout — lack of work/life balance, the burden of bureaucratic tasks, etc. — by providing guidelines for health system leaders interested in implementing programs and policies that actively support well-being.

To that end, ambient-listening AI tools in the office are helping save time by transforming conversations between the provider and patient into clinical notes that can be added to electronic health records. Previously, manual note-taking would have to be done during the appointment, reducing the quality of face-to-face time between provider and patient, or after appointments during a physician’s “free time,” when the information gleaned from the patient was not front of mind.

Other AI tools can help combat the second-order effects of burnout. Even with the critical information needed to recommend a diagnostic test sitting in the patient’s electronic health record (EHR), a doctor still might not think to order it. AI tools can scan an EHR — prior visit information, lab results — analyzing potentially large volumes of information and making recommendations based on the available data. In this way the AI reader acts like a second pair of eyes, interpreting a lab result, or a year’s worth of lab results, for something the physician might have missed.

Automating administrative tasks outside the clinical setting can save burned-out healthcare workers (namely, revenue cycle managers) time and bandwidth as well.

Private-sector vs. public-sector transparency 

How can we trust whether an institution is disclosing how it uses AI when the federal government doesn’t require it to? This is where organizations like CHAI (the Coalition for Health AI) come in. Its membership is composed of a variety of healthcare industry stakeholders who are promoting transparency and open-source documentation of actual AI use-cases in healthcare settings.

Healthcare is not the only industry facing the question of how to foster public trust in how it uses AI. In general, the key question is whether there’s a human in the loop when an AI-influenced process affects a human. It ought to be easy for consumers to interrogate that to their own satisfaction. For its part, CHAI has developed an “applied model card” — like a fact sheet that acts as a nutrition label for an AI model. Making these facts more readily available can only further the goal of fostering both clinician and patient trust.
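The nutrition-label idea can be made concrete as structured data that any vendor could publish alongside a model. The sketch below is a loose illustration of that concept; the field names and the model itself are assumptions for illustration, not CHAI’s actual applied model card schema.

```python
# Illustrative "model card" for a hypothetical healthcare AI model,
# in the spirit of the nutrition-label analogy. Field names are
# assumptions, not CHAI's actual schema.

model_card = {
    "model_name": "readmission-risk-v2",  # hypothetical model
    "intended_use": "Flag patients at elevated 30-day readmission "
                    "risk for clinician review (human in the loop).",
    "out_of_scope": ["autonomous coverage denials", "diagnosis"],
    "training_data": "De-identified EHR records, 2018-2023",
    "known_limitations": ["underrepresents rural populations"],
    "monitoring": "Quarterly bias and drift audits",
}

def render_card(card: dict) -> str:
    """Render the card as a human-readable fact sheet."""
    return "\n".join(
        f"{key.replace('_', ' ').title()}: {value}"
        for key, value in card.items()
    )

print(render_card(model_card))
```

Publishing cards like this in a machine-readable format would let patients, clinicians, and auditors compare models the way shoppers compare labels.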

Individual states have their own AI regulations. Most exist to curb profiling — the use of the technology to sort people into categories to make it easier to sell them products or services, or to make hiring, insurance coverage and other business decisions about them. In December, California passed a law that prohibits insurance companies from using AI to deny healthcare coverage. It effectively requires a human in the loop (“a licensed physician or qualified health care provider with expertise in the specific clinical issues at hand”) when any denial decisions are made.

By making their AI use transparent — following evolving recommendations on how we define and communicate transparency, and promoting how data is protected to end users and patients alike — vendors, hospitals and health systems have nothing to lose and plenty to gain.


About Dr. Heather Bassett 

Dr. Heather Bassett is the Chief Medical Officer with Xsolis, the AI-driven health technology company with a human-centered approach. With more than 20 years’ experience in healthcare, Dr. Bassett provides oversight of Xsolis’ data science team, denials management team and its physician advisor program. She is board-certified in internal medicine.





SA to roll out ChatGPT-style AI app in all high schools as tech experts urge caution



Tech experts have welcomed the rollout of a ChatGPT-style app in South Australian classrooms but say the use of the learning tool should be managed to minimise potential drawbacks, and to ensure “we don’t dumb ourselves down”.

The app, called EdChat, has been developed by Microsoft in partnership with the state government, and will be made available across SA public high schools next term, Education Minister Blair Boyer said.

“It is like ChatGPT … but it is a version of that that we have designed with Microsoft, which has a whole heap of other safeguards built in,” Mr Boyer told ABC Radio Adelaide.

“Those safeguards are to prevent personal information of students and staff getting out, to prevent any nasties getting in.

“AI is well and truly going to be part of the future of work, and I think it’s on us as an education system, instead of burying our head in the sand and pretending it will go away, to try and tackle it.”

EdChat was initially launched in 2023 and was at the centre of a trial involving 10,000 students, while all principals, teachers and pre-school staff have had access to the tool since late 2024.

The government said the purpose of the broader rollout was to allow children to “safely and productively” use technology of a kind that was already widespread.

SA Education Minister Blair Boyer says the technology has built-in safeguards. (ABC News: Justin Hewitson)

Mr Boyer said student mental health had been a major consideration during the design phase.

“There’s a lot of prompts set up — if a student is to type something that might be around self-harm or something like that — to alert the moderators to let them know that that’s been done so we can provide help,” he said.

“One of the things that came out [of the trial] which I have to say is an area of concern is around some students asking you know if it [EdChat] would be their friend, and I think that’s something that we’ve got to look at really closely.

“It basically says: ‘Thank you for asking. While I’m here to assist you and support your work, my role is that of an AI assistant, not a friend. That said, I’m happy to provide you with advice and answer your questions and help with your tasks’.”

The government said the app was already being used by students for tasks such as explaining solutions to difficult maths problems, rephrasing instructions “when they are having trouble comprehending a task”, and quizzing them on exam subjects.

“The conversational aspect I think is sometimes underplayed with these tools,” RMIT computing expert Michael Cowling said.

“You can ask it for something, and then you can ask it to clarify a question or to refine it, and of course you can also then use it as a teacher to ask you questions or to give you a pop quiz or to check your understanding for things.”


Adelaide Botanic High School students were involved in the trial of EdChat, which is rolling out across all SA high schools next term. (ABC News: Brant Cumming)

Adelaide Botanic High School principal Sarah Chambers, whose school participated in the trial of the app, described it as “an education equaliser”.

“It does provide students with a tool that is accessible throughout the day and into the evening,” she said.

“It is like using any sort of search tool on the internet — it is limited by the skill of the people using it, and really we need to work on building that capacity for our young people [by] teaching them to ask good questions.”

Ms Chambers said year levels 7 to 12 had access to the app, and year 11 student Sidney said she used it on a daily basis.

“I can use it to manage my time well and create programs of study guides, and … for scheduling so I don’t procrastinate,” the student said.

“A lot of students were already using other AI platforms so having EdChat as a safe platform that we can use was beneficial for all our learning, inside school and outside school.”

EdChat is similar to an app that has been trialled in New South Wales.


University of NSW artificial intelligence expert Toby Walsh says AI has a place in modern learning, but urges caution. (Supplied)

University of NSW artificial intelligence expert Toby Walsh said while generative AI very much had a place in modern learning, educators had “to be very mindful” of the way in which it was used.

“We have to be very careful that we don’t dumb ourselves down by using this technology as a crutch,” Professor Walsh said.

“It’s really important that people do learn how to write an essay themselves and not rely upon ChatGPT to write the essay, because there are important skills we learn in writing the essay — command of a particular domain of knowledge, ability to construct arguments and think critically.

“We want to make sure that we don’t lose those skills or never even acquire those skills.”

Professor Cowling said while generative AI tools came with the risk of plagiarism, they could in fact strengthen critical skills, if used appropriately.

“We’ve been very focused on the academic integrity concerns but I do think we can also use these tools for things like brainstorming and starting ideas,” he said.

“As long as we anchor ourselves in the idea that we need to know how to prompt these tools properly and that we need to carefully evaluate the output of these tools, I’m not entirely convinced that critical thinking is going to be an issue.

“In fact I would argue that the opposite may be true, and that critical thinking is actually something that will develop more with these gen AI tools.”


