Tools & Platforms
Godfather of AI Geoffrey Hinton says he is scared that AI may develop its own language and …

Dr. Geoffrey Hinton, popularly known as the ‘Godfather of AI’, has voiced new concerns about the future of artificial intelligence. As reported by Business Insider, Hinton said he is scared that advanced AI systems may one day develop their own language, one that humans will not be able to understand or comprehend. “Now it gets more scary if they develop their own internal languages for talking to each other,” Hinton said. Hinton, who helped pioneer deep learning and neural networks, has been increasingly vocal about the risks of unpredictable behaviour in large language models and multi-agent systems.
Geoffrey Hinton’s primary fear related to AI systems
As reported by Business Insider, speaking on the One Decision podcast, Hinton said his primary fear is that AI systems may become increasingly advanced and interconnected. If that happens, they may start communicating with each other in a new language of their own creation. “I wouldn’t be surprised if they developed their own language for thinking, and we have no idea what they’re thinking,” Hinton said. He added that some experts believe AI will become smarter than humans, and that at some point in the future it will become impossible for humans to understand what AI systems are planning or doing.
Geoffrey Hinton questions the idea of AI creating new jobs
Hinton has previously raised concerns about AI’s potential to create a post-truth world through misinformation and to upend labor markets. Hinton, who left Google in 2023 to speak openly about AI’s dangers, said the impact on jobs is already being felt. “I think the joblessness is a fairly urgent short-term threat to human happiness. If you make lots and lots of people unemployed — even if they get universal basic income — they are not going to be happy,” he told host Steven Bartlett. During the podcast, he also questioned the idea that new roles created by AI will balance out the jobs lost. “This is a very different kind of technology,” he said. “If it can do all mundane intellectual labour, then what new jobs is it going to create? You would have to be very skilled to have a job that it couldn’t just do.”
Tools & Platforms
New AI Tool Predicts Treatments That Reverse Cell Disease

In a move that could reshape drug discovery, researchers at Harvard Medical School have designed an artificial intelligence model capable of identifying treatments that reverse disease states in cells.
Unlike traditional approaches that typically test one protein target or drug at a time in hopes of identifying an effective treatment, the new model, called PDGrapher and available for free, focuses on multiple drivers of disease and identifies the genes most likely to revert diseased cells back to healthy function.
The tool also identifies the best single or combined targets for treatments that correct the disease process. The work, described Sept. 9 in Nature Biomedical Engineering, was supported in part by federal funding.
By zeroing in on the targets most likely to reverse disease, the new approach could speed up drug discovery and design and unlock therapies for conditions that have long eluded traditional methods, the researchers noted.
“Traditional drug discovery resembles tasting hundreds of prepared dishes to find one that happens to taste perfect,” said study senior author Marinka Zitnik, associate professor of biomedical informatics in the Blavatnik Institute at HMS. “PDGrapher works like a master chef who understands what they want the dish to be and exactly how to combine ingredients to achieve the desired flavor.”
The traditional drug-discovery approach — which focuses on activating or inhibiting a single protein — has succeeded with treatments such as kinase inhibitors, drugs that block certain proteins used by cancer cells to grow and divide. However, Zitnik noted, this discovery paradigm can fall short when diseases are fueled by the interplay of multiple signaling pathways and genes. For example, many breakthrough drugs discovered in recent decades — think immune checkpoint inhibitors and CAR T-cell therapies — work by targeting broader disease processes in cells rather than a single protein.
The approach enabled by PDGrapher, Zitnik said, looks at the bigger picture to find compounds that can actually reverse signs of disease in cells, even if scientists don’t yet know exactly which molecules those compounds may be acting on.
How PDGrapher works: Mapping complex linkages and effects
PDGrapher is a type of artificial intelligence tool called a graph neural network. This tool doesn’t just look at individual data points but at the connections that exist between these data points and the effects they have on one another.
In the context of biology and drug discovery, this approach is used to map the relationship between various genes, proteins, and signaling pathways inside cells and predict the best combination of therapies that would correct the underlying dysfunction of a cell to restore healthy cell behavior. Instead of exhaustively testing compounds from large drug databases, the new model focuses on drug combinations that are most likely to reverse disease.
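The core idea of a graph neural network — updating each node’s state using its neighbors’ states — can be illustrated with a minimal message-passing step. This is a generic sketch in plain Python, not PDGrapher’s actual architecture; the gene names, edges, and values are invented for illustration:

```python
# Minimal message-passing sketch: each node (gene) updates its state by
# blending its own value with the average of its neighbours' values.
# This is the core mechanism of graph neural networks, shown here
# without any learned weights.

# Toy gene-interaction graph (illustrative names, not real pathway data)
edges = {
    "TP53": ["MDM2", "CDKN1A"],
    "MDM2": ["TP53"],
    "CDKN1A": ["TP53"],
}

# Initial node states, e.g. expression levels in a diseased cell
state = {"TP53": 0.2, "MDM2": 0.9, "CDKN1A": 0.1}

def message_pass(state, edges):
    """One round of neighbour averaging: each gene's new value is an
    equal blend of its own state and the mean of its neighbours' states."""
    new_state = {}
    for node, neighbours in edges.items():
        neighbour_mean = sum(state[n] for n in neighbours) / len(neighbours)
        new_state[node] = 0.5 * state[node] + 0.5 * neighbour_mean
    return new_state

updated = message_pass(state, edges)
```

In a trained model, the blending weights are learned from data and the update is repeated over several rounds, so information propagates along the pathway structure rather than treating each gene in isolation.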
PDGrapher first points to parts of the cell that might be driving disease. Next, it simulates what would happen if those cellular parts were turned off or dialed down. The model then predicts whether a diseased cell would return to a healthy state if certain targets were “hit.”
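The turn-off-and-score loop described above can be caricatured in a few lines. This is a toy sketch, not PDGrapher’s method: the scoring function, gene names, and expression values are all invented, and a real model would propagate the effect of a knockout through the network rather than simply zeroing one value:

```python
# Toy in-silico knockout screen: switch off each candidate gene in turn,
# then score how close the perturbed profile is to a healthy reference.
# All gene names and values are hypothetical, for illustration only.

healthy = {"A": 1.0, "B": 0.0, "C": 1.0}
diseased = {"A": 1.0, "B": 0.8, "C": 0.2}

def distance_to_healthy(profile):
    """Sum of absolute differences from the healthy reference profile."""
    return sum(abs(profile[g] - healthy[g]) for g in healthy)

def knockout(profile, gene):
    """Simulate turning a gene off by setting its level to zero."""
    out = dict(profile)
    out[gene] = 0.0
    return out

# Rank candidate targets by how much knocking them out moves the cell
# toward the healthy reference (lower distance is better).
scores = {g: distance_to_healthy(knockout(diseased, g)) for g in diseased}
best = min(scores, key=scores.get)
```

Here knocking out gene "B" yields the smallest distance to the healthy profile, so it would be ranked as the most promising target in this toy setting.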
“Instead of testing every possible recipe, PDGrapher asks: ‘Which mix of ingredients will turn this bland or overly salty dish into a perfectly balanced meal?’” Zitnik said.
Advantages of the new model
The researchers trained the tool on a dataset of diseased cells before and after treatment so that it could figure out which genes to target to shift cells from a diseased state to a healthy one.
Next, they tested it on 19 datasets spanning 11 types of cancer, using both genetic and drug-based experiments, asking the tool to predict various treatment options for cell samples it had not seen before and for cancer types it had not encountered.
The tool accurately predicted drug targets already known to work but that were deliberately excluded during training to ensure the model did not simply recall the right answers. It also identified additional candidates supported by emerging evidence. The model also highlighted KDR (VEGFR2) as a target for non-small cell lung cancer, aligning with clinical evidence. It also identified TOP2A — an enzyme already targeted by approved chemotherapies — as a treatment target in certain tumors, adding to evidence from recent preclinical studies that TOP2A inhibition may be used to curb the spread of metastases in non-small cell lung cancer.
The model showed superior accuracy and efficiency compared with similar tools. In previously unseen datasets, it ranked the correct therapeutic targets up to 35 percent higher than other models did and delivered results up to 25 times faster than comparable AI approaches.
What this AI advance spells for the future of medicine
The new approach could optimize the way new drugs are designed, the researchers said. Instead of predicting how every possible change would affect a cell and then searching for a useful drug, PDGrapher directly seeks the specific targets that can reverse a disease trait. This makes it faster to test ideas and lets researchers focus on fewer, more promising targets.
This tool could be especially useful for complex diseases fueled by multiple pathways, such as cancer, in which tumors can outsmart drugs that hit just one target. Because PDGrapher identifies multiple targets involved in a disease, it could help circumvent this problem.
Additionally, the researchers said that after careful testing to validate the model, it could one day be used to analyze a patient’s cellular profile and help design individualized treatment combinations.
Finally, because PDGrapher identifies cause-effect biological drivers of disease, it could help researchers understand why certain drug combinations work — offering new biological insights that could propel biomedical discovery even further.
The team is currently using this model to tackle brain diseases such as Parkinson’s and Alzheimer’s, looking at how cells behave in disease and spotting genes that could help restore them to health. The researchers are also collaborating with colleagues at the Center for XDP at Massachusetts General Hospital to identify new drug targets and map which genes or pairs of genes could be affected by treatments for X-linked Dystonia-Parkinsonism, a rare inherited neurodegenerative disorder.
“Our ultimate goal is to create a clear road map of possible ways to reverse disease at the cellular level,” Zitnik said.
Reference: Gonzalez G, Lin X, Herath I, Veselkov K, Bronstein M, Zitnik M. Combinatorial prediction of therapeutic perturbations using causally inspired neural networks. Nat Biomed Eng. 2025:1-18. doi: 10.1038/s41551-025-01481-x
This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.
Tools & Platforms
Driving the Way to Safer and Smarter Cars

A new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs.
Autonomous vehicles have only begun to appear on limited public roads, and it has become clear that achieving widespread adoption will take longer than early predictions suggested. With Level 3 systems in place, the road ahead leads to full autonomy and Level 5 self-driving. However, it’s going to be a long climb. Much of the technology that got the industry to Level 3 will not scale in all the needed dimensions—performance, memory usage, interconnect, chip area, and power consumption.
This paper looks at the challenges waiting down the road, including increasing AI operations while decreasing power consumption in realizable solutions. It introduces a new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs that can help solve them.
Read more here.
Tools & Platforms
Tech companies are stealing our books, music and films for AI. It’s brazen theft and must be stopped | Anna Funder and Julia Powles

Today’s large-scale AI systems are founded on what appears to be an extraordinarily brazen criminal enterprise: the wholesale, unauthorised appropriation of every available book, work of art and piece of performance that can be rendered digital.
In the scheme of global harms committed by the tech bros – the undermining of democracies, the decimation of privacy, the open gauntlet to scams and abuse – stealing one Australian author’s life’s work and ruining their livelihood is a peccadillo.
But stealing all Australian books, music, films, plays and art as AI fodder is a monumental crime against all Australians, as readers, listeners, thinkers, innovators, creators and citizens of a sovereign nation.
The tech companies are operating as imperialists, scouring foreign lands whose resources they can plunder. Brazenly. Without consent. Without attribution. Without redress. These resources are the products of our minds and humanity. They are our culture, the archives of our collective imagination.
If we don’t refuse and resist, not just our culture but our democracy will be irrevocably diminished. Australia will lose the wondrous, astonishing, illuminating outputs of human creative toil that delight us by exploring who we are and what we can be. We won’t know ourselves any more. The rule of law will be rendered dust. Colony indeed.
Tech companies have valorised the ethos “move fast and break things”, in this case, the law and all it binds. To “train” AI, they started by “scraping” the internet for publicly available text, a lot of which is rubbish. They quickly realised that to get high-quality writing, thinking and words they would have to steal our books. Books, as everyone knows, are property. They are written, often over years, licensed for production to publishers and the rental returns to authors are called royalties. No one will write them if they can be immediately stolen.
Copyright law rightfully has its critics, but its core protections have enabled the flourishing of book creation and the book business, and the wide (free but not “for free”) transmission of ideas. Australian law says you can quote a limited amount from a book, which must be attributed (otherwise it’s plagiarism). You cannot take a book, copy it entirely and become its distributor. That is illegal. If you did, the author and the publisher would take you to court.
Yet what is categorically disallowed for humans is being seriously discussed as acceptable for the handful of humans behind AI companies and their (not yet profit-making) machines.
To the extent they care, tech companies try to argue the efficiency or necessity of this theft rather than having to negotiate consent, attribution, appropriate treatment and a fee, as copyright and moral rights require. No kidding. If you are setting up a business, in farming or mining or manufacturing or AI, it will indeed be more efficient if you can just steal what you need – land, the buildings someone else constructed, the perfectly imperfect ideas honed and nourished through dedicated labour, the four corners of a book that ate a decade.
Under the banner of progress, innovation and, most recently, productivity, the tech industry’s defence distils to “we stole because we could, but also because we had to”. This is audacious and scandalous, but it is not surprising. What is surprising is the credulity and contortions of Australia’s political class in seriously considering retrospectively legitimising this flagrantly unlawful behaviour.
The Productivity Commission’s proposal for legalising this theft is called “text and data mining” or TDM. Socialised early in the AI debate by a small group of tech lobbyists, the open secret about TDM is that even its proponents considered it was an absolute long shot and would not be taken seriously by Australian policymakers.
Devised as a mechanism primarily to support research over large volumes of information, TDM is entirely ill-suited to the context of unlawful appropriation of copyright works for commercial AI development. Especially when it puts at risk the 5.9% of Australia’s workforce in creative industries and, speaking of productivity, the $160bn national contribution they generate. The net effect if adopted would be that the tech companies can continue to take our property without consent or payment, but additionally without the threat of legal action for breaking the law.
Let’s look at just who the Productivity Commission would like to give this huge free-kick to.
Big Tech’s first fortunes were made by stealing our personal information, click by click. Now our emails can be read, our conversations eavesdropped on, our whereabouts and spending patterns tracked, our attention frayed, our dopamine manipulated, our fears magnified, our children harmed, our hopes and dreams plundered and monetised.
The values of the tech titans are not only undemocratic, they are inhumane. Mark Zuckerberg’s empathy atrophied as his algorithm expanded. He has said, “A squirrel dying in front of your house may be more relevant to you right now than people dying in Africa.” He now openly advocates “a culture that celebrates aggression” and for even more “masculine energy” in the workplace. Eric Schmidt, former head of Google, has said, “We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”
The craven, toadying, data-thieving, unaccountable broligarchs we saw lined up on inauguration day in the US have laid claim to our personal information, which they use for profit, for power and for control. They have amply demonstrated that they do not have the flourishing of humans and their democracies at heart.
And now, to make their second tranche of fortunes under the guise of AI, this sector has stolen our work.
Our government should not legalise this outrageous theft. It would be the end of creative writing, journalism, long-form nonfiction and essays, music, screen and theatre writing in Australia. Why would you work if your work can be stolen, degraded, stripped of your association, and made instantly and universally available for free? It will be the end of Australian publishing, a $2bn industry. And it will be the end of us knowing ourselves by knowing our own stories.
Copyright is in the sights of the technology firms because it squarely protects Australian creators and our national engine of cultural production, innovation and enterprise. We should not create tech-specific regulation to give it away to this industry – local or overseas – for free, and for no discernible benefit to the nation.
The rub for the government is that much of the mistreatment of Australian creators involves acts outside Australia. But this is all the more reason to reinforce copyright protection at home. We aren’t satisfied with “what happens overseas stays overseas” in any other context – whether we’re talking about cars or pharmaceuticals or modern slavery. Nor should we be when it comes to copyright.
Over the last quarter-century, tech firms have honed the art of win-win legal exceptionalism. Text and data mining is a win if it becomes law, but it’s a win even if it doesn’t – because the debate itself has very effectively diverted attention, lowered expectations, exhausted creators, drained already meagerly resourced representatives and, above all, delayed copyright enforcement in a case of flagrant abuse.
So what should the government do? It should strategise, not surrender. It should insist that any AI product made available to Australian consumers demonstrate compliance with our copyright and moral rights regime. It should require the deletion of stolen work from AI offerings. And it should demand the negotiation of proper – not token or partial – consent and payment to creators. This is a battle for the mind and soul of our nation – let’s imagine and create a future worth having.