

Godfather of AI Geoffrey Hinton says he is scared that AI may develop its own language and …



Geoffrey Hinton, the ‘Godfather of AI’

Dr. Geoffrey Hinton, popularly known as the ‘Godfather of AI’, has voiced new concerns about the future of artificial intelligence. As reported by Business Insider, Hinton said he is scared that advanced AI systems may one day develop their own language, one that humans will not be able to understand or comprehend. “Now it gets more scary if they develop their own internal languages for talking to each other,” Hinton said. Hinton, who helped pioneer deep learning and neural networks, has been increasingly vocal about the risks of unpredictable behaviour in large language models and multi-agent systems.

Geoffrey Hinton’s primary fear about AI systems

As reported by Business Insider, Hinton said on the One Decision podcast that his primary fear is that AI systems will become increasingly advanced and interconnected. If that happens, they may start communicating with each other in a new language of their own creation. “I wouldn’t be surprised if they developed their own language for thinking, and we have no idea what they’re thinking,” Hinton said. He also noted that some experts believe AI will become smarter than humans, and that at some point it will become impossible for humans to understand what AI systems are planning or doing.

Geoffrey Hinton questions the idea of AI creating new jobs

Hinton has previously raised concerns about AI’s potential to create a post-truth world through misinformation and to upend labor markets. Hinton, who left Google in 2023 to speak openly about AI’s dangers, said the impact on jobs is already being felt. “I think the joblessness is a fairly urgent short-term threat to human happiness. If you make lots and lots of people unemployed — even if they get universal basic income — they are not going to be happy,” he told host Steven Bartlett. During the podcast, he also questioned the idea that new roles created by AI will balance out the jobs lost. “This is a very different kind of technology,” he said. “If it can do all mundane intellectual labour, then what new jobs is it going to create? You would have to be very skilled to have a job that it couldn’t just do.”







Destination AI



The AI revolution is here, and it’s accelerating fast. Don’t get left behind. In this video, TD SYNNEX reveals its Destination AI program—your strategic guide to navigating this rapidly expanding market. Learn how to transform your business and gain a competitive edge with our all-in-one program that offers everything from expert training and certification to ongoing sales support. Join us and harness the incredible power of AI to build a future-proof business.






AI, Workforce and the Shift Toward Human-Centered Innovation



Artificial intelligence has been in the headlines for years, often linked to disruption, automation and the future of work. While most of that attention has focused on the private sector, something equally significant has been unfolding within government. Quietly, and often cautiously, public agencies are exploring how AI can improve the way they serve communities, and the impact this will have on the workforce is only just beginning to take shape.

Government jobs have always been about service, often guided by mission over profit. But they’re also known for process-heavy routines, outdated software and siloed information systems. With AI tools now capable of analyzing data, drafting documents or answering repetitive inquiries, the question facing government leaders isn’t just whether to adopt AI, but how to do so in a way that enhances, rather than replaces, the human element of public service.

A common misconception is that AI in government will lead to massive job cuts. But in practice, the trend so far leans toward augmentation. That means helping people do their jobs more effectively, rather than automating them out of existence.


For example, in departments where staff are overwhelmed by paperwork — think benefits processing, licensing or permitting — AI can help flag missing information, route forms correctly or even draft routine correspondence. These tasks take up hours of staff time every week. Offloading them allows employees to focus on more complex or sensitive issues that require human judgment.

Social workers, for instance, aren’t being replaced by machines. But they might be supported by systems that identify high-risk cases or suggest resources based on prior outcomes. That kind of assistance doesn’t reduce the value of their work. It frees them up to do the work that matters most: listening, supporting and solving problems for real people.

That said, integrating AI into public workflows isn’t just about buying software or installing a tool. It touches something deeper: the culture of government work.

Public agencies tend to be cautious, operating under strict rules around fairness, accountability and transparency. Those values don’t always align neatly with how AI systems are built or how they behave. If an AI model makes a decision about who receives services or how resources are distributed, who’s accountable if it gets something wrong?

This isn’t just a technical issue; it’s about trust. Agencies need to take the time to understand the tools they’re using, ask hard questions about bias and equity, and include a diverse range of voices in the conversation.

One way to build that trust is through transparency. When AI is used to support decisions, citizens should know how it works, what data it relies on, and what guardrails are in place. Clear communication and visible oversight go a long way toward building public confidence in new technology.

Perhaps the most important piece of this puzzle is the workforce itself. If AI is going to become a fixture in government, then the people working in government need to be ready.

This doesn’t mean every employee needs to become a coder. But it does mean rethinking job roles, offering training in data literacy, and creating new career paths for roles like AI governance, digital ethics and human-centered design.

Government has a chance to lead by example here. By investing in employees, not sidelining them, public agencies can show that AI can be part of a more efficient and humane system, one that values experience and judgment while embracing new tools that improve results.

There’s no single road map for what AI in government should look like. Different agencies have different needs, and not every problem can or should be solved with technology. But the direction is clear: Change is coming.

What matters now is how that change is managed. If AI is used thoughtfully — with clear purpose, oversight and human input — it can help governments do more with less, while also making jobs more rewarding. If handled poorly, it risks alienating workers and undermining trust.

At its best, AI should serve the public interest. And that means putting people first: not just the people who receive services, but also the people who provide them.

John Matelski is the executive director of the Center for Digital Government, which is part of e.Republic, Government Technology’s parent company.








SpamGPT Is the AI Tool Fueling Massive Phishing Scams



How Does SpamGPT Work?

SpamGPT works like any email marketing platform. It offers tools such as an SMTP/IMAP email server, email testing, and real-time campaign performance monitoring. There’s also an AI marketing assistant named KaliGPT built directly into the dashboard.

However, it differs from legitimate email marketing platforms in that it is specifically designed to create spam and phishing emails that steal information and financial data from users.

More specifically, SpamGPT is “designed to compromise email servers, bypass spam filters, and orchestrate mass phishing campaigns with unprecedented ease,” according to a report from Varonis, a data security platform.



