Q&A: Guelph author warns AI could soon outthink humanity


It’s been nearly three years since ChatGPT was first released to the world. Since then, artificial intelligence has rapidly become a regular part of many people’s daily lives and has continued to improve at breakneck speed.

Guelph-based ethicist Christopher Di Carlo is worried that artificial intelligence may soon surpass human intelligence.

Di Carlo writes about those concerns in his new book, Building a God: The Ethics of Artificial Intelligence and the Race to Control It.

Di Carlo joined CBC K-W’s The Morning Edition host Craig Norris to talk about the promises, perils and ethical dilemmas of AI. 

Audio of this interview can be found at the bottom of this story. This interview has been edited for length and clarity.

Craig Norris: In the book you talk about artificial general intelligence and artificial narrow intelligence. What’s the difference?

Christopher Di Carlo: Right now we have ANI, or artificial narrow intelligence. That’s Siri, Alexa, autonomous vehicles, even your Roomba. These are narrow in the sense that they really can’t do anything outside of what their very limited algorithms tell them to do. But AGI, or artificial general intelligence, well, that’s the Holy Grail of the business world right now.

That’s what about five billionaire tech bros are racing to try to achieve. That’s the level at which AI becomes very human-like. It’s more generalized. It doesn’t just do one thing; it can do many things at a time, and it can do them 1,000 times better than any human.

Craig Norris: How could a government control a computer that is smarter than humans? 

Christopher Di Carlo: Nobody knows right now, and that’s the problem. These tech bros are racing ahead and the guardrails aren’t in place. You have a gentleman like Mr. [Donald] Trump who ripped up the Harris-Biden executive order.  

He’s opened up the floodgates to drill, baby, drill with AI. But the fact of the matter is we don’t have the guardrails in place, and we very much need to have them in place before we get to the point at which AGI becomes real.

Craig Norris: Like any technological advancement, obviously there’s good and bad. What good can come out of advancements in AI?

Christopher Di Carlo: There’s a lot of good. The great thing about AI is we’re living in this unique time period where we’re right in between how things used to be done and how they’re about to be done. And AI is going to bring a lot of great things to this world, especially in medicine. 

We’re already seeing fast improvements in diagnostic abilities, medicine development, treatment for the elderly, reconstructing tissues, growing organs, fixing the climate problem, working on world hunger, and coming up with new energy sources. AI has an awful lot to offer, and it will in many ways make the world a far better place.

Craig Norris: Was there a moment where you went, ‘oh man, I’ve got to write about this?’ What prompted this for you?

Christopher Di Carlo: So back in the ’90s, I tried to build this machine, this big brain, called the OSTOK Project, the Onion Skin Theory of Knowledge. It was basically a model in information theory. I approached a number of politicians and university presidents and asked: why not put Canada on the map?

We have telescopes to see farther into the universe and microscopes to see down to the atom. Why aren’t we building a big brain to help us think through difficult issues? I got nothing but positive feedback from everybody, but I couldn’t get a single dime from any of them to fund this project. 

At that point, I realized it was going to be built at some point. So I drafted up a universal global constitution, basically for the world, for when this thing actually does get built. I thought, I’m going to put this aside and focus more on critical thinking and try to get that into the high schools. Then November 2022 comes along and Sam Altman releases ChatGPT. So the turning point was these large language models using neural nets.

Craig Norris: Advancements in AI have many people worried about the future, their jobs, and where they’re going to fit in. Talk a bit about what you think this will do to a person’s mental health.

Christopher Di Carlo: It’s going to generate anxiety, depression, and psychosis on a number of levels. My colleague Jonathan Haidt at NYU has done extensive studies on how the smartphone has affected our youth, and it has, greatly.

Even now, look at the things we’re seeing with AI psychosis and chatbots, where people come to believe the chatbots are actually alive. That’s just one area. My prediction is, once the world finds out that these tech bros are racing toward building essentially a superintelligent machine God that we might not be able to control, that’s going to generate a kind of global angst similar to how people reacted when nuclear weaponry was introduced.

LISTEN | The ethics behind AI creation:


Guelph-based ethicist Christopher Di Carlo talks about his concerns that artificial intelligence may soon surpass human intelligence in his new book, Building a God: The Ethics of Artificial Intelligence and the Race to Control It.





Robinhood CEO says just like every company became a tech company, every company will become an AI company


Earlier advances in software, cloud, and mobile capabilities forced nearly every business—from retail giants to steel manufacturers—to invest in digital transformation or risk obsolescence. Now, it’s AI’s turn.

Companies are pumping billions of dollars into AI investments to keep pace with a rapidly changing technology that’s transforming the way business is done.

Robinhood CEO Vlad Tenev told David Rubenstein this week on Bloomberg Wealth that the race to implement AI in business is a “huge platform shift” comparable to the mobile and cloud transformations in the mid-2000s, but “perhaps bigger.”

“In the same way that every company became a technology company, I think that every company will become an AI company,” he explained. “But that will happen at an even more accelerated rate.”

Tenev, who co-founded the brokerage platform in 2013, pointed out that traders are not just trading to make money, but also because they love it and are “extremely passionate about it.”

“I think there will always be a human element to it,” he added. “I don’t think there’s going to be a future where AI just does all of your thinking, all of your financial planning, all the strategizing for you. It’ll be a helpful assistant to a trader and also to your broader financial life. But I think the humans will ultimately be calling the shots.”

Yet Tenev anticipates AI will change jobs. On an August episode of the Iced Coffee Hour podcast, he advised people to become “AI native” quickly to avoid being left behind, adding that AI will be able to scale businesses far faster than previous tech booms did.

“My prediction over the long run is you’ll have more single-person companies,” Tenev said on the podcast. “One individual will be able to use AI as a huge accelerant to starting a business.”

Global businesses are banking on artificial intelligence technologies to move rapidly from the experimental stage to daily operations, though a recent MIT survey found that 95% of pilot programs failed to deliver.

U.S. tech giants are racing ahead, with the so-called hyperscalers planning to spend $400 billion on capital expenditures in the coming year, and most of that is going to AI.

Studies show AI has already permeated a majority of businesses. A recent McKinsey survey found 78% of organizations use AI in at least one business function, up from 72% in early 2024 and 55% in early 2023. Now, companies are looking to continually update cutting-edge technology.

In the finance world, JPMorgan Chase’s Jamie Dimon believes AI will “augment virtually every job,” and described its impact as “extraordinary and possibly as transformational as some of the major technological inventions of the past several hundred years: think the printing press, the steam engine, electricity, computing, and the Internet.”






California Lawmakers Once Again Challenge Newsom’s Tech Ties with AI Bill


Last year, California Governor Gavin Newsom vetoed a wildly popular (among the public) and wildly controversial (among tech companies) bill that would have established robust safety guidelines for the development and operation of artificial intelligence models. Now he’ll have a second shot—this time with at least part of the tech industry giving him the green light. On Saturday, California lawmakers passed Senate Bill 53, a landmark piece of legislation that would require AI companies to submit to new safety tests.

Senate Bill 53, which now awaits the governor’s signature to become law in the state, would require companies building “frontier” AI models—systems that require massive amounts of data and computing power to operate—to provide more transparency about their processes. That would include disclosing safety incidents involving dangerous or deceptive behavior by autonomous AI systems, providing more clarity on safety and security protocols and risk evaluations, and offering protections for whistleblowers who are concerned about the potential harms that may come from models they are working on.

The bill—which would apply to the work of companies like OpenAI, Google, xAI, Anthropic, and others—has certainly been dulled relative to previous attempts to set up a broad safety framework for the AI industry. The bill that Newsom vetoed last year, for instance, would have established a mandatory “kill switch” for models to address the potential of them going rogue. That’s nowhere to be found here. An earlier version of SB 53 also applied the safety requirements to smaller companies, but that has changed. In the version that passed the Senate and Assembly, companies bringing in less than $500 million in annual revenue only have to disclose high-level safety details rather than more granular information, per Politico—a change made in part at the behest of the tech industry.

Whether that’s enough to satisfy Newsom (or more specifically, satisfy the tech companies from whom he would like to continue receiving campaign contributions) is yet to be seen. Anthropic recently softened on the legislation, opting to throw its support behind it just days before it officially passed. But trade groups like the Consumer Technology Association (CTA) and Chamber of Progress, which count among their members companies like Amazon, Google, and Meta, have come out in opposition to the bill. OpenAI has also signaled opposition to the regulations California has been pursuing, without specifically naming SB 53.

After the Trump administration tried and failed to impose a 10-year moratorium on state AI regulations, California has the opportunity to lead on the issue—which makes sense, given that most of the companies at the forefront of the space operate within its borders. But that fact also seems to be part of the reason Newsom is so hesitant to pull the trigger on regulations despite all his bluster on many other issues. His political ambitions require money to run, and those companies have a whole lot of it to offer.





Will Smith allegedly used AI in concert footage. We’re going to see a lot more of this…


Earlier this month, allegedly AI-generated footage of one of Will Smith’s gigs was released.

Snopes agreed that the crowd shots featured ‘some AI manipulation’.




