

Dick Yarbrough: A semi-intelligent look at artificial intelligence



By Dick Yarbrough, syndicated columnist

Dr. Geoffrey Hinton is a British-Canadian cognitive psychologist and computer scientist who won the Nobel Prize in Physics last year “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” Between you and me, I got hosed. I should have been a winner.

The Nobel committee obviously overlooked my own entry entitled, “One molecule of glucose bound to one molecule of fructose will make sugar and winning the Nobel Prize sure would be sweet.” I don’t think they know a lot about physics over there in Sweden.

Just as I am known as a modest yet much-beloved columnist who bears an uncanny resemblance to a young Brad Pitt, Dr. Hinton, who looks nothing like Brad Pitt, young or old, is considered the Godfather of Artificial Intelligence. That’s like being Godfather of the Mafia. Only worse.

If somebody in the Mafia got out of hand, you would just shoot them or put them in a tub of concrete and deposit them in the East River. According to Dr. Hinton, artificial intelligence is likely to get rid of anybody left in the Mafia and the rest of us as well, and it won’t need a gun or a sack of concrete to do it.

“It’s not inconceivable,” he has stated, “that artificial intelligence could wipe out humanity,” saying there was a “10 to 20 percent chance” that AI would cause human extinction within the next three decades. In fact, many experts expect AI to become “smarter than people,” probably within the next 20 years.

Admittedly, I am not the go-to person on the subjects of cognitive psychology and computer science (although knowing how sugar is made is pretty impressive), but I would posit that it is not going to take 20 years for artificial intelligence to get smarter than people.

That’s already occurred in some instances. Just look at Congress.

Can you see a computer saying, “Beep! Beep! Hey, I want to suck up to Donald Trump. I think I will propose changing the name of Greenland to Red, White and Blueland and then he will get me elected to the Senate where I can do other dumb stuff. Boop!” There are some things a computer won’t do, even if a member of Congress will.

Dr. Hinton also worries about the impact of AI on religion. He says, “I think religion will be in trouble if we create other beings. Once we start creating beings that can think for themselves and do things for themselves, maybe even have bodies if they’re robots, we may start realizing we’re less special than we thought. And the idea that we’re very special and we were made in the image of God, that idea may go out the window.” An interesting observation.

Theologically speaking, if computers become robots, will there be girl robots and boy robots? If so, will boy robots let girl robots in the pulpit? Or will the boy robots tell other robots that if they think girl robots should be allowed to preach, they will be condemned to spend eternity in an electronic waste disposal bin at Best Buy?

As to whether or not we are made in the image of God, I believe that’s God’s call, not mine. Creation is His thing. I will say that had God asked me, there are a few people He created that I think we could just as soon have done without. I couldn’t find His image in them with a flashlight. Maybe He just put them here to show us He has a sense of humor.

I probably won’t be around to see how all this plays out, but despite the Godfather of AI’s ominous warning, no robot will ever make me feel less special. I’ve got a family that loves me more than I deserve. I have friends that have stood with me through the good times and the bad. I had a rewarding career. I am blessed to live in this special state in this special country.

Most of all, thanks to a benevolent editor willing to overlook misplaced commas and grammatical errors (Is it who or whom?), I have the opportunity to share my thoughts with you each week and to receive your feedback. That may come in the form of a kudo or a rap on the knuckles. I suspect robots won’t give a flying algorithm for you or your opinions. I do.

And there is nothing artificial about that.

You can reach Dick Yarbrough at dick@dickyarbrough.com or at P.O. Box 725373, Atlanta, Georgia 31139.





Robinhood CEO says just like every company became a tech company, every company will become an AI company



Earlier advances in software, cloud, and mobile capabilities forced nearly every business—from retail giants to steel manufacturers—to invest in digital transformation or risk obsolescence. Now, it’s AI’s turn.

Companies are pumping billions of dollars into AI investments to keep pace with a rapidly changing technology that’s transforming the way business is done.

Robinhood CEO Vlad Tenev told David Rubenstein this week on Bloomberg Wealth that the race to implement AI in business is a “huge platform shift” comparable to the mobile and cloud transformations in the mid-2000s, but “perhaps bigger.”

“In the same way that every company became a technology company, I think that every company will become an AI company,” he explained. “But that will happen at an even more accelerated rate.”

Tenev, who co-founded the brokerage platform in 2013, pointed out that traders trade not just to make money, but also because they love it and are “extremely passionate about it.”

“I think there will always be a human element to it,” he added. “I don’t think there’s going to be a future where AI just does all of your thinking, all of your financial planning, all the strategizing for you. It’ll be a helpful assistant to a trader and also to your broader financial life. But I think the humans will ultimately be calling the shots.”

Yet, during an August episode of the Iced Coffee Hour podcast, Tenev said he anticipates AI will change jobs and advised people to become “AI native” quickly to avoid being left behind. He added that AI will be able to scale businesses far faster than previous tech booms did.

“My prediction over the long run is you’ll have more single-person companies,” Tenev said on the podcast. “One individual will be able to use AI as a huge accelerant to starting a business.”

Global businesses are banking on artificial intelligence technologies to move rapidly from the experimental stage to daily operations, though a recent MIT survey found that 95% of pilot programs failed to deliver.

U.S. tech giants are racing ahead, with the so-called hyperscalers planning to spend $400 billion on capital expenditures in the coming year, and most of that is going to AI.

Studies show AI has already permeated a majority of businesses. A recent McKinsey survey found 78% of organizations use AI in at least one business function, up from 72% in early 2024 and 55% in early 2023. Now, companies are looking to continually update cutting-edge technology.

In the finance world, JPMorgan Chase’s Jamie Dimon believes AI will “augment virtually every job,” and described its impact as “extraordinary and possibly as transformational as some of the major technological inventions of the past several hundred years: think the printing press, the steam engine, electricity, computing, and the Internet.”






California Lawmakers Once Again Challenge Newsom’s Tech Ties with AI Bill



Last year, California Governor Gavin Newsom vetoed a wildly popular (among the public) and wildly controversial (among tech companies) bill that would have established robust safety guidelines for the development and operation of artificial intelligence models. Now he’ll have a second shot—this time with at least part of the tech industry giving him the green light. On Saturday, California lawmakers passed Senate Bill 53, a landmark piece of legislation that would require AI companies to submit to new safety tests.

Senate Bill 53, which now awaits the governor’s signature to become law in the state, would require companies building “frontier” AI models—systems that require massive amounts of data and computing power to operate—to provide more transparency into their processes. That would include disclosing safety incidents involving dangerous or deceptive behavior by autonomous AI systems, providing more clarity into safety and security protocols and risk evaluations, and providing protections for whistleblowers who are concerned about the potential harms that may come from models they are working on.

The bill—which would apply to the work of companies like OpenAI, Google, xAI, Anthropic, and others—has certainly been dulled from previous attempts to set up a broad safety framework for the AI industry. The bill that Newsom vetoed last year, for instance, would have established a mandatory “kill switch” for models to address the potential of them going rogue. That’s nowhere to be found here. An earlier version of SB 53 also applied the safety requirements to smaller companies, but that has changed. In the version that passed the Senate and Assembly, companies bringing in less than $500 million in annual revenue only have to disclose high-level safety details rather than more granular information, per Politico—a change made in part at the behest of the tech industry.

Whether that’s enough to satisfy Newsom (or more specifically, satisfy the tech companies from whom he would like to continue receiving campaign contributions) is yet to be seen. Anthropic recently softened on the legislation, opting to throw its support behind it just days before it officially passed. But trade groups like the Consumer Technology Association (CTA) and Chamber of Progress, which count companies like Amazon, Google, and Meta among their members, have come out in opposition to the bill. OpenAI also signaled its opposition to the regulations California has been pursuing, without specifically naming SB 53.

After the Trump administration tried and failed to implement a 10-year moratorium on states regulating AI, California has the opportunity to lead on the issue—which makes sense, given that most of the companies at the forefront of the space operate within its borders. But that fact also seems to be part of the reason Newsom is so hesitant to pull the trigger on regulation despite all his bluster on many other issues. His political ambitions require money to run, and those companies have a whole lot of it to offer.





Will Smith allegedly used AI in concert footage. We’re going to see a lot more of this…



Earlier this month, allegedly AI-generated footage from one of Will Smith’s gigs was released.

Snopes agreed that the crowd shots featured ‘some AI manipulation’.




