

Dick Yarbrough: A semi-intelligent look at artificial intelligence



Dick Yarbrough

Syndicated columnist

Dr. Geoffrey Hinton is a British-Canadian cognitive psychologist and computer scientist who won the Nobel Prize in Physics last year “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” Between you and me, I got hosed. I should have been a winner.

The Nobel committee obviously overlooked my own entry entitled, “One molecule of glucose bound to one molecule of fructose will make sugar and winning the Nobel Prize sure would be sweet.” I don’t think they know a lot about physics over there in Sweden.

Just as I am known as a modest yet much-beloved columnist who bears an uncanny resemblance to a young Brad Pitt, Dr. Hinton, who looks nothing like Brad Pitt, young or old, is considered the Godfather of Artificial Intelligence. That’s like being Godfather of the Mafia. Only worse.

If somebody in the Mafia got out of hand, you would just shoot them or put them in a tub of concrete and deposit them in the East River. According to Dr. Hinton, artificial intelligence is likely to get rid of anybody left in the Mafia and the rest of us as well, and it won’t need a gun or a sack of concrete to do it.

“It’s not inconceivable,” he has stated, “that artificial intelligence could wipe out humanity,” saying there is a “10 to 20 percent chance” that AI will cause human extinction within the next three decades. In fact, many experts expect AI to become “smarter than people,” probably within the next 20 years.

Admittedly, I am not the go-to person on the subjects of cognitive psychology and computer science (although knowing how sugar is made is pretty impressive), but I would posit that it is not going to take 20 years for artificial intelligence to get smarter than people.

That’s already occurred in some instances. Just look at Congress.

Can you see a computer saying, “Beep! Beep! Hey, I want to suck up to Donald Trump. I think I will propose changing the name of Greenland to Red, White and Blueland and then he will get me elected to the Senate where I can do other dumb stuff. Boop!” There are some things a computer won’t do, even if a member of Congress will.

Dr. Hinton also worries about the impact of AI on religion. He says, “I think religion will be in trouble if we create other beings. Once we start creating beings that can think for themselves and do things for themselves, maybe even have bodies if they’re robots, we may start realizing we’re less special than we thought. And the idea that we’re very special and we were made in the image of God, that idea may go out the window.” An interesting observation.

Theologically speaking, if computers become robots, will there be girl robots and boy robots? If so, will boy robots let girl robots in the pulpit? Or will the boy robots tell other robots that if they think girl robots should be allowed to preach, they will be condemned to spend eternity in an electronic waste disposal bin at Best Buy?

As to whether or not we are made in the image of God, I believe that’s God’s call, not mine. Creation is His thing. I will say that had God asked me, there are a few people He created that I think we could just as soon have done without. I couldn’t find His image in them with a flashlight. Maybe He just put them here to show us He has a sense of humor.

I probably won’t be around to see how all this plays out, but despite the Godfather of AI’s ominous warning, no robot will ever make me feel less special. I’ve got a family that loves me more than I deserve. I have friends who have stood with me through the good times and the bad. I had a rewarding career. I am blessed to live in this special state in this special country.

Most of all, thanks to a benevolent editor willing to overlook misplaced commas and grammatical errors (Is it who or whom?), I have the opportunity to share my thoughts with you each week and to receive your feedback. That may come in the form of a kudo or a rap on the knuckles. I suspect robots won’t give a flying algorithm for you or your opinions. I do.

And there is nothing artificial about that.

You can reach Dick Yarbrough at dick@dickyarbrough.com or at P.O. Box 725373, Atlanta, Georgia 31139.





AI Can Generate Code. Is That a Threat to Computer Science Education?



Some of Julie York’s high school computer science students are worried about what generative artificial intelligence will mean for future careers in the tech industry. If generative AI can code, then what is left for them to do? Will those jobs they are working toward still be available by the time they graduate? Is it still worth it to learn to code?

They are “worried about not being necessary anymore,” said York, who teaches at South Portland High School in South Portland, Maine. “The biggest fear is, if the computer can do this, then what can I do?”

The anxieties are fueled by the current landscape of the industry: Many technology companies are laying off employees, with some linking the layoffs to the rise of AI. CEOs are embracing AI tools, making public statements that people don’t need to learn to code anymore and that AI tools can replace lower or mid-level software engineers.

However, many computer science education experts disagree with the idea that AI will make learning to code obsolete.

Technology CEOs “have an economic interest in making that argument,” said Philip Colligan, the chief executive officer of the Raspberry Pi Foundation, a U.K.-based global nonprofit focused on computer science education. “But I do think that argument is not only wrong, but it’s also dangerous.”

While computer science education experts acknowledged the uncertainty of the current job market, they argued it’s still valuable for students to learn to code along with foundational computer science principles, because those are the skills that will help them better navigate an AI-powered world.

Why teaching and learning coding is still important, even if AI can spit out code

The Raspberry Pi Foundation published a position paper in June outlining five arguments why kids still need to learn to code in the age of AI. In an interview with Education Week, Colligan described them briefly:

  1. We need skilled human programmers who can guide, control, and critically evaluate AI outputs.
  2. Learning to code is an essential part of learning to program. “It is through the hard work of learning to code that [students] develop computational thinking skills,” Colligan said.
  3. Learning to code will open up more opportunities in the age of AI. It’s likely that as AI seeps into other industries, it will lead to more demand for computer science and coding skills, Colligan said.
  4. Coding is a literacy that helps young people have agency in a digital world. “Lots of the decisions that affect our lives are already being taken by AI systems,” Colligan said, and with computer science literacy, people have “the ability to challenge those automated decisions.”
  5. The kids who learn to code will shape the future. They’ll get to decide what technologies to build and how to build them, Colligan said.

Hadi Partovi, the CEO and founder of Code.org, agreed that the value of computer science isn’t just economic. It’s also about “equipping students with the foundation to navigate an increasingly digital world,” he wrote in a LinkedIn blog post. These skills, he said, matter even for students who don’t pursue tech careers.

“Computer science teaches problem-solving, data literacy, ethical decision-making and how to design complex systems,” Partovi wrote. “It empowers students not just to use technology but to understand and shape it.”

With her worried students, York said it’s her job as a teacher to reassure them that their foundational skills are still necessary, that AI can’t do anything on its own, that they still need to guide the tools.

“By teaching those foundational things, you’re able to use the tools better,” York said.

Computer science education should evolve with emerging technologies

If foundational computer science skills are even more valuable in a world increasingly powered by AI, then does the way teachers teach them need to change? Yes, according to experts.

“There is a new paradigm of computing in the world, which is this probabilistic, data-driven model, and that needs to be integrated into computer science classes,” said Colligan.

The Computer Science Teachers Association this year released its AI learning priorities. The association asserted that all students should:

  1. understand how AI technologies work and where they might be used;
  2. be able to use and critically evaluate AI systems, including their societal impacts and ethical considerations;
  3. be able to create, and not just consume, AI technologies responsibly; and
  4. be innovative and persistent in solving problems with AI.

Some computer science teachers are already teaching about and modeling AI use with their students. York, for instance, allows her students to use large language models for brainstorming, to troubleshoot bugs in their code, or to help them get unstuck in a problem.

“It replaced the coding ducks,” York said. “It’s a method in computer science classes where you put a rubber duck in front of the student, and they talk through their problem to the duck. The intention is that, when you talk to a duck and you explain your problem, you kind of figure out what you want to say and what you want to do.”

The rise of generative AI in K-12 could also mean that educators need to rethink their assignments and assessments, said Allen Antoine, the director of computer science education strategy for the Texas Advanced Computing Center at the University of Texas at Austin.

“You need to do small tweaks of your lesson design,” Antoine said. “You can’t just roll out the same lesson you’ve been doing in CS for the last 20 years. Keep the same learning objective. Understand that the students need to learn this thing when they walk out. But let’s add some AI to have that discussion, to get them hooked into the assignment but also to help them think about how that assignment has changed now that they have access to these 21st century tools.”

But computer science education and AI literacy shouldn’t just be confined to computer science classes, experts said.

“All young people need to be introduced to what AI systems are, how they’re built, their potential, limitations and so on,” Colligan said. “The advent of AI technologies is opening up many more opportunities across the economy for kids who understand computers and computer science to be able to change the world for the better.”

What educators need in order to prepare students for what’s next

The challenge in making AI literacy and computer science cross-curricular is not new in education: Districts need more funding to provide teachers with the resources they need to teach AI literacy and other computer science skills, and educators need dedicated time to attend professional development opportunities, experts said.

“There are a lot of smart people across the nation who are developing different projects, different teacher professional development ideas,” Antoine said. “But there has to be some kind of a commitment from the top down to say that it’s important.”

The Trump administration has made AI in education a focus area: President Donald Trump, in April, signed an executive order that called for infusing AI throughout K-12 education. The U.S. Department of Education, in July, added advancing the use of AI in education as one of its proposed priorities for discretionary grant programs. And in August, first lady Melania Trump launched the Presidential AI Challenge for students and teachers to solve problems in their schools and communities with the help of AI.

The Trump administration’s AI push comes amid its substantial cuts to K-12 education and research.

Still, Antoine said he’s “optimistic that really good things are going to come from the new focus on AI.”







Google’s top AI scientist says ‘learning how to learn’ will be next generation’s most needed skill



ATHENS, Greece — A top Google scientist and 2024 Nobel laureate said Friday that the most important skill for the next generation will be “learning how to learn” to keep pace with change as artificial intelligence transforms education and the workplace.

Speaking at an ancient Roman theater at the foot of the Acropolis in Athens, Demis Hassabis, CEO of Google’s DeepMind, said rapid technological change demands a new approach to learning and skill development.

“It’s very hard to predict the future, like 10 years from now, in normal cases. It’s even harder today, given how fast AI is changing, even week by week,” Hassabis told the audience. “The only thing you can say for certain is that huge change is coming.”

The neuroscientist and former chess prodigy said artificial general intelligence — a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can — could arrive within a decade. This, he said, will bring dramatic advances and a possible future of “radical abundance” despite acknowledged risks.

Hassabis emphasized the need for “meta-skills,” such as understanding how to learn and optimizing one’s approach to new subjects, alongside traditional disciplines like math, science and humanities.

“One thing we’ll know for sure is you’re going to have to continually learn … throughout your career,” he said.

The DeepMind co-founder, who established the London-based research lab in 2010 before Google acquired it four years later, shared the 2024 Nobel Prize in chemistry for developing AI systems that accurately predict protein folding — a breakthrough for medicine and drug discovery.

Greece’s Prime Minister Kyriakos Mitsotakis, left, and Demis Hassabis, CEO of Google’s artificial intelligence research company DeepMind, discuss the future of AI, ethics and democracy during an event at the Odeon of Herodes Atticus, in Athens, Greece, Friday, Sept. 12, 2025. Credit: AP/Thanassis Stavrakis

Greek Prime Minister Kyriakos Mitsotakis joined Hassabis at the Athens event after discussing ways to expand AI use in government services. Mitsotakis warned that the continued growth of huge tech companies could create great global financial inequality.

“Unless people actually see benefits, personal benefits, to this (AI) revolution, they will tend to become very skeptical,” he said. “And if they see … obscene wealth being created within very few companies, this is a recipe for significant social unrest.”

Mitsotakis thanked Hassabis, whose father is Greek Cypriot, for rescheduling the presentation to avoid conflicting with the European basketball championship semifinal between Greece and Turkey. Greece later lost the game 94-68.





