AI Can Generate Code. Is That a Threat to Computer Science Education?


Some of Julie York’s high school computer science students are worried about what generative artificial intelligence will mean for future careers in the tech industry. If generative AI can code, then what is left for them to do? Will those jobs they are working toward still be available by the time they graduate? Is it still worth it to learn to code?

They are “worried about not being necessary anymore,” said York, who teaches at South Portland High School in South Portland, Maine. “The biggest fear is, if the computer can do this, then what can I do?”

The anxieties are fueled by the current landscape of the industry: Many technology companies are laying off employees, with some linking the layoffs to the rise of AI. CEOs are embracing AI tools and making public statements that people no longer need to learn to code and that AI can replace lower- and mid-level software engineers.

However, many computer science education experts disagree with the idea that AI will make learning to code obsolete.

Technology CEOs “have an economic interest in making that argument,” said Philip Colligan, the chief executive officer of the Raspberry Pi Foundation, a U.K.-based global nonprofit focused on computer science education. “But I do think that argument is not only wrong, but it’s also dangerous.”

While computer science education experts acknowledged the uncertainty of the current job market, they argued it’s still valuable to learn to code along with foundational computer science principles, because those are the skills that will help students better navigate an AI-powered world.

Why teaching and learning coding is still important, even if AI can spit out code

The Raspberry Pi Foundation published a position paper in June outlining five arguments for why kids still need to learn to code in the age of AI. In an interview with Education Week, Colligan described them briefly:

  1. We need skilled human programmers who can guide, control, and critically evaluate AI outputs.
  2. Learning to code is an essential part of learning to program. “It is through the hard work of learning to code that [students] develop computational thinking skills,” Colligan said.
  3. Learning to code will open up more opportunities in the age of AI. It’s likely that as AI seeps into other industries, it will lead to more demand for computer science and coding skills, Colligan said.
  4. Coding is a literacy that helps young people have agency in a digital world. “Lots of the decisions that affect our lives are already being taken by AI systems,” Colligan said, and with computer science literacy, people have “the ability to challenge those automated decisions.”
  5. The kids who learn to code will shape the future. They’ll get to decide what technologies to build and how to build them, Colligan said.

Hadi Partovi, the CEO and founder of Code.org, agreed that the value of computer science isn’t just economic. It’s also about “equipping students with the foundation to navigate an increasingly digital world,” he wrote in a LinkedIn blog post. These skills, he said, matter even for students who don’t pursue tech careers.

“Computer science teaches problem-solving, data literacy, ethical decision-making and how to design complex systems,” Partovi wrote. “It empowers students not just to use technology but to understand and shape it.”

As for her worried students, York said it’s her job as a teacher to reassure them that their foundational skills are still necessary: AI can’t do anything on its own, and they still need to guide the tools.

“By teaching those foundational things, you’re able to use the tools better,” York said.

Computer science education should evolve with emerging technologies

If foundational computer science skills are even more valuable in a world increasingly powered by AI, then does the way teachers teach them need to change? Yes, according to experts.

“There is a new paradigm of computing in the world, which is this probabilistic, data-driven model, and that needs to be integrated into computer science classes,” said Colligan.

The Computer Science Teachers Association this year released its AI learning priorities, asserting that all students should:

  • understand how AI technologies work and where they might be used;
  • be able to use and critically evaluate AI systems, including their societal impacts and ethical considerations;
  • be able to responsibly create AI technologies, not just consume them; and
  • be innovative and persistent in solving problems with AI.

Some computer science teachers are already teaching about and modeling AI use with their students. York, for instance, allows her students to use large language models for brainstorming, to troubleshoot bugs in their code, or to help them get unstuck in a problem.

“It replaced the coding ducks,” York said. “It’s a method in computer science classes where you put a rubber duck in front of the student, and they talk through their problem to the duck. The intention is that, when you talk to a duck and you explain your problem, you kind of figure out what you want to say and what you want to do.”

The rise of generative AI in K-12 could also mean that educators need to rethink their assignments and assessments, said Allen Antoine, the director of computer science education strategy for the Texas Advanced Computing Center at the University of Texas at Austin.

“You need to do small tweaks of your lesson design,” Antoine said. “You can’t just roll out the same lesson you’ve been doing in CS for the last 20 years. Keep the same learning objective. Understand that the students need to learn this thing when they walk out. But let’s add some AI to have that discussion, to get them hooked into the assignment but also to help them think about how that assignment has changed now that they have access to these 21st century tools.”

But computer science education and AI literacy shouldn’t just be confined to computer science classes, experts said.

“All young people need to be introduced to what AI systems are, how they’re built, their potential, limitations and so on,” Colligan said. “The advent of AI technologies is opening up many more opportunities across the economy for kids who understand computers and computer science to be able to change the world for the better.”

What educators need in order to prepare students for what’s next

The challenge in making AI literacy and computer science cross-curricular is not new in education: Districts need more funding to provide teachers with the resources they need to teach AI literacy and other computer science skills, and educators need dedicated time to attend professional development opportunities, experts said.

“There are a lot of smart people across the nation who are developing different projects, different teacher professional development ideas,” Antoine said. “But there has to be some kind of a commitment from the top down to say that it’s important.”

The Trump administration has made AI in education a focus area: President Donald Trump, in April, signed an executive order that called for infusing AI throughout K-12 education. The U.S. Department of Education, in July, added advancing the use of AI in education as one of its proposed priorities for discretionary grant programs. And in August, first lady Melania Trump launched the Presidential AI Challenge for students and teachers to solve problems in their schools and communities with the help of AI.

The Trump administration’s AI push comes amid its substantial cuts to K-12 education and research.

Still, Antoine said he’s “optimistic that really good things are going to come from the new focus on AI.”






Tech industry successfully blocks ambitious California AI bill | MLex


By Amy Miller (September 15, 2025, 23:52 GMT | Insight) — The deep-pocketed tech industry has proven once again that it can block efforts to regulate artificial intelligence, even in California. Even though California legislators approved more than a dozen bills aimed at regulating AI, from chatbot safety to transparency to data centers, several proposals attempting to put guardrails around AI died after facing concerted opposition, including the closely watched Automated Decisions Safety Act, which would have set new rules for AI systems that make consequential decisions about individuals.





These fields could see job cuts because of artificial intelligence, federal data says


Artificial intelligence has some people excited and others scared as the rapidly evolving technology reshapes the job market.

Lucas Shriver is working hard at LEMA in St. Paul on a solar-powered battery station that can now be used as a power source in a desert. It’s a project, and a job, that’s been a long time coming.

“I think I was about 7 years old when I built a tree house by myself,” Shriver said. 

He earned his engineering degree from the University of St. Thomas in June. Having landed a full-time job, he is one of the lucky ones.

“In my own searching for jobs and my friends, the job market right now is quite difficult, and it does seem like people are looking for someone with five years of experience,” Shriver said.

His professor, John Abraham, agrees.

“The jobs at the bottom rung of a ladder for people to climb up in a corporation, those have been going away in the last two years,” Abraham said. “There’s 35% fewer entry-level [jobs]. If you’re a recent college graduate and you’re looking for a job, you’re up a creek.”

Federal data suggests three fields could see cuts because of AI: insurance adjusting, credit analysis and paralegal work. The data also suggests growth could come in the software, personal finance and engineering fields.

For job seekers of any age or field, Abraham suggests learning how to use artificial intelligence.

“This is a tool that increases effectiveness so much, you just have to know it if you’re going to compete,” he said.

And Shriver has the job to prove it.

“I have no idea where this is going, but as for today, I am gonna use AI,” he said.

Abraham says jobs that require empathy, like counseling and health care, may be safer from AI; he also says the trades will likely still be in demand.




Good governance holds the key to successful AI innovation


Organizations often balk at governance, seeing it as an obstacle to innovation. But in the fast-moving world of artificial intelligence (AI), a proper governance strategy is crucial to driving momentum, including building trust in the technology and delivering use cases at scale.

A lack of trust in AI, in particular, is a major hurdle to adoption and successful business outcomes. Employees are concerned about AI’s impact on their jobs, risk management teams worry about the safe and accurate use of AI, and customers are hesitant about how their personal data is being used. Robust governance strategies help address these trust issues while laying the groundwork for standardized processes and frameworks that support AI use at scale. Governance is also essential to compliance — an imperative for companies in highly regulated industries such as financial services and healthcare.

“Done right, governance isn’t putting on the brakes as it’s often preconceived,” says Camilla Austerberry, director at KPMG and co-lead of the Trusted AI capability, which helps organizations accelerate AI adoption and safe scaling through the implementation of effective governance and controls across the AI life cycle. “Governance can actually be a launchpad, clearing the path for faster, safer, and more scalable innovation.”

Best practices for robust AI governance

Despite governance’s role as a crucial AI enabler, most enterprises struggle with it, in part because of the fast-moving technology and regulatory climate as well as an out-of-sync organizational culture. According to Foundry’s AI Priorities Study 2025, governance, along with IT integration and security, ranks among the top hurdles to AI implementation, cited by 47% of responding organizations.

To be strategic about AI governance, experts recommend the following:

Focus on the basics. Because AI technologies and regulations are evolving so quickly, many organizations are unsure how to begin building a formal governance strategy. It’s important to create consensus on how AI strategy aligns with business strategy while establishing the proper structure and ownership of AI governance. “My advice is to be proportionate,” Austerberry says. “As the use of AI evolves, so will your governance, but you have to start somewhere. You don’t have to have it all baked in from the start.”

Include employees in the process. It’s important to give people easy access to the technology and encourage widespread use and experimentation. Companywide initiatives that gamify AI use can boost adoption and generate feedback on AI governance frameworks. Establishing ambassador or champion programs is another way to engage employees by way of trusted peers, and an AI center of excellence can play a role in developing a foundational understanding of AI’s potential as well as its risks.

“Programs that are successful within organizations go that extra mile of human touch,” says Steven Tiell, global head of AI Governance Advisory at SAS Institute. “The more stakeholders you include in that conversation early, the better.”

Emphasize governance’s relationship to compliance. Effective governance means less friction, especially when it comes to regulators and risk auditors slowing down AI implementation. Given the varied global regulatory climate, organizations should take a forward stance and think beyond compliance to establish governance with lasting legs. “You don’t want to have to change business strategy or markets when a government changes regulations or adds new ones,” says Tiell. “You want to be prepared for whatever comes your way.”



