

NSU expands cybersecurity, AI programs to meet growing job demand

As cybersecurity threats and artificial intelligence continue reshaping the job market, Northeastern State University is stepping up its efforts to prepare students for these in-demand fields.

With programs targeting both K-12 engagement and college-level degrees, NSU is positioning itself as a key player in Oklahoma’s tech talent pipeline.

Cybersecurity: Training the Next Generation

NSU is working to meet the rising need for cybersecurity professionals by launching educational initiatives for students at multiple levels. Dr. Stacey White, the university’s cybersecurity program coordinator, says young people are especially suited for these roles because of their comfort with technology.

That’s why NSU is hosting cybersecurity camps and has built hands-on facilities like a cybersecurity lab to introduce students to real-world applications.

“When I first started in technology and the cyber world, it was usernames and passwords,” Dr. White said. “Today, it’s much more intricate than that.”

The Scope of the Problem

Cybercrime is a growing threat that shows no signs of slowing down. According to Dr. White, everyone should have a basic understanding of cybersecurity, but the greatest need lies in training new professionals who can keep up with evolving threats.

Currently, there are nearly 450,000 open cybersecurity jobs nationwide — including almost 4,200 in Oklahoma alone.

New AI Degree Launching This Fall

This fall, NSU is introducing a new degree in Artificial Intelligence and Data Analytics. Dr. Janet Buzzard, dean of the College of Business and Technology, says the program combines technical knowledge with business insight — a skill set that employers across many industries are seeking.

“All of our graduates in our College of Business and Technology need that skill set of artificial intelligence,” Dr. Buzzard said. “Not just the one major and degree that we’re promoting here.”

The new degree is designed to respond to student interest and market demand, offering versatile career paths in fields such as finance, logistics, and technology development.

Encouraging Early Engagement

Dr. Buzzard adds that exposing students to artificial intelligence and cybersecurity early in their academic careers helps them see these fields as viable and exciting career options.

This is one of the reasons NSU Broken Arrow is hosting a cybersecurity camp for middle school-aged students today and June 8. Campers will learn from industry professionals and experienced educators about the importance of cybersecurity, effective communication in a rapidly evolving digital world, and foundational concepts in coding and encoding.

NSU’s efforts to modernize its programs come at a crucial time, with both AI and cybersecurity jobs seeing major growth. For students and professionals alike, the university is building opportunities that align with the future of work.







IT Summit focuses on balancing AI challenges and opportunities — Harvard Gazette


Exploring the critical role of technology in advancing Harvard’s mission and the potential of generative AI to reshape the academic and operational landscape were the key topics discussed during the University’s 12th annual IT Summit. Hosted by the CIO Council, the June 11 event attracted more than 1,000 Harvard IT professionals.

“Technology underpins every aspect of Harvard,” said Klara Jelinkova, vice president and University chief information officer, who opened the event by praising IT staff for their impact across the University.

That sentiment was echoed by keynote speaker Michael D. Smith, the John H. Finley Jr. Professor of Engineering and Applied Sciences and Harvard University Distinguished Service Professor, who described “people, physical spaces, and digital technologies” as three of the core pillars supporting Harvard’s programs. 

In his address, “You, Me, and ChatGPT: Lessons and Predictions,” Smith explored the balance between the challenges and the opportunities of using generative AI tools. He pointed to an “explainability problem” in generative AI tools, which can produce responses that sound convincing but lack transparent reasoning: “Is this answer correct, or does it just look good?” Smith also highlighted the challenges of user frustration due to bad prompts, “hallucinations,” and the risk of overreliance on AI for critical thinking, given its “eagerness” to answer questions.

In showcasing innovative coursework from students, Smith highlighted the transformative potential of “tutorbots,” or AI tools trained on course content that can offer students instant, around-the-clock assistance. AI is here to stay, Smith noted, so educators must prepare students for this future by ensuring they become sophisticated, effective users of the technology. 

Asked by Jelinkova how IT staff can help students and faculty, Smith urged the audience to identify early adopters of new technologies to “understand better what it is they are trying to do” and support them through the “pain” of learning a new tool. Understanding these uses and fostering collaboration can accelerate adoption and “eventually propagate to the rest of the institution.” 

The spirit of innovation and IT’s central role at Harvard continued throughout the day’s programming, which was organized into four pillars:  

  • Teaching, Learning, and Research Technology included sessions where instructors shared how they are currently experimenting with generative AI, from the Division of Continuing Education’s “Bot Club,” where instructors collaborate on AI-enhanced pedagogy, to the deployment of custom GPTs and chatbots at Harvard Business School.
  • Innovation and the Future of Services included sessions on AI video experimentation, robotic process automation, ethical implementation of AI, and a showcase of the University’s latest AI Sandbox features.
  • Infrastructure, Applications, and Operations featured a deep dive on the extraordinary effort to bring the new David Rubenstein Treehouse conference center to life, including testing new systems in a physical “sandbox” environment and deploying thousands of feet of network cabling. 
  • And the Skills, Competencies, and Strategies breakout sessions reflected on the evolving skillsets required by modern IT — from automation design to vendor management — and explored strategies for sustaining high-functioning, collaborative teams, including workforce agility and continuous learning. 

Amid the excitement around innovation, the summit also explored the environmental impact of emerging technologies. In a session focused on Harvard’s leadership in IT sustainability — as part of its broader Sustainability Action Plan — presenters explored how even small individual actions, like crafting more effective prompts, can meaningfully reduce the processing demands of AI systems. As one panelist noted, “Harvard has embraced AI, and with that comes the responsibility to understand and thoughtfully assess its impact.” 





Tennis players criticize AI technology used by Wimbledon


Some tennis players are not happy with Wimbledon’s new AI line judges, as reported by The Telegraph. 

This is the first year that the prestigious tournament, which is still ongoing, has replaced its human line judges, who determine whether a ball is in or out, with an electronic line calling (ELC) system.

Numerous players criticized the AI technology, mostly for making incorrect calls that cost them points. Notably, British tennis star Emma Raducanu called out the technology for missing a ball that her opponent hit out; the ball instead had to be played as if it were in. On a television replay, the ball indeed looked out, The Telegraph reported.

Jack Draper, the British No. 1, also said he felt some line calls were wrong, saying he did not think the AI technology was “100 percent accurate.”

Player Ben Shelton had to speed up his match after being told that the new AI line system was about to stop working because of the dimming sunlight. Elsewhere, players said they couldn’t hear the new automated speaker system, and one deaf player said that without the line judges’ hand signals, she was unable to tell whether she had won a point.

The technology also suffered a glitch at a key point during a match this weekend between British player Sonay Kartal and the Russian Anastasia Pavlyuchenkova, when a ball went out but the system failed to make the call. The umpire had to step in to stop the rally and told the players to replay the point because the ELC had failed to track it. Wimbledon later apologized, attributing the failure to “human error”: the technology had been accidentally shut off during the match. It also adjusted the system so that, ideally, the mistake could not be repeated.

Debbie Jevans, chair of the All England Club, the organization that hosts Wimbledon, hit back at Raducanu and Draper, saying, “When we did have linesmen, we were constantly asked why we didn’t have electronic line calling because it’s more accurate than the rest of the tour.” 

We’ve reached out to Wimbledon for comment.

This is not the first time the AI technology has come under fire as tennis tournaments continue to either partially or fully adopt automated systems. Alexander Zverev, a German player, called out the same automated line judging technology back in April, posting a picture to Instagram showing where a ball called in was very much out. 

The critiques reveal the friction in completely replacing humans with AI, and make the case that a human-AI balance may be necessary as more organizations adopt such technology. Just recently, the company Klarna said it was looking to hire human workers after previously making a push to automate jobs.





AI Technology-Focused Training Campaigns: Raspberry Pi Foundation


The Raspberry Pi Foundation has issued a compelling report advocating for sustained emphasis on coding education despite the rapid advancement of AI technologies. The educational charity challenges emerging arguments that AI’s growing capability to generate code diminishes the need for human programming skills, warning against potential deprioritization of computer science curricula in schools.

The Raspberry Pi Foundation’s analysis presents coding as not merely a vocational skill but a fundamental literacy that develops critical thinking, problem-solving abilities, and technological agency — competencies argued to be increasingly vital as AI systems permeate all aspects of society. The foundation emphasizes that while AI may automate certain technical tasks, human oversight remains essential for ensuring the safety, ethics, and contextual relevance of computer-generated solutions.

For educators, parents, and policymakers, this report provides timely insights into preparing younger generations for an AI-integrated future.

Image Credit: Raspberry Pi Foundation


