AI Research
AI And Entrepreneurship Education: Preparing Students To Lead

Research indicates a fivefold increase in demand for AI skills, yet most schools still ban the use of ChatGPT. A recent survey found that 70% of graduates believe generative AI should be integrated into coursework, and more than half said they felt unprepared for the workforce. At the same time, 66% of teens aged 13-17 express interest in starting their own businesses, according to Junior Achievement data.
The disconnect is apparent: students want to build careers around emerging technology, but traditional education isn’t teaching them how. While schools debate AI policies, forward-thinking programs are already training middle schoolers to launch AI-powered ventures and solve real problems. They’re not preparing students for tomorrow’s job market. They’re teaching them to create it.
Real-World Learning Replaces Theoretical Education
The most effective programs abandon traditional classroom simulations in favor of authentic business creation. Students don’t earn grades—they gain customers, revenue, and practical skills that transfer directly to college applications and future careers.
At WIT (Whatever It Takes), which I started in 2009, teens launch actual businesses and social movements that address real community problems. In the college-credit programs, students pitch for actual prize money, receive real-time coaching from successful entrepreneurs, and develop presentations that have landed participants in major publications.
We ask participants one question: “What problem are you passionate about solving?” We then provide the tools, mentorship, and structure to help them build effective solutions.
WIT has worked with over 10,000 young people, providing leadership and entrepreneurial education through hands-on experience. The results speak volumes—our alumni consistently report higher confidence levels, stronger college applications, and clearer career direction compared to peers who only engage in traditional academic activities or simulation business programs.
This shift toward authentic learning experiences isn’t limited to K-12 education. As the demand for AI skills explodes across industries, universities are also abandoning traditional lecture-based models in favor of programs that prepare students to create rather than just consume technology.
Universities Embrace AI Integration
University of South Florida (USF) made history as the first university in Florida—and among the first nationally—to create an entire college dedicated to AI and cybersecurity. The Bellini College of Artificial Intelligence, Cybersecurity and Computing will welcome 3,000 students this fall, with plans to double enrollment in the first five years.
The timing reflects urgent market demands. Research indicates a fivefold increase in demand for AI skills in U.S. jobs, while more than 40% of organizations report being unable to find enough qualified cybersecurity professionals. The National Science Foundation awarded over $800 million for AI-related research in a single year.
“As AI and cybersecurity quickly evolve, the demand for professionals skilled in these areas continues to grow,” USF President Rhea Law explained. “Through the expertise of our faculty and our strong partnerships with the business community, the University of South Florida is strategically positioned to be a global leader in these fields.”
Dr. John Licato, Associate Professor at The Bellini College of Artificial Intelligence, Cybersecurity and Computing, puts this educational shift in perspective: “AI and cybersecurity already touch every single job on earth. Universities everywhere are trying to incorporate these technologies into their programs so students can practically leverage them, but at the same time further develop their own critical thinking and reasoning.”
USF Provost Dr. Prasant Mohapatra told me, “We’re not just producing job seekers—we’re producing job creators.” The college leverages USF’s existing strengths—approximately 200 faculty members already conduct research in related disciplines—while positioning the Tampa Bay region as a technology hub.
USF’s bold move breaks from traditional models of higher education. Most universities incorporate AI courses into their existing programs. USF built an entire college around emerging technologies, combining technical training with business education because students need both skills to succeed.
Bridging the K-12 AI Knowledge Gap
Teenagers already use AI tools regularly. Data shows 63% of U.S. teens use chatbots and text generators for schoolwork. Yet most schools ban these tools or label them as cheating. This creates a problem: students learn AI exists, but not how to use it ethically.
WIT created WITY to fill this gap. Our AI platform helps teens develop business ideas and conduct market research to inform their entrepreneurial endeavors. Students learn to work with AI without losing their creativity or critical thinking abilities.
USF also works with younger students. The Bellini College offers workshops for K-12 students through partnerships with education programs. These sessions introduce kids to AI concepts through hands-on projects.
Dr. Mohapatra shared his philosophy with me: “We want to show kids that AI isn’t something to fear. It’s something they can learn to use responsibly and creatively.”
AI Success Metrics That Matter
Programs that successfully prepare students for an AI-driven economy share several characteristics:
Authentic challenges: Students tackle real problems with genuine consequences, not hypothetical scenarios designed for assessment.
Interdisciplinary approach: Effective programs integrate technology, business, ethics, and social impact rather than teaching these subjects in isolation.
Confidence development: Students learn self-advocacy, self-worth, and self-value through entrepreneurial experiences. These capabilities transfer to college applications, job interviews, and leadership roles.
Early exposure: Rather than waiting until senior year, these programs introduce innovative thinking in middle school and early high school.
Research supports this approach. A 2022 Gallup survey found that students involved in entrepreneurship programs were 34% more likely to develop leadership skills and 41% more likely to report feeling prepared for future careers.
The AI Competitive Advantage
Students emerging from these programs possess advantages that traditional education alone cannot provide. They understand how to identify market opportunities, collaborate effectively with AI tools, and communicate their ideas clearly to diverse audiences.
College admissions officers increasingly recognize entrepreneurship as a marker of leadership, innovation, and problem-solving ability. Students who can demonstrate how they built something from the ground up bring more than just an application; they bring a track record of action.
These experiences provide rich material for personal statements and interviews while demonstrating the initiative and resilience that colleges value in their incoming classes.
Building Tomorrow’s AI-Driven Economy Today
Programs that combine AI literacy with entrepreneurial education create an exponential multiplier effect. Students don’t just learn to use existing tools—they develop the creative mindset to identify problems that AI can solve and the business acumen to turn those solutions into viable ventures.
The students graduating from these programs represent a new breed of innovator. They’re not just prepared for an AI-driven economy—they’re actively architecting it, armed with both deep technological fluency and the entrepreneurial skills to transform breakthrough ideas into market-changing impact. This represents a fundamental shift in educational philosophy—from preparing students for predetermined career paths in a static economy to empowering them to create entirely new industries and opportunities in our rapidly evolving technological landscape.
Senator Cruz Unveils AI Framework and Regulatory Sandbox Bill

On September 10, Senate Commerce, Science, and Transportation Committee Chair Ted Cruz (R-TX) released what he called a “light-touch” regulatory framework for federal AI legislation, outlining five pillars for advancing American AI leadership. In parallel, Senator Cruz introduced the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation (“SANDBOX”) Act (S. 2750), which would establish a federal AI regulatory sandbox program that would waive or modify federal agency regulations and guidance for AI developers and deployers. Collectively, the AI framework and the SANDBOX Act mark the first congressional effort to implement the recommendations of the AI Action Plan the Trump Administration released on July 23.
- Light-Touch AI Regulatory Framework
Senator Cruz’s AI framework, titled “A Legislative Framework for American Leadership in Artificial Intelligence,” calls for the United States to “embrace its history of entrepreneurial freedom and technological innovation” by adopting AI legislation that promotes innovation while preventing “nefarious uses” of AI technology. Echoing President Trump’s January 23 Executive Order on “Removing Barriers to American Leadership in Artificial Intelligence” and recommendations in the AI Action Plan, the AI framework sets out five pillars as a “starting point for discussion”:
- Unleashing American Innovation and Long-Term Growth. The AI framework recommends that Congress establish a federal AI regulatory sandbox program, provide access to federal datasets for AI training, and streamline AI infrastructure permitting. This pillar mirrors the priorities of the AI Action Plan and President Trump’s July 23 Executive Order on “Accelerating Federal Permitting of Data Center Infrastructure.”
- Protecting Free Speech in the Age of AI. Consistent with President Trump’s July 23 Executive Order on “Preventing Woke AI in the Federal Government,” Senator Cruz called on Congress to “stop government censorship” of AI (“jawboning”) and address foreign censorship of Americans on AI platforms. Additionally, while the AI Action Plan recommended revising the National Institute of Standards & Technology (“NIST”)’s AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change,” this pillar calls for reforming NIST’s “AI priorities and goals.”
- Prevent a Patchwork of Burdensome AI Regulation. Following a failed attempt by Congressional Republicans to enact a moratorium on the enforcement of state and local AI regulations in July, the AI Action Plan called on federal agencies to limit federal AI-related funding to states with burdensome AI regulatory regimes and on the FCC to review state AI laws that may be preempted under the Communications Act. Similarly, the AI framework calls on Congress to enact federal standards to prevent burdensome state AI regulation, while also countering “excessive foreign regulation” of Americans.
- Stop Nefarious Uses of AI Against Americans. In a nod to bipartisan support for state digital replica protections – which ultimately doomed Congress’s state AI moratorium this summer – this pillar calls on Congress to protect Americans against digital impersonation scams and fraud. Additionally, this pillar calls on Congress to expand the principles of the federal TAKE IT DOWN Act, signed into law in May, to safeguard American schoolchildren from nonconsensual intimate visual depictions.
- Defend Human Value and Dignity. This pillar appears to expand on the policy of U.S. “global AI dominance in order to promote human flourishing” established by President Trump’s January 23 Executive Order by calling on Congress to reinvigorate “bioethical considerations” in federal policy and to “oppose AI-driven eugenics and other threats.”
- SANDBOX Act
Consistent with recommendations in the AI Action Plan and AI Framework, the SANDBOX Act would direct the White House Office of Science & Technology Policy (“OSTP”) to establish and operate an “AI regulatory sandbox program” with the purpose of incentivizing AI innovation, the development of AI products and services, and the expansion of AI-related economic opportunities and jobs. According to Senator Cruz’s press release, the SANDBOX Act marks a “first step” in implementing the AI Action Plan, which called for “regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools.”
Program Applications. The AI regulatory sandbox program would allow U.S. companies and individuals, or the OSTP Director, to apply for a “waiver or modification” of one or more federal agency regulations in order to “test, experiment, or temporarily provide” AI products, AI services, or AI development methods. Applications must include various categories of information, including:
- Contact and business information,
- A description of the AI product, service, or development method,
- Specific regulation(s) that the applicant seeks to have waived or modified and why such waiver or modification is needed,
- Consumer benefits, business operational efficiencies, economic opportunities, jobs, and innovation benefits of the AI product, service, or development method,
- Reasonably foreseeable risks to health and safety, the economy, and consumers associated with the waiver or modification, and planned risk mitigations,
- The requested time period for the waiver or modification, and
- Each agency with jurisdiction over the AI product, service, or development method.
Agency Reviews and Approvals. The bill would require OSTP to submit applications to federal agencies with jurisdiction over the AI product, service, or development method within 14 days. In reviewing AI sandbox program applications, federal agencies would be required to solicit input from the private sector and technical experts on whether the applicant’s plan would benefit consumers, businesses, the economy, or AI innovation, and whether potential benefits outweigh health and safety, economic, or consumer risks. Agencies would be required to approve or deny applications within 90 days, with a record documenting reasonably foreseeable risks, the mitigations and consumer protections that justify agency approval, or the reasons for agency denial. Denied applicants would be authorized to appeal to OSTP for reconsideration. Approved waivers or modifications would be granted for a term of two years, with up to four additional two-year terms if requested by the applicant and approved by OSTP.
Participant Terms and Requirements. Participants with approved waivers or modifications would be immune from federal criminal, civil, or agency enforcement of the waived or modified regulations, but would remain subject to private consumer rights of action. Additionally, participants would be required to report incidents of harm to health and safety, economic damage, or unfair or deceptive trade practices to OSTP and federal agencies within 72 hours after the incident occurs, and to make various disclosures to consumers. Participants would also be required to submit recurring reports to OSTP throughout the term of the waiver or modification, which must include the number of consumers affected, likely risks and mitigations, any unanticipated risks that arise during deployment, adverse incidents, and the benefits of the waiver or modification.
Congressional Review. Finally, the SANDBOX Act would require the OSTP Director to submit to Congress any regulations that the Director recommends for amendment or repeal “as a result of persons being able to operate safely” without those regulations under the sandbox program. The bill would establish a fast-track procedure for joint resolutions approving such recommendations, which, if enacted, would immediately repeal the regulations or adopt the amendments recommended by OSTP.
The SANDBOX Act’s regulatory sandbox program would sunset in 12 years unless renewed. The introduction of the SANDBOX Act comes as states have pursued their own AI regulatory sandbox programs – including a sandbox program established under the Texas Responsible AI Governance Act (“TRAIGA”), enacted in June, and an “AI Learning Laboratory Program” established under Utah’s 2024 AI Policy Act. The SANDBOX Act would require OSTP to share information with these state AI sandbox programs if they are “similar or comparable” to the SANDBOX Act, in addition to coordinating reviews and accepting “joint applications” for participants with AI projects that would benefit from “both Federal and State regulatory relief.”
AI scientist says ‘learning how to learn’ will be next generation’s most needed skill

ATHENS, Greece — A top Google scientist and 2024 Nobel laureate said Friday that the most important skill for the next generation will be “learning how to learn” to keep pace with change as artificial intelligence transforms education and the workplace.
Speaking at an ancient Roman theater at the foot of the Acropolis in Athens, Demis Hassabis, CEO of Google’s DeepMind, said rapid technological change demands a new approach to learning and skill development.
“It’s very hard to predict the future, like 10 years from now, in normal cases. It’s even harder today, given how fast AI is changing, even week by week,” Hassabis told the audience. “The only thing you can say for certain is that huge change is coming.”
The neuroscientist and former chess prodigy said artificial general intelligence — a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can — could arrive within a decade. This, he said, will bring dramatic advances and a possible future of “radical abundance” despite acknowledged risks.
Hassabis emphasized the need for “meta-skills,” such as understanding how to learn and optimizing one’s approach to new subjects, alongside traditional disciplines like math, science and humanities.
“One thing we’ll know for sure is you’re going to have to continually learn … throughout your career,” he said.
The DeepMind co-founder, who established the London-based research lab in 2010 before Google acquired it four years later, shared the 2024 Nobel Prize in chemistry for developing AI systems that accurately predict protein folding — a breakthrough for medicine and drug discovery.
Greek Prime Minister Kyriakos Mitsotakis joined Hassabis at the Athens event after discussing ways to expand AI use in government services. Mitsotakis warned that the continued growth of huge tech companies could create great global financial inequality.
“Unless people actually see benefits, personal benefits, to this (AI) revolution, they will tend to become very skeptical,” he said. “And if they see … obscene wealth being created within very few companies, this is a recipe for significant social unrest.”
Mitsotakis thanked Hassabis, whose father is Greek Cypriot, for rescheduling the presentation to avoid conflicting with the European basketball championship semifinal between Greece and Turkey. Greece later lost the game 94-68.
____
Kelvin Chan in London contributed to this story.
Should You Forget Nvidia and Buy These 2 Artificial Intelligence (AI) Stocks Instead?

Both AMD and Broadcom have an opportunity to outperform in the coming years.
Nvidia is the king of artificial intelligence (AI) infrastructure, and for good reason. Its graphics processing units (GPUs) have become the main chips for training large language models (LLMs), and its CUDA software platform and NVLink interconnect system, which helps its GPUs act like a single chip, have helped create a wide moat.
Nvidia has grown to become the largest company in the world, with a market cap of over $4 trillion. In Q2, it held a whopping 94% market share for GPUs and saw its data center revenue soar 56% to $41.1 billion. That’s impressive, but those large numbers may be why there could be some better opportunities in the space.
Two stocks to take a closer look at are Advanced Micro Devices (AMD 1.91%) and Broadcom (AVGO 0.19%). Both are smaller players in AI chips, and as the market shifts from training toward inference, they’re both well positioned. The reality is that while large cloud computing and other hyperscalers (companies with large data centers) love Nvidia’s GPUs, they would prefer more alternatives to help reduce costs and diversify their supply chains.
1. AMD
AMD is a distant second to Nvidia in the GPU market, but the shift to inference should help it. Training is Nvidia’s stronghold, and where its CUDA moat is strongest. However, inference is where demand is accelerating, and AMD has already started to win customers.
AMD management has said one of the largest AI model operators in the world is using its GPUs for a sizable portion of daily inference workloads and that seven of the 10 largest AI model companies use its GPUs. That’s important because inference isn’t a one-time event like training. Every time someone asks a model a question or gets a recommendation, GPUs are providing the power for these models to get the answer. That’s why cost efficiency matters more than raw peak performance.
That’s exactly where AMD has a shot to take some market share. Inference doesn’t need the same libraries and tools as training, and AMD’s ROCm software platform is more than capable of handling inference workloads. And once performance is comparable, price becomes more of a deciding factor.
AMD doesn’t need to take a big bite out of Nvidia’s share to move the needle. Nvidia just posted $41.1 billion in data center revenue last quarter, while AMD came in at $3.2 billion. Even small wins can have an outsize impact when you start from a base that is a fraction of the size of the market leader.
On top of that, AMD helped launch the UALink Consortium, which includes Broadcom and Intel, to create an open interconnect standard that competes with Nvidia’s proprietary NVLink. If successful, that would break down one of Nvidia’s big advantages and allow customers to build data center clusters with chips from multiple vendors. That’s a long-term effort, but it could help improve the playing field.
With inference expected to become larger than training over time, AMD doesn’t need to beat Nvidia to deliver strong returns; it just needs a slightly bigger share.
2. Broadcom
Broadcom is attacking the AI opportunity from another angle, and the upside may be even more compelling. Instead of designing off-the-shelf GPUs, Broadcom is helping customers make their own custom AI chips.
Broadcom is a leader in helping design application-specific integrated circuits, or ASICs, and it has taken that expertise and applied it to making custom AI chips. Its first customer was Alphabet, for which it helped design the highly successful Tensor Processing Units (TPUs) that now help power Google Cloud. This success led to other design wins, including with Meta Platforms and TikTok owner ByteDance. Combined, Broadcom has said these three customers represent a $60 billion to $90 billion serviceable addressable market by its fiscal 2027 (ending October 2027).
However, the news got even better when the company revealed that a fourth customer, widely believed to be OpenAI, placed a $10 billion order for next year. Designing ASICs is typically not a quick process. Alphabet’s TPUs took about 18 months from start to finish, which at the time was considered quick. But this newest deal shows Broadcom can keep up that fast pace. This also bodes well for future deals, as late last year it was revealed that Apple will be a fifth customer.
Custom chips have clear advantages for inference. They’re designed for specific workloads, so they deliver better power efficiency and lower costs than off-the-shelf GPUs. As inference demand grows larger than training, Broadcom’s role as the go-to design partner becomes more valuable.
Now, custom chips have large upfront costs to design and aren’t for everyone, but this is a huge potential opportunity for Broadcom moving forward.
The bottom line
Nvidia is still the dominant player in AI infrastructure, and I don’t see that changing anytime soon. However, both AMD and Broadcom have huge opportunities in front of them and are starting at much smaller bases. That could help them outperform in the coming years.
Geoffrey Seiler has positions in Alphabet. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Apple, Meta Platforms, and Nvidia. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.