Tools & Platforms
Bill Gates’ bold prediction: AI won’t replace this career even after 100 years; can you guess it?

Bill Gates has spent almost fifty years pushing software forward, so when he says a single profession will stay human long after artificial intelligence rewires the rest of the economy, people listen. Speaking in early July, the Microsoft co-founder told interviewers that programming will “remain a human job for at least a century”. He is not brushing off AI’s power: the World Economic Forum’s 2025 report projects that automation could erase 92 million roles by 2030 while creating about 170 million new ones. But he draws a clear line around code. Writing software, he argues, is less about typing syntax and more about spotting unseen patterns, judging trade-offs and making a leap no algorithm can anticipate. AI can already draft snippets, debug routine errors and suggest architectural templates, yet the spark that turns a half-baked idea into working logic still comes from a person at a keyboard.
Bill Gates’ latest AI prediction
Gates shared the view in separate conversations with The Economic Times and The Tonight Show, then echoed it during a podcast with Zerodha’s Nikhil Kamath. Each time he circled back to the same point: tools like Copilot and ChatGPT are power chisels, not replacement carpenters. They shorten grunt work but leave the blueprint to us.
Why programming, of all careers, stays human
Code often starts with an ill-formed idea: say, turning sensor data into a flood-prediction dashboard. A machine can crunch numbers, yet deciding which signals matter and how the user will act on the output involves judgment, negotiation and a few flashes of intuition. Gates calls that the “creative leap” AI cannot copy.
Large models spit out plausible text, but a misplaced bracket or misunderstood requirement can crash a critical system. Spotting edge cases takes domain insight and lived experience, qualities still thin in training data. Gates says AI can help with “boring stuff like debugging,” but final responsibility sits with a human reviewer.
APIs are deprecated, laws shift, and users click in ways nobody predicted. The best programmers keep adapting code to fit messy reality. AI excels at frozen snapshots; humans excel at moving targets.
Other jobs that Bill Gates thinks are safer
Gates singles out biology and energy as disciplines where scientific curiosity, ethical trade-offs and crisis management require a person in charge. Sports, he jokes, will also stay human because nobody wants to watch robots play baseball.
How does this prediction fit broader job-market numbers?
The World Economic Forum projects a net gain of 78 million jobs by 2030, despite losses in clerical and routine design roles. Programming sits on the creation side of that ledger, not because AI is weak, but because software problems keep mutating faster than the models that try to automate them.
Related FAQs
1. What job did Bill Gates say AI will not replace?
- He said programming will remain human for at least the next hundred years.
2. Why does he think coding is safe?
- Gates argues that programming relies on creativity, judgment and deep problem-solving, traits he believes AI cannot fully mimic.
3. Can AI still help programmers?
- Yes. Tools can draft boilerplate code, suggest fixes and catch simple bugs, but humans guide architecture and make final calls.
4. Are any other careers safe according to Gates?
- He also lists biologists, energy experts and professional athletes as roles likely to stay human-led.
5. Does Gates dismiss AI risks?
- No. He acknowledges AI could displace millions of workers and says society must rethink how people use their newfound free time.
Tools & Platforms
AI data provider Invisible raises $100M at $2B+ valuation

Invisible Technologies Inc., a startup that provides training data for artificial intelligence projects, has raised $100 million in funding.
Bloomberg reported today that the deal values the company at more than $2 billion. Newly formed venture capital firm Vanara Capital led the round with participation from Acrew Capital, Greycroft and more than a half dozen others.
AI training datasets often include annotations that summarize the records they contain. A business document, for example, might include an annotation that explains the topic it discusses. Such explanations make it easier for the AI model being trained to understand the data, which can improve its output quality.
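As a rough illustration, an annotated record of that kind might look something like the following sketch (a hypothetical schema, not Invisible’s actual format):

```python
# A hypothetical annotated training record: the raw document plus
# human-written labels that tell the model what it is looking at.
record = {
    "document": "Q3 revenue rose 12% on strong demand for cloud services...",
    "annotations": {
        "topic": "quarterly earnings",
        "sentiment": "positive",
        "summary": "A business report describing quarter-over-quarter revenue growth.",
    },
}
```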
Invisible provides enterprises with access to experts who can produce custom training data and annotations for their AI models. Those experts also take on certain other projects. Notably, they can create data for RLHF, or reinforcement learning from human feedback, initiatives.
RLHF is a post-training method, which means it’s used to optimize AI models that have already been trained. The process involves giving the model a set of prompts and asking human experts to rate the quality of its responses. The experts’ ratings are used to train a neural network called a reward model. This model, in turn, provides feedback to the original AI model that helps it generate more useful prompt responses.
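In code, the reward-model step might look roughly like this PyTorch sketch, which assumes pairwise preference data where raters preferred one of two responses (the model shape and variable names are illustrative, not Invisible’s actual pipeline):

```python
# Minimal sketch of reward-model training for RLHF, assuming raters
# compared response pairs and picked the better one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a response embedding; a higher score means a better response."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy stand-ins for embedded (chosen, rejected) response pairs.
chosen = torch.randn(16, 64)    # responses the human raters preferred
rejected = torch.randn(16, 64)  # responses they ranked lower

for step in range(100):
    # Bradley-Terry style loss: push the chosen score above the rejected one.
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, the reward model stands in for the human raters, scoring new responses so the original model can be tuned toward higher-rated output.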
Invisible offers a tool called Neuron that helps customers manage their training datasets. The software can combine annotated data with external information, including both structured and unstructured records. It also creates an ontology in the process: a file that explains the different types of records in a training dataset and the connections between them.
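As a rough sketch, an ontology of this kind boils down to record types and the links between them; something like the following hypothetical structure (not Neuron’s actual file format):

```python
# A hypothetical, simplified dataset ontology: the record types in a
# training set and the relationships that connect them.
ontology = {
    "record_types": {
        "invoice": {"fields": ["vendor", "amount", "date"]},
        "purchase_order": {"fields": ["vendor", "items", "date"]},
        "vendor": {"fields": ["name", "address"]},
    },
    "relationships": [
        {"from": "invoice", "to": "purchase_order", "type": "fulfills"},
        {"from": "invoice", "to": "vendor", "type": "issued_by"},
    ],
}
```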
Another Invisible tool, Atomic, enables companies to collect data on how employees perform repetitive business tasks. The company says that this data makes it possible to automate manual work with AI agents. Additionally, Invisible offers a third tool called Synapse that helps developers implement automation workflows.
“Our software platform, combined with our expert marketplace, enables companies to organize, clean, label, and map their data,” said Invisible Chief Executive Officer Matthew Fitzpatrick. “This foundation enables them to build agentic workflows that drive real impact.”
Today’s funding round follows a period of rapid growth for the company. Between 2020 and 2024, Invisible’s annual revenue increased by a factor of over 48 to $134 million. This year, the data provider doubled the size of its engineering group and refreshed its leadership team.
Invisible will use the new capital to enhance its software tools. The investment comes amid rumors that a competing provider of AI training data, Surge AI Inc., may also raise funding at a multibillion-dollar valuation.
Tools & Platforms
Anthropic Taps Higher Education Leaders for Guidance on AI

The artificial intelligence company Anthropic is working with six leaders in higher education to help guide how its AI assistant Claude will be developed for teaching, learning and research. The new Higher Education Advisory Board, announced in August, will provide regular input on educational tools and policies.
According to a news release from Anthropic, the board is tasked with ensuring that AI “strengthens rather than undermines learning and critical thinking skills” through policies and products that support academic integrity and student privacy.
As teachers adapt to AI, ed-tech leaders have called for educators to play an active role in aligning AI to educational standards.
“Teachers and educators and administrators should be in the decision-making seat at every critical decision-making point when AI is being used in education,” Isabella Zachariah, formerly a fellow at the U.S. Department of Education’s Office of Educational Technology, said at the EDUCAUSE conference in October 2024. The Office of Educational Technology has since been shuttered by the Trump administration.
To this end, advisory boards or councils involving educators have emerged in recent years among ed-tech companies and institutions seeking to ground AI deployments in classroom experiences. For example, the K-12 software company Otus formed an AI advisory board earlier this year with teachers, principals, instructional technology specialists and district administrators representing more than 20 school districts across 11 states. Similarly, software company Frontline Education launched an AI advisory council last month to allow district leaders to participate in pilots and influence product design choices.
The Anthropic board taps experts in the education, nonprofit and technology sectors, including two former university presidents and three campus technology leaders. Rick Levin, former president of Yale University and former CEO of Coursera, will serve as board chair. Other members include:
- David Leebron, former president of Rice University
- James DeVaney, associate vice provost for academic innovation at the University of Michigan
- Julie Schell, assistant vice provost of academic technology at the University of Texas at Austin
- Matthew Rascoff, vice provost for digital education at Stanford University
- Yolanda Watson Spiva, president of Complete College America
The board contributed to a recent trio of AI fluency courses for colleges and universities, according to the news release. The online courses aim to give students and faculty a foundation in the function, limitations and potential uses of large language models in academic settings.
Schell said she joined the advisory board to explore how technology can address persistent challenges in learning.
“Sometimes we forget how cognitively taxing it is to really learn something deeply and meaningfully,” she said. “Throughout my career, I’ve been excited about the different ways that technology can help accentuate best practices in teaching or pedagogy. My mantra has always been pedagogy first, technology second.”
In her work at UT Austin, Schell has focused on responsible use of AI and engaged with faculty, staff, students and the general public to develop guiding principles. She said she hopes to bring the feedback from the community, as well as education science, to regular meetings. She said she participated in vetting existing Anthropic ed-tech tools, like Claude Learning mode, with this in mind.
In the weeks since the board’s announcement, the group has met once, Schell said, and expects to meet regularly in the future.
“I think it’s important to have informed people who understand teaching and learning advising responsible adoption of AI for teaching and learning,” Schell said. “It might look different than other industries.”
Tools & Platforms
Duke AI program emphasizes critical thinking for job security

Duke’s AI program is spearheaded by a professor who not only teaches but also built his own AI model.
Professor Jon Reifschneider says we’ve already entered a new era of teaching and learning across disciplines.
He says, “We have folks that go into healthcare after they graduate, go into finance, energy, education, etc. We want them to bring with them a set of skills and knowledge in AI, so that they can figure out: ‘How can I go solve problems in my field using AI?'”
He wants his students to become literate in AI, which is a challenge in a field he describes as a moving target.
“I think for most people, AI is kind of a mysterious black box that can do somewhat magical things, and I think that’s very risky to think that way, because you don’t develop an appreciation of when you should use it and when you shouldn’t use it,” Reifschneider told WRAL News.
Student Harshitha Rasamsetty said she is learning the strengths and shortcomings of AI.
“We always look at the biases and privacy concerns and always consider the user,” she said.
The students in Duke’s engineering master’s programs come from a wide range of backgrounds, countries and age groups. Jared Bailey paused his insurance career in Florida to get a handle on the AI being deployed company-wide.
He was already using AI tools when he wondered, “What if I could crack them open and adjust them myself and make them better?”
John Ernest studied engineering in undergrad, but sought job security in AI.
“I hear news every day that AI is replacing this job, AI is replacing that job,” he said. “I came to a conclusion that I should be a part of a person building AI, not be a part of a person getting replaced by AI.”
Reifschneider thinks warnings about AI taking jobs are overblown.
In fact, he wants his students to come away understanding that humans have a quality AI can’t replace. That’s critical thinking.
Reifschneider says AI “still relies on humans to guide it in the right direction, to give it the right prompts, to ask the right questions, to give it the right instructions.”
“If you can’t think, well, AI can’t take you very far,” Bailey said. “It’s a car with no gas.”
Reifschneider told WRAL that he thinks children as young as elementary school students should begin learning how to use AI, when it’s appropriate to do so, and how to use it safely.
WRAL News went inside Wake County schools to see how it is being used and what safeguards the district is using to protect students. Watch that story Wednesday on WRAL News.