
SoftBank founder Son makes his biggest bet by staking the future on AI



Masayoshi Son, chairman and chief executive officer of SoftBank Group Corp., speaks at the SoftBank World event in Tokyo, Japan, on Wednesday, July 16, 2025.

Kiyoshi Ota | Bloomberg | Getty Images

Masayoshi Son is making his biggest bet yet: that his brainchild SoftBank will be the center of a revolution driven by artificial intelligence.

Son says artificial superintelligence (ASI) — AI that is 10,000 times smarter than humans — will be here in 10 years. It’s a bold call, but perhaps not a surprising one. He’s made a career out of big plays, most notably a $20 million investment in Chinese e-commerce company Alibaba in 2000 that has made billions for SoftBank.

Now, the billionaire is hoping to replicate that success with a series of investments and acquisitions in AI firms that will put SoftBank at the center of a fundamental technological shift.

While Son has been outspoken about his vision over the last year, his thinking long predates his recent bullishness, according to two former executives at SoftBank.

“I vividly remember the first time he invited me to his home for dinner and sitting on his porch over a glass of wine, he started talking to me about singularity – the point at which machine intelligence overtakes human intelligence,” Alok Sama, SoftBank’s finance chief until 2016 and its president until 2019, told CNBC.

SoftBank’s big AI plays

For Son, AI seems personal.

“SoftBank was founded for what purpose? For what purpose was Masa Son born? It may sound strange, but I think I was born to realize ASI,” Son said last year.

That may go some way to explain what has been an aggressive drive over the past few years — but especially the last two — to put SoftBank at the center of the AI story.

In 2016, SoftBank acquired chip designer Arm in a deal worth about $32 billion at the time. Today, Arm is valued at more than $145 billion. While Arm’s blueprints form the basis of the chips in nearly all the world’s smartphones, the company is now positioning itself as a key player in AI infrastructure: Arm-based chips are part of the Nvidia systems that go into data centers.

In March, SoftBank also announced plans to acquire another chip designer, Ampere Computing, for $6.5 billion.

ChatGPT maker OpenAI is another marquee investment for SoftBank, with the Japanese giant saying recently that planned investments in the company will reach about 4.8 trillion Japanese yen ($32.7 billion).

SoftBank has also invested in a number of other companies related to AI across its portfolio.

“SoftBank’s AI strategy is comprehensive, spanning the entire AI stack from foundational semiconductors, software, infrastructure, and robotics to cutting-edge cloud services and end applications across critical verticals such as enterprise, education, health, and autonomous systems,” Neil Shah, co-founder at Counterpoint Research, told CNBC.

“Mr. Son’s vision is to cohesively connect and deeply integrate these components, thereby establishing a powerful AI ecosystem designed to maximize long-term value for our shareholders.” 

[Chart: SoftBank’s stock performance since 2017, the year its first Vision Fund was founded.]

There is a common theme behind SoftBank’s investments in AI companies that comes directly from Son: these firms should be using advanced intelligence to become more competitive and successful, to make their products better and their customers happier, a person familiar with the company told CNBC. The person could only comment anonymously because of the sensitivity of the matter.

It started with brain computers and robots

When SoftBank launched its “Next 30-Year Vision” in 2010, Son spoke about “brain computers” during a presentation, describing them as systems that could eventually learn and program themselves.

And then came robots. Major tech figures like Nvidia CEO Jensen Huang and Tesla boss Elon Musk are now talking about robotics as a key application of AI — but Son was thinking about this more than a decade ago.

In 2012, SoftBank took a majority stake in a French company called Aldebaran. Two years later, the two companies launched a humanoid robot called Pepper, which they billed as “the world’s first personal robot that can read emotions.”

Later, Son said: “In 30 years, I hope robots will become one of the core businesses in generating profits for the SoftBank group.”

SoftBank’s bet on Pepper ultimately flopped. The company slashed jobs at its robotics unit and stopped producing Pepper in 2020, and in 2022, German firm United Robotics Group agreed to acquire Aldebaran from SoftBank.

But Son’s very early interest in robots underscored his curiosity about the AI applications of the future.

“He was in very early and he has been thinking about this obsessively for a long time,” said Sama, the author of “The Money Trap.”

In the background, Son was cooking up something bigger: a tech fund that would make waves in the investing world. He founded the Vision Fund in 2017 with a massive $100 billion in deployable capital.

SoftBank aggressively invested in companies across the world, with some of its biggest bets on ride-hailing players like Uber and Chinese firm Didi.

But investments in Chinese technology companies and some bad bets on firms like WeWork soured sentiment for the Vision Fund as it racked up billions of dollars of losses by 2023.

Vision but bad timing

The market questioned some of Son’s investments in companies like Uber and Didi, which were burning through cash at the time and had unclear unit economics.

But even those investments spoke to Son’s AI view, according to a former partner at the SoftBank Vision Fund.

“His thought back then was the first advent of AI would be self-driving cars,” the source told CNBC.

Again, this could be seen as a case of being too early. Uber created a driverless car unit only to sell it off; instead, the company has partnered with other self-driving car firms to bring their vehicles onto the Uber platform. Even now, driverless cars are not widespread on roads, though commercial services like Waymo’s are available.

SoftBank still has investments in driverless car companies, such as British startup Wayve.

Timing clearly wasn’t on Son’s side. After record losses at the Vision Fund in 2022, Son declared SoftBank would go into “defense” mode, significantly reducing investments and becoming more prudent. This was when companies like OpenAI were beginning to gain steam, though still before the launch of ChatGPT that would put the company on the map.

“When those companies came to head in 2021, 2022, Masa would have been in a perfect place but he had used all his ammunition on other companies,” the former Vision Fund exec said.

“When they came to age in 21, 22, the Vision Fund had invested in five or six hundred different companies and he was not in a position to invest in AI and he missed that.”

Son himself said this year that SoftBank wanted to invest in OpenAI as early as 2019, but it was Microsoft that ended up becoming the key investor. Fast forward to 2025, and the Vision Funds — there are now two — have a portfolio stacked full of AI-focused companies.

But that period was tough for investors across the board. The Covid-19 pandemic, booming inflation and rising rates hit both public and private markets after years of loose monetary policy and a tech bull run.

SoftBank didn’t see that time as a missed opportunity to invest in AI, a person familiar with the company said.

Instead, the company is of the view that it is still very early in the AI investing cycle, the source added.

Risk and reward

AI technology is fast-moving, from the chips that run the software to the models that underpin popular applications.

Tech giants in the U.S. and China are battling it out to produce ever more advanced AI models with the aim of reaching artificial general intelligence (AGI) — a term with different definitions depending on whom you ask, but one that broadly refers to AI that is smarter than humans. With billions of dollars of investment going into the technology, the risk is high, and the rewards could be even higher.

But disruption can come out of nowhere.

This year, Chinese firm DeepSeek made waves after releasing a so-called reasoning model that appeared to have been developed more cheaply than its U.S. rivals’. The fact that a Chinese company managed the feat, despite U.S. export restrictions on advanced technology, rocked global financial markets that had been betting the U.S. held an unassailable AI lead.

While markets have since recovered, the potential of surprise advances in technology at such an early stage in AI remains a big risk for the likes of SoftBank.

“As with most technology investments the key challenge is to invest in the winning technologies. Many of the investments SoftBank has made are in the current leaders but AI is still in its relative infancy so other challengers could still rear up from nowhere,” Dan Baker, senior equity analyst at Morningstar, told CNBC.

Still, Son has made it clear he wants to set SoftBank up with DNA that will see it survive and thrive for 300 years, according to the company’s website.

That may go some way to explain the big risks that Son takes, and his conviction when it comes to particular themes and companies — and the valuations he’s willing to pay.

“He (Son) made some mistakes, but directionally he is going in the same direction, which is — he wants to be sure that he is a real player in AI and he is making it happen,” the former Vision Fund exec said.




ARMY INTELLIGENCE




by Holly Comanse

Artificial intelligence (AI) may be trending, but it’s nothing new. Alan Turing, the English mathematician often called the “father of theoretical computer science,” formalized the concept of the algorithm for solving mathematical problems and devised a test of machine intelligence. Decades later, the first AI chatbot, called ELIZA, was released in 1966. However, it was the generative pre-trained transformer (GPT) model, first introduced in 2018, that provided the foundation for modern AI chatbots.

Both AI and chatbots continue to evolve with many unknown variables related to accuracy and ethical concerns, such as privacy and bias. Job security and the environmental impact of AI are other points of contention. While the unknown may be distressing, it’s also an opportunity to adjust and adapt to an evolving era.

The 1962 television cartoon “The Jetsons” presented the concept of a futuristic family in the year 2062, with high-tech characters that included Rosie the Robot, the family’s maid, and an intelligent robotic supervisor named Uniblab. Such a prediction isn’t much of a stretch anymore. Robotic vacuums, which can autonomously clean floors, and other personal AI assistant devices are now commonplace in the American household. A recent report valued the global smart home market at $84.5 billion in 2024 and predicted it would reach $116.4 billion by 2029.

ChatGPT, released in 2022, is a widely used AI chat application with some 400 million active users. It can browse the internet, allowing for more up-to-date results, but it is geared toward general conversation rather than industry-specific information. The popularity and limitations of ChatGPT led some organizations to develop their own AI chatbots with the ability to reflect current events, protect sensitive information and deliver search results for company-specific topics.

One of those organizations is the U.S. Army, which now has an Army-specific chatbot known as CamoGPT. Currently boasting 75,000 users, CamoGPT started development in the fall of 2023 and was first deployed in the spring of 2024. Live data is important for the Army and other organizations that implement their own AI chat applications, but CamoGPT does not currently have access to the internet because it is still in a prototype phase; connecting it to the internet is a goal, as is accurately responding to questions that involve current statistics and high-stakes information. What’s more, CamoGPT can process classified data on SIPRNet and unclassified information on NIPRNet.

THE MORE THE MERRIER
Large language models (LLMs) are the AI systems underlying modern chatbots; they can understand and generate human language based on inputs. LLMs undergo extensive training, require copious amounts of data and can be tedious to create, yet they can process and respond to information much as a human would. Initially, the information fed to the model must be selected and input manually by human beings until a pattern is established, at which point the computer can take over. Keeping facts up to date can be a daunting task given the breadth of data from around the world that AI is expected to process.

Aidan Doyle, a data engineer at the Army Artificial Intelligence Integration Center (AI2C), works on a team of three active-duty service members, including another data engineer and a data analyst, as well as four contracted software developers and one contracted technical team lead. “It’s a small team, roles are fluid, [and] everyone contributes code to Camo[GPT],” Doyle said.

Doyle’s team is working to transition CamoGPT into a program of record and put more focus into developing an Army-specific LLM. “An Army-specific LLM would perform much better at recognizing Army acronyms and providing recommendations founded in Army doctrine,” Doyle said. “Our team does not train LLMs; we simply host published, open-source models that have been trained by companies like Meta, Google and Mistral.”

The process of training LLMs begins with pre-training: showing the model as many examples of natural language as possible from across the internet, everything from greetings to colloquialisms, so it can mimic human conversation. Sometimes supervised learning on specific information is necessary during the fine-tuning step. Then the model generates different answers to questions, and humans evaluate and annotate the responses, flagging problems that arise; once preferred responses are identified, developers adjust the model accordingly. This post-training step is called reinforcement learning from human feedback (RLHF), or alignment. Finally, the model generates both the questions and the answers itself in the self-play step. When the model is ready, it is deployed.
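For readers who want a concrete picture, the sketch below maps those stages onto a few lines of Python. It is purely illustrative: the function names, the toy dictionary standing in for a model, and the stand-in "training" logic are invented for this article and bear no relation to actual CamoGPT or AI2C code, which runs on large-scale machine learning frameworks.

```python
# Toy walk-through of the LLM training stages described above.
# Everything here is a placeholder; real training optimizes billions
# of parameters over enormous datasets.

def pretrain(model, corpus):
    # Stage 1: expose the model to broad examples of natural language
    # (a stand-in for next-token prediction over internet-scale text).
    model["examples"].extend(corpus)
    return model

def fine_tune(model, labeled_pairs):
    # Stage 2: supervised fine-tuning on curated, domain-specific
    # question/answer pairs input by humans.
    model["qa"].update(labeled_pairs)
    return model

def align_with_human_feedback(model, prompts, rate_fn):
    # Stage 3 (RLHF/alignment): the model proposes candidate answers,
    # humans rate them, and the preferred response is kept.
    for prompt in prompts:
        candidates = [f"{prompt} -> candidate answer {i}" for i in range(3)]
        model["qa"][prompt] = max(candidates, key=rate_fn)
    return model

def self_play(model, rounds):
    # Stage 4: the model generates both questions and answers itself.
    for i in range(rounds):
        model["qa"][f"self-generated question {i}"] = f"self-generated answer {i}"
    return model

if __name__ == "__main__":
    model = {"examples": [], "qa": {}}
    model = pretrain(model, ["Hello!", "How are you?", "See you later."])
    model = fine_tune(model, {"What is AL&T?": "Acquisition, logistics and technology."})
    model = align_with_human_feedback(model, ["Define readiness"], rate_fn=len)
    model = self_play(model, rounds=2)
    # Deployment: the "trained" toy model is ready to answer queries.
    print(f"Deployed toy model with {len(model['qa'])} learned responses.")
```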

A LITTLE TOO CREATIVE
The use of AI in creative fields faces significant challenges, such as the potential for plagiarism and inaccuracy. Artists spend a lot of time creating work that AI can easily duplicate without giving them any credit, and AI can also repackage copyrighted material. It can be tough to track down the original source of something that was generated with AI.

AI can—and sometimes does—introduce inaccuracies. When AI fabricates information unintentionally, it is called a hallucination: the model makes a connection between patterns it recognizes and passes it off as truth in a different set of circumstances. Facts presented by AI should always be verified, and tools such as CamoGPT often come with a disclaimer that hallucinations are possible. Journalists and content creators should be cautious with AI use, as they run the risk of spreading misinformation.

Images and videos can also be intentionally manipulated. For example, AI-generated content on social media can be posted for shock value. Sometimes it’s easy to spot when something has been created by AI, and you can take it with a grain of salt. In other cases, social media trends can go viral before they are vetted.

For these reasons, the Army decided to implement CamoGPT. Not only can it currently process classified information securely, but ongoing development also aims to minimize errors in its responses.

CONCLUSION
It’s becoming clear that analog is on the way out. Even established search engines like Google have started to prioritize AI-generated summaries in their results. Technology like AI, LLMs and other chatbots can save time and automate tedious tasks, increasing productivity and efficiency. CamoGPT is still evolving, and the team at AI2C is working hard to improve its accuracy and abilities. Other AI systems within the Army are still being developed, and the potential is limitless. While we may not be living in the future that the creators of “The Jetsons” predicted, we’re getting closer. In another 37 years, when 2062 rolls around, we may all be using flying cars—and those vehicles just might drive themselves with the help of AI.

For more information, go to https://www.camogpt.army.mil/camogpt.

HOLLY COMANSE provides contract support to the U.S. Army Acquisition Support Center from Honolulu, Hawaii, as a writer and editor for Army AL&T magazine and TMGL, LLC. She previously served as a content moderator and data specialist training artificial intelligence for a news app. She holds a B.A. in journalism from the University of Nevada, Reno.






Big tech is offering AI tools to California students. Will it save jobs?



By Adam Echelman, CalMatters

""
Students work in the library at San Bernardino Valley College on May 30, 2023. California education leaders are striking deals with tech companies to provide students with opportunities to learn AI. Photo by Lauren Justice for CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.

As artificial intelligence replaces entry-level jobs, California’s universities and community colleges are offering a glimmer of hope for students: free AI training that will teach them to master the new technology. 

“You’re seeing in certain coding spaces significant declines in hiring for obvious reasons,” Gov. Gavin Newsom said Thursday during a press conference from the seventh floor of Google’s San Francisco office.

Flanked by leadership from California’s higher education systems, he called attention to the recent layoffs at Microsoft, at Google’s parent company, Alphabet, and at Salesforce Tower, just a few blocks away, home to the tech company that is still the city’s largest private employer.

Now, some of those companies — including Google and Microsoft — will offer a suite of AI resources for free to California schools and universities. In return, the companies could gain access to millions of new users.

The state’s community colleges and its California State University campuses are “the backbone of our workforce and economic development,” Newsom said, just before education leaders and tech executives signed agreements on AI.

The new deals are the latest developments in a frenzy that began in November 2022, when OpenAI publicly released the free artificial intelligence tool ChatGPT, forcing schools to adapt.

The Los Angeles Unified School District implemented an AI chatbot last year, only to cancel it three months later without disclosing why. San Diego Unified teachers started using AI software that suggested what grades to give students, CalMatters reported. Some of the district’s board members were unaware that the district had purchased the software. 

Last month, the company that oversees Canvas, a learning management system popular in California schools and universities, said it would add “interactive conversations in a ChatGPT-like environment” to its software.

To combat potential AI-related cheating, many K-12 and college districts are using a new feature from the software company Turnitin to detect plagiarism, but a CalMatters investigation found that the software sometimes accused students who had done their own work.

Mixed signals?

These deals are sending mixed signals, said Stephanie Goldman, the president of the Faculty Association of California Community Colleges. “Districts were already spending lots of money on AI detection software. What do you do when it’s built into the software they’re using?”

Don Daves-Rougeaux, a senior adviser for the community college system, acknowledged the potential contradiction but said it’s part of a broader effort to keep up with the rapid pace of changes in AI. He said the community college system will frequently reevaluate the use of Turnitin along with all other AI tools. 

California’s community college system is responsible for the bulk of job training in the state, though it receives the least funding from the state per student. 

“Oftentimes when we are having these conversations, we are looked at as a smaller system,” said Daves-Rougeaux. The state’s 116 community colleges collectively educate roughly 2.1 million students.

In the deals announced Thursday, the community college system will partner with Google, Microsoft, Adobe and IBM to roll out additional AI training for teachers. Daves-Rougeaux said the system has also signed deals that will allow students to use exclusive versions of Gemini, Google’s counterpart to ChatGPT, and NotebookLM, Google’s AI research tool. He said these tools will save community colleges “hundreds of millions of dollars,” though he could not provide an exact figure.

“It’s a tough situation for faculty,” said Goldman. “AI is super important but it has come up time and time again: How do you use AI in the classroom while still ensuring that students, who are still developing critical thinking skills, aren’t just using it as a crutch?”

One concern is that faculty could lose control over how AI is used in their classrooms, she added.

The K-12 system and the California State University system are forming their own tech deals. Amy Bentley-Smith, a spokesperson for the Cal State system, said it is working on its own AI programs with Google, Microsoft, Adobe and IBM, as well as Amazon Web Services, Intel, LinkedIn, OpenAI and others.

Angela Musallam, a spokesperson for the state government operations agency, said California high schools are part of the deal with Adobe, which aims to promote “AI literacy,” the idea that students and teachers should have basic skills to detect and use artificial intelligence.

Much as in the community college system, which is governed by local districts, individual K-12 districts would need to approve any deal, Musallam said.

Will deals make a difference to students, teachers?

Experts say it’s too early to tell how effective AI training will actually be.

Justin Reich, an associate professor at MIT, said a similar frenzy took place 20 years ago when teachers tried to teach computer literacy. “We do not know what AI literacy is, how to use it, and how to teach with it. And we probably won’t for many years,” Reich said. 

The state’s new deals with Google, Microsoft, Adobe and IBM allow these tech companies to recruit new users — a benefit for the companies — but the actual lessons aren’t time-tested, he said. 

“Tech companies say: ‘These tools can save teachers time,’ but the track record is really bad,” said Reich. “You cannot ask schools to do more right now. They are maxed out.”

Erin Mote, the CEO of an education nonprofit called InnovateEDU, said she agrees that state and education leaders need to ask critical questions about the efficacy of the tools that tech companies offer but that schools still have an imperative to act. 

“There are a lot of rungs on the career ladder that are disappearing,” she said. “The biggest mistake we could make as educators is to wait and pause.”

Last year, the California Community Colleges Chancellor’s Office signed an agreement with Nvidia, a technology infrastructure company, to offer AI training similar to the kinds of lessons that Google, Microsoft, Adobe and IBM will deliver.

Melissa Villarin, a spokesperson for the chancellor’s office, said the state won’t share data about how the NVIDIA program is going because the cohort of teachers involved is still too small. 

This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.




Warren County Schools hosts AI Workshop



KENTUCKY — On this week’s program, we’re keeping you up to date on happenings within Kentucky’s government, which includes ongoing work this summer with legislative committees and special task forces in Frankfort.

During this “In Focus Kentucky” segment, reporter Aaron Dickens shared how leaders in Warren County Public Schools are helping educators bring their new computer science knowledge to the front of classrooms.

Also in this segment, we shared details about the U.S. Department of Energy selecting the Paducah Gaseous Diffusion Plant site in Paducah, Kentucky, as one of four sites for the development of artificial intelligence data centers and associated energy infrastructure. The initiative is reportedly part of the Trump administration’s plan to accelerate AI development by leveraging federal land assets to establish high-performance computing facilities and reliable energy sources for the burgeoning AI industry.

You can watch the full “In Focus Kentucky” segment in the player above.



