
Big Tech Looks To AI Startups To Secure Talent



What is the Impact for Silicon Valley Innovation?

Acquihires have been a part of Silicon Valley for quite some time now: larger tech companies acquire startups mainly for their talent, as opposed to their products or technology. However, starting with former FTC Chair Lina Khan’s subpoenas probing big tech companies’ prior unreported small acquisitions, big tech has been effectively frozen out of the M&A market since early 2021. As new AI startups were formed, they couldn’t exit and had trouble raising follow-on rounds of capital.

Then, starting with Microsoft’s licensing deal with Inflection AI, we saw a new kind of “acquihire” designed to circumvent the regulatory shutdown. Whereas traditional acquihires returned capital to startup investors and founders, a new trend is now taking shape in Silicon Valley: major players in big tech have begun hiring away top AI talent in what the WSJ has termed the “reverse acquihire.”

These aren’t acquisitions of the startup or the entire team; instead, big tech poaches the founders and key AI researchers, sometimes packaged with a small-dollar license of the startup’s technology, and the deal returns no meaningful capital to investors. So, what happens to the remaining business? While this gives big tech an alternative route to the talent it needs, it also means that AI leaders and top talent are leaving their companies behind, creating what CNBC calls “zombie startups.”

The driving force on the big tech side is the immediate need for talent, combined with a way around regulatory hurdles. The WSJ says companies see this stage in AI development as a “once in a generation opportunity,” which means they need top talent quickly to capitalize on the moment. Additionally, as we have written about previously, acquihires provide an easier route to bring on talent without the regulatory and integration issues of a traditional acquisition. There is also a great financial incentive for the founders and researchers being lured away from startups, so it seems like a win-win situation. But what about the remaining business and the startup’s investors?

In the traditional Silicon Valley startup model, these startups
would be looking down a path to a major exit event, but instead,
they are losing those driving the company forward. Those who leave
are seeing the big payday, but not necessarily those who stay or
those who invested. CNBC cites tech investors and startup employees
as indicating that this trend “threatens to thwart innovation
as founders abandon their ambitious projects to work for the
biggest companies in the world.”

Could this significantly impact the traditional startup model
for Silicon Valley? If the trend continues, it very well could.
Future employees could see startups as too risky, or investors
could become more hesitant to put their money into a startup
thinking that the founders might leave. And while big tech
companies may secure the talent they need today, the long-term cost
could be a weakening of the very startup pipeline that has
historically driven disruptive breakthroughs.

For founders and researchers, the immediate financial reward is alluring, but their departure poses a risk to the remaining employees, the investors, and the broader innovation pipeline. Ultimately, it will be important to preserve the independent startup in Silicon Valley and to balance the short-term race for AI talent with the long-term need to sustain entrepreneurial ambition.

How do we incentivize founders to keep building? How do we incentivize venture funds to continue allocating capital to startups? It’s not by looking at one blockbuster IPO from Figma and declaring mission accomplished (a reference to Lina Khan’s recent case of schadenfreude), but rather by cutting off the regulatory handcuffs put in place by the recently deposed FTC chair and rearchitecting the capital markets to enable a functioning IPO market. That’s what Silicon Valley really needs…

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.





Duke University pilot project examining pros and cons of using artificial intelligence in college



DURHAM, N.C. — As generative artificial intelligence tools like ChatGPT have become increasingly prevalent in academic settings, faculty and students have been forced to adapt.

The debut of OpenAI’s ChatGPT in 2022 spread uncertainty across the higher education landscape. Many educators scrambled to create new guidelines to prevent academic dishonesty from becoming the norm in academia, while some emphasized the strengths of AI as a learning aid.

As part of a new pilot with OpenAI, all Duke undergraduate students, as well as staff, faculty, and students across the University’s professional schools, gained free, unlimited access to ChatGPT-4o beginning June 2. The University also announced DukeGPT, a University-managed AI interface that connects users to resources for learning and research and ensures “maximum privacy and robust data protection.”

Duke launched a new Provost’s Initiative to examine the opportunities and challenges AI brings to student life on May 23. The initiative will foster campus discourse on the use of AI tools and present recommendations in a report by the end of the fall 2025 semester.

The Chronicle spoke to faculty members and students to understand how generative AI is changing the classroom.


Embraced or banned

Although some professors are embracing AI as a learning aid, others have implemented blanket bans and expressed caution regarding the implications of AI on problem-solving and critical thinking.

David Carlson, associate professor of civil and environmental engineering, took a “lenient” approach to AI usage in the classroom. In his machine learning course, the primary learning objective is to utilize these tools to understand and analyze data.

Carlson permits his students to use generative AI as long as they are transparent about their purpose for using the technology.

“You take credit for all of (ChatGPT’s) mistakes, and you can use it to support whatever you do,” Carlson said.

He added that although AI tools are “not flawless,” they can help provide useful secondary explanations of lectures and readings.

Matthew Engelhard, assistant professor of biostatistics and bioinformatics, said he also adopted “a pretty hands-off approach” by encouraging the use of AI tools in his classroom.

“My approach is not to say you can’t use these different tools,” Engelhard said. “It’s actually to encourage it, but to make sure that you’re working with these tools interactively, such that you understand the content.”

Engelhard emphasized that the use of these tools should not prevent students from learning the fundamental principles “from the ground up.” Engelhard noted that students, under the pressure to perform, have incentives to rely on AI as a shortcut. However, he said using such tools might be “short-circuiting the learning process for yourself.” He likened generative AI tools to calculators, highlighting that relying on a calculator hinders one from learning how addition works.

Like Engelhard, Thomas Pfau, Alice Mary Baldwin distinguished professor of English, believes that delegating learning to generative AI means students may lose the ability to evaluate the process and validity of the information they receive.

“If you want to be a good athlete, you would surely not try to have someone else do the working out for you,” Pfau said.

Pfau recognized the role of generative AI in the STEM fields, but he believes that such technologies have no place in the humanities, where “questions of interpretation … are really at stake.” When students rely on AI to complete a sentence or finish an essay for them, they risk “losing (their) voice.” He added that AI use defeats the purpose of a university education, which is predicated on cultivating one’s personhood.

Henry Pickford, professor of German studies and philosophy, said that writing in the humanities serves the dual function of fostering “self-discovery” and “self-expression” for students. But with increased access to AI tools, Pickford believes students will treat writing as “discharging a duty” rather than working through intellectual challenges.

“(Students) don’t go through any kind of self-transformation in terms of what they believe or why they believe it,” Pickford said.

Additionally, the use of ChatGPT has broadened opportunities for plagiarism in his classes, leading him to adopt a stringent AI policy.

Faculty echoed similar concerns at an Aug. 4 Academic Council meeting, including Professor of History Jocelyn Olcott, who said that students who learn to use AI without personally exploring more “humanistic questions” risk being “replaced” by the technology in the future.

How faculty are adapting to generative AI

Many of the professors The Chronicle interviewed expressed difficulty in discerning whether students have used AI on standard assignments. Some are resorting to a range of alternative assessment methods to mitigate potential AI usage.

Carlson, who shared that he has trouble detecting student AI use in written or coding assignments, has introduced oral presentations to class projects, which he described as “very hard to fake.”

Pickford has also incorporated oral assignments into his class, including having students present arguments through spoken defense. He has also added in-class exams to lectures that previously relied solely on papers for grading.

“I have deemphasized the use of the kind of writing assignments that invite using ChatGPT because I don’t want to spend my time policing,” Pickford said.

However, he recognized that ChatGPT can prove useful in generating feedback throughout the writing process, such as when evaluating whether one’s outline is well-constructed.

A ‘tutor that’s next to you every single second’

Students noted that AI chatbots can serve as a supplemental tool to learning, but they also cautioned against over-relying on such technologies.

Junior Keshav Varadarajan said he uses ChatGPT to outline and structure his writing, as well as generate code and algorithms.

“It’s very helpful in that it can explain concepts that are filled with jargon in a way that you can understand very well,” Varadarajan said.

Varadarajan has found it difficult at times to internalize concepts when using ChatGPT because “you just go straight from the problem to the answer” without giving much thought to the problem. He acknowledged that while AI can provide shortcuts at times, students should ultimately bear the responsibility for learning and performing critical thinking tasks.

For junior Conrad Qu, ChatGPT is like a “tutor that’s next to you every single second.” He said that generative AI has improved his productivity and helped him better understand course materials.

Both Varadarajan and Qu agreed that AI chatbots come in handy during time crunches or when trying to complete tasks with little effort. However, they said they avoid using AI when it comes to content they are genuinely interested in exploring deeper.

“If it is something I care about, I will go back and really try to understand everything (and) relearn myself,” Qu said.

The future of generative AI in the classroom

As generative AI technologies continue evolving, faculty members have yet to reach consensus on AI’s role in higher education and whether its benefits for students outweigh the costs.

“To me, it’s very clear that it’s a net positive,” Carlson said. “Students are able to do more. Students are able to get support for things like debugging … It makes a lot of things like coding and writing less frustrating.”

Pfau is less optimistic about generative AI’s development, raising concerns that the next generation of high school graduates will come into the college classroom already too accustomed to chatbots. He added that many students find themselves at a “competitive disadvantage” when the majority of their peers are utilizing such tools.

Pfau placed the responsibility on students to decide whether the use of generative AI will contribute to their intellectual growth.

“My hope remains that students will have enough self-respect and enough curiosity about discovering who they are, what their gifts are, what their aptitudes are,” Pfau said. “… something we can only discover if we apply ourselves and not some AI system to the tasks that are given to us.”
___

This story was originally published by The Chronicle and distributed through a partnership with The Associated Press.


Copyright © 2025 by The Associated Press. All Rights Reserved.





Global cooperation in AI highlighted



Two humanoid robots from Unitree Robotics trade punches in a boxing match, drawing a large crowd of spectators at the World Smart Industry Expo 2025, which opened in Chongqing on Friday. ZHOU YI/CHINA NEWS SERVICE

President Xi Jinping has highlighted China’s commitment to engaging in extensive international cooperation on artificial intelligence with countries around the world, saying that AI should be an international public good that benefits humanity.

Xi made the remarks in a congratulatory message sent to the World Smart Industry Expo 2025, which opened in Chongqing on Friday.

He said in the message that AI technology is rapidly evolving, profoundly transforming human production and lifestyles, and reshaping the global industrial landscape.

China attaches great importance to AI development and governance and actively promotes the deep integration of AI technological innovation with industrial innovation to empower high-quality economic and social development, thereby helping to improve people’s lives, he added.

Xi expressed China’s willingness to strengthen international cooperation and coordination with other countries in development strategies, governance rules and technical standards to promote the healthy and vigorous development of the AI industry, and bring greater benefits to people in all countries.

The four-day expo, with the themes of “AI+” and “Intelligent Connected New Energy Vehicles”, is co-hosted by the governments of Chongqing and Tianjin.

With Singapore acting as the guest country of honor and Sichuan province as the guest province of honor, it features participation from over 600 leading domestic and international companies, showcasing more than 3,000 innovative products and technologies.

At the opening ceremony, investment agreements worth more than 200 billion yuan ($28 billion) were signed, covering sectors such as intelligent connected new energy vehicles, electronic information, advanced materials, smart equipment and intelligent manufacturing, and the low-altitude economy, according to Zheng Xiangdong, vice-mayor of Chongqing.

Antonio Yung, chief representative of the China Office of Sacramento, the capital of the US state of California, said that Xi’s message highlighted the significance of the expo, as the whole world is paying attention to AI development, and China in particular is one of the major developers in the sector.

The State Council, China’s Cabinet, issued a guideline on Aug 26 to implement the “AI Plus” initiative, promoting the extensive and in-depth integration of AI in various fields.

Cai Guangzhong, vice-president of Tencent, one of China’s top tech firms, said at the expo that Tencent has consistently responded actively to the national strategy, and has taken a long-term approach by increasing investment in technology to solidify the foundation of “AI Plus”.

“Tencent will continue to invest in AI research and development, leveraging its rich application ecosystem to comprehensively promote the presence of ‘useful AI’ closer to users and industries,” Cai said.

“This will enable everyone to become a ‘super individual’ empowered by AI, transform AI into new quality productive forces across various sectors, and allow every enterprise to become an AI company, achieving truly useful, accessible and beneficial AI for all,” he added.

Tan Kiat How, Singapore’s senior minister of state for digital development and information, said that he sees tremendous scope for Singapore and Chongqing to deepen practical collaboration in AI applications and smart urban solutions.





Fort Wayne leads nation in AI bootcamp applicants as local innovators showcase technology



FORT WAYNE, Ind. (WPTA) – Artificial intelligence is everywhere, and Fort Wayne is stepping into the national spotlight as a leader in both innovation and education.

On Friday, local AI experts gathered to share demonstrations of how the technology is already reshaping daily life.


From tools that help businesses to apps that make everyday life more accessible, innovators say Fort Wayne is uniquely positioned to benefit.

Jeremy Curry is an executive and co-founder of People Lead AI, and he says he knows AI firsthand.

Curry started to go blind at 18 years old, and he uses his own AI-powered tools to help navigate the world around him.

He says his experience is proof of how artificial intelligence can transform accessibility.

Curry’s message comes as Fort Wayne prepares to host the Mark Cuban AI Bootcamp this November, a program training high school students to better understand AI.

Angie Carel, founder of AI in Fort Wayne, says northeast Indiana is currently leading the nation in student applicants.

Carel says that while the momentum is strong, many people still have concerns about the rapid rise of AI.

She says that for Fort Wayne, the opportunity lies in embracing AI responsibly, preparing students, supporting businesses, and ensuring the technology works to improve lives rather than replace them.

Carel says the Mark Cuban AI Bootcamp starts on Nov. 1, and the application deadline is Sept. 30.


