Tools & Platforms

The AI vibe shift is upon us

A version of this story appeared in CNN Business’ Nightcap newsletter. To get it in your inbox, sign up for free here.

Rather suddenly, there’s been a vibe shift around artificial intelligence, the tech that’s hypnotized Wall Street and inspired cultish devotion across Silicon Valley over the past three years.

And while it’s too soon to declare August 2025 the start of the AI winter, or the AI correction, or the AI bubble bursting, or whatever slowdown metaphor you prefer, it is undeniable that a series of industry stumbles is making investors, businesses and customers do a double-take.

Among them:


  • Meta, which was recently shelling out $100 million signing bonuses for AI talent, has instituted a hiring freeze and is reportedly looking at downsizing its AI division.

  • Sam Altman, the CEO of OpenAI and the industry’s biggest hype man, is floating the word “bubble” in media interviews.

  • GPT-5, billed by OpenAI as a PhD-level game-changer, is a flop.

  • CoreWeave, a cloud computing company backed by Nvidia, has shed nearly 40% of its value in just over a week.

  • Researchers at MIT published a report showing that 95% of the generative AI programs launched by companies failed to do the main thing they were intended for — ginning up more revenue.

  • Anthropic and OpenAI have struck deals to give their products to the US government for next to nothing — even as they are burning through cash and lack demonstrable paths to profitability.

All of that has sent traders rushing to buy “disaster puts” — options that act as a kind of insurance for when the market drops — in case we’re about to relive the late-90s dot-com bust. Per Bloomberg, investors aren’t just preparing for a pullback, they’re bracing for a nosedive.
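The "insurance" mechanics of a put reduce to simple arithmetic: the buyer pays a premium up front and profits only if the underlying falls below the strike price. A minimal sketch with illustrative numbers (not a pricing model):

```python
def put_payoff(strike: float, spot: float, premium: float) -> float:
    """Net payoff of a put at expiry: intrinsic value minus the premium paid."""
    return max(strike - spot, 0.0) - premium

# A put struck at 100, bought for a 2-point premium:
print(put_payoff(100, 105, 2))  # market holds up: -2.0, only the premium is lost
print(put_payoff(100, 70, 2))   # market nosedives 30%: 28.0, the "insurance" pays out
```

That asymmetry is why heavy demand for deep out-of-the-money puts signals traders bracing for a crash rather than a routine dip.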

“I suspect this will lead to a larger correction,” Mike O’Rourke, chief market strategist at JonesTrading, told me, noting that Meta dangling NFL-like compensation packages to attract AI engineers was “a sign the spending was going over the top.”

The tech stocks that have been propping up the entire market, including Nvidia (NVDA), Microsoft (MSFT) and Palantir (PLTR), tumbled this week. (Of course, Wall Street is weighing a lot more than just some bad headlines for the tech sector. There are also tariffs, mixed retail earnings and, not least, the president of the United States’ campaign to install loyalists at the Federal Reserve. That central bank drama is focusing even more attention on Friday’s speech by Fed Chair Jay Powell, who even in precedented times can move markets with a single furrow of his brow. You’ll be able to hear a pin drop across Wall Street as investors tune in at 10 a.m. ET.)

For some investors, the tech pullback is “just a pause that may refresh as investors retrench and rethink how they want to position their tech dollars,” Rob Haworth, senior investment strategy director at US Bank Asset Management Group, told my colleague John Towfighi this week.

Maybe. But the ups and downs of the market are just one measure of AI’s impact, and even some of AI’s biggest critics say the downfall won’t happen overnight.

“The bubble bursting was never going to be one event, but a series of sentiment shifts against technology that has never proven its worth outside of specious hype,” Ed Zitron, a tech writer and host of the podcast Better Offline, told me. “In any case, it’s been three years, and at some point there had to be some sort of proof that any of this was worth it… The narrative is spiraling out of control, with the only way to fix it being to show actual returns, which none of these companies have.”






Google Cloud CEO Says Tech Giant Has ‘Made Billions’ on AI



Google Cloud’s chief executive has reportedly outlined how the company is generating revenue through AI services.

“We’ve made billions using AI already,” Thomas Kurian said Tuesday (Sept. 9) at the Goldman Sachs Communacopia and Technology Conference in San Francisco, CNBC reported.

“Our backlog is now at $106 billion — it is growing faster than our revenue. More than 50% of it will convert to revenue over the next two years,” Kurian said.
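Taken at face value, Kurian's figures imply a concrete floor on near-term cloud revenue. A back-of-envelope calculation, using the "more than 50%" conversion rate he quoted as a lower bound:

```python
backlog = 106e9          # $106 billion backlog, per Kurian
conversion_floor = 0.50  # "more than 50%" converts to revenue within two years
implied_revenue = backlog * conversion_floor
print(f"${implied_revenue / 1e9:.0f}B")  # at least $53B over the next two years
```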

Google reported quarterly revenue of $13.62 billion for its cloud computing unit, up 32% from a year earlier. The company’s cloud business trails those of Microsoft and Amazon, the report noted, but is growing faster than either.

In some cases, the revenue comes from people paying by consumption, such as enterprise customers who purchase artificial intelligence infrastructure. Others pay for cloud services through subscriptions.

“You pay per user per monthly fee — for example, agents or Workspace,” said Kurian, referring to the company’s Gemini products and Google Workspace productivity suite, which come with a number of subscription tiers.

Kurian told the conference that upselling is another important part of Google Cloud’s strategy.

“We also upsell people as they use more of it from one version to another because we have higher quality models and higher-priced tiers,” he said, also noting that Google is capturing new customers more quickly.

“We’ve seen 28% sequential quarter-over-quarter growth in new customer wins in the first half of the year,” said Kurian, with nearly two-thirds of customers already using Google Cloud’s AI tools.

In other Google Cloud news, the company’s head of Web3 strategy said recently that Google’s Layer 1 blockchain will provide a neutral infrastructure layer for use by financial institutions.

In a post on LinkedIn, Rich Widmann wrote that the blockchain, Google Cloud Universal Ledger (GCUL), “brings together years of R&D at Google to provide financial institutions with a novel Layer 1 that is performant, credibly neutral and enables Python-based smart contracts.”

Linking to a March report by PYMNTS, Widmann added that CME Group employed GCUL as it explored tokenization and payments on its commodities exchange.

“Besides bringing to bear Google’s distribution, GCUL is a neutral infrastructure layer,” Widmann wrote in his post. “Tether won’t use Circle’s blockchain — and Adyen probably won’t use Stripe’s blockchain. But any financial institution can build with GCUL.”

Widmann said Google Cloud will reveal additional technical details about GCUL within months.




AI “Can’t Draw a Damn Floor Plan With Any Degree of Coherence” – Common Edge



Recently I began interviewing people for a piece I’m writing about “Artificial Intelligence and the Future of Architecture,” a ludicrously broad topic that will at some point require me to home in on a particular aspect of this rapidly changing phenomenon. Before undertaking that process, I spoke with some experts, starting with Phil Bernstein, an architect, educator, and longtime technologist. Bernstein is deputy dean and professor at the Yale School of Architecture, where he teaches courses in professional practice, project delivery, and technology. He previously served as a vice president at Autodesk, where he was responsible for setting the company’s AEC vision and strategy for technology. He writes extensively on issues of architectural practice and technology, and his books include Architecture | Design | Data — Practice Competency in the Era of Computation (Birkhauser, 2018) and Machine Learning: Architecture in the Era of Artificial Intelligence (2nd ed., RIBA, 2025). Our short talk covered a lot of ground: the integration of AI into schools, its obvious shortcomings, and where AI positions the profession.

PB: Phil Bernstein
MCP: Martin C. Pedersen

MCP:

You’re actively involved in the education of architects, all of them digital natives. How is AI being taught and integrated into the curriculum?

PB:

I was just watching a video with a chart that showed how long it took different technologies to get to 100 million users: the telephone, Facebook, and DeepSeek. It was 100 years for the phone, four years for Facebook, two months for DeepSeek. Things are moving quickly, almost too quickly, which means you don’t have a lot of time to plan and test pedagogy.

We are trying to do three things here. One, make sure that students understand the philosophical, legal, and disciplinary implications of using these kinds of technologies. I’ll be giving a talk to our incoming students as part of their orientation about the relationship between generative technology, architectural intellectual property, precedent, and academic integrity. And why you’re here to learn: not how to teach algorithms to do things, but to do them yourself. That’s one dimension. 

The second dimension is, we’re big believers in making as much technology as we can support and afford available to the students. So we’ve been working with the central campus to provide access to larger platforms, and to make things as available and understandable as we possibly can.

Thirdly, in the classroom, individual studio instructors are taking their own stance on how they want to see the tools used. We taught a studio last year where the students tried to delegate a lot of their design responsibility to algorithms, just to see how it went, right? 

PB:

Control. You lose a lot of design autonomy when you delegate to an algorithm. We’ve also been teaching a class called “Scales of Intelligence,” which tries to look at this problem from a theory, history, and technological evolution perspective, delving into the implications for practice and design. So it’s a mixed bag of stuff, very much a moving target, because the technology evolves literally during the course of a semester. 

MCP:

I am a luddite, and even I can see it improve in real time.

PB:

It’s getting more interesting, minute to minute, very shifting ground. I was on the Yale Provost’s AI Task Force, which was the faculty working group formed a year ago that tried to figure out what we’re doing as a university. Everybody was in the same boat, it’s just some of the boats were tiny, paper boats floating in the bathtub, and some of them were battleships—like the medical school, with more than 50 AI pilots. We’re trying to keep up with that. I don’t know how good a job we’re doing now. 

 

MCP:

What’s your sense in talking to people in the architecture world? How are they incorporating AI into their firms?

PB:

It’s difficult to generalize, because there are a lot of variables: your willingness to experiment, a firm’s internal capabilities, the availability of data, and degree of sophistication. I’ve been arguing that because this technology is expensive and requires a lot of data and investment to figure it out, the real innovation will happen in the big firms. 

Everybody’s creating marketing collateral, generating renderings, all that stuff. The diffusion models and large language models, the two things that are widely available—everybody is screwing around with that. The question is, where’s the innovation? And it’s a little early to tell.

The other thing you’ve got to remember is the basic principle of technology adoption in the architectural world, which is: When you figure out a technological advantage, you don’t broadcast it; you keep your advantage to yourself for as long as you can, until somebody else catches up. A recent example: It’s not like there were firms out there helping each other adopt building information modeling.

MCP:

I guess it’s impossible to project where all this goes in three or five years?

PB:

I don’t know. The reigning thesis—I’m simplifying this—is that you can build knowledge from which you can reason inferentially by memorizing all the data in the world and breaking it into a giant probability matrix. I don’t happen to think that thesis is correct. It’s the Connectionists vs. the Symbolic Logic people. I believe that you’re going to need both of these things. But all the money right now is down on the Connectionists, the Sam Altman theory of the world. Some of these things are very useful, but they’re not 100% reliable. And in our world, as architects, reliability is kind of important.

MCP:

Again, we can’t predict the pace of this, but it’s going to fundamentally change the role of the architect. How do you see that evolving as these tools get more powerful?

PB:

Why do you say that? There’s a conclusion in your statement. 

MCP:

I guess, because I’ve talked to a few people. They seem to be using AI now for everything but design. You can do research much faster using AI. 

PB:

That’s true, but you better check it.

MCP:

I agree, but isn’t there inevitably a point when the tools become sophisticated enough where they can design buildings?

PB:

So, therefore … what? 

MCP:

Where does that leave human architects?

PB:

I don’t know that it’s inevitable that machines could design entire buildings well …

MCP:

It would seem to me that we would be moving toward that.

PB:

The essence of my argument is: there are many places where AI is very useful. Where it begins to collapse is when it’s operating in a multivalent environment, trying to integrate multiple streams of both data and logic.

MCP:

Which would be virtually any architecture project.

PB:

Exactly. Certain streams may become more optimized. For instance: If I were a structural engineer right now, I’d be worried, because structural engineering has very clear, robust means of representation, clear rules of measurement. The bulk of the work can be routinized. So they’re massively exposed. But these diffusion models right now can’t draw a damn floor plan with any degree of coherence. A floor plan is an abstraction of a much more complicated phenomenon. It’s going to be a while before these systems are able to do the most important things that architects do, which is make judgments, exercise experience, make tradeoffs, and take responsibility for what they do.

 

MCP:

Where do you fall on the AI-as-job-obliterator, AI-as-job-creator debate? 

PB:

For purposes of this discussion, let’s stipulate that artificial general intelligence that can do anything isn’t in the foreseeable future, because once that happens, the whole economic proposition of the world collapses. When that happens, we’re in a completely different world. And that won’t just be a problem for architects. So, if that’s not going to happen any time soon, then you have two sets of questions. Question one: In the near term, does AI provide productivity gains in a way that reduces the need for staff in an architect’s office?

MCP:

That may be the question I’m asking …

PB:

OK, in the near term, maybe we won’t need as many marketing people. You won’t need any rendering people, although you probably didn’t have those in the first place. But let me give you an example from an adjacent discipline that’s come up recently. It turns out that one thing that these AIs are supposed to be really good at is writing computer code. Because computer code is highly rational. You can test it and see if it works. There’s boatloads of it on the internet as training data in well organized locations, very consistently accessible—which is not true of architectural data, by the way. 

It turns out that many software engineering companies that had decided to replace their programmers with AIs are now hiring them back because the code-generating AIs are not reliable enough to write good code. And then you intersect that with the problem that was described in a presentation I saw a couple of months ago by our director of undergraduate studies in computer science, [Theodore Kim], who said that so many students are using AI to generate code that they don’t understand how to debug the code once it’s written. He got a call from the head of software engineering for EA, who said, “I can’t hire your graduates because they don’t know how to debug.” And if it’s true here, I guarantee you, it’s true everywhere across the country. So you have a skill loss.

Then there’s what I would call the issue of the luddites. The [original] Luddites didn’t object to the weaving machines, per se, but they objected to the fact that while they were waiting for a job in the loom factory, they didn’t have any work. Because there was this gap between when humans get replaced by technology and when there are new jobs for them doing other things: you lost your job plowing that cornfield with a horse because there’s a tractor now, but you didn’t get a job in the tractor factory, somebody else did. These are all issues that have to be thought about.

MCP:

It seems like a lot of architects are dismissive because of what AI can’t do now, but that seems silly to me, because I’m seeing AI enabling things like transcriptions now. 

PB:

But transcriptions are so easy. I do not disagree that, over time, these algorithms will get more capable doing some of the things that architects do. But if we get to the point where they’re good enough to literally replace architects, we’re going to be facing a much larger social problem. 

There’s also a market problem here that you need to be aware of. These things are fantastically expensive to build, and architects are not good technology customers. We’re cheap and steal a lot of software—not good customers for multibillion-dollar investments. Maybe, over time, someone builds something that’s sophisticated enough, multimodal enough, that can operate with language, video, three-dimensional reasoning, analytical models, cost estimates, all those things that architects need. But I’m not concerned that that’s going to happen in the foreseeable future. It’s too hard a problem, unless somebody comes up with a way to train these things on much skinnier data sets. 

That’s the other problem: all of our data is disaggregated, spread all over the place. Nobody wants to share it, because it involves risk. When the med school has 33,000 patients enrolled in a trial, they’re getting lots of highly curated, accurate data that they can use to train their AIs. Where’s our accurate data? I can take every Revit model that Skidmore, Owings & Merrill has ever produced in the history of their firm, and it’s not nearly enough data to train an AI. Not nearly enough.

MCP:

And what do you think AI does to the traditional business model of architecture, which has been under some pressure even before this?

PB:

That’s always been under pressure. It depends on what we as a profession decide. I’ve written extensively about this. We have two options. The first option is a race to the bottom: Who can use AI to cut their fees as much as possible? Option number two, value: How do we use AI to do a better job and charge more money? That’s not a technology question, it’s a business strategy question. So if I’ve built an AI that is so good that I can promise a client that x is going to happen or y is going to happen, I should charge for that: “I’m absolutely positive that this building is going to produce 23% less carbon than it would have had I not designed it. Here’s a third party that can validate this. Write me a check.” 

Featured image courtesy of Easy-Peasy.AI. 




Need A Job? ChatGPT Becomes LinkedIn Meets AI Tutor And Recruiter



ChatGPT is stepping into a job market that has been struggling to find its footing. Millions of job seekers complain that sending out resumes feels like shouting into a void, while employers admit they cannot easily separate real skills from inflated buzzwords.

Per Resume Genius, as of July 2025, there were 7.7 million unemployed individuals competing for 7 million job openings, marking the first time since 2021 that job seekers outnumbered available positions. At the same time, workers worry about being displaced by the very technologies reshaping business, particularly artificial intelligence.

Into this environment comes OpenAI with a potentially disruptive concept: a jobs platform driven by ChatGPT. Reports from CNBC suggest the company is preparing to launch a hiring and certification ecosystem that could rival Microsoft’s LinkedIn. If successful, it could transform how the job market functions.

What ChatGPT May Bring to Hiring

Think of the vision as LinkedIn meets AI tutor meets recruiter—but powered by ChatGPT.

This isn’t a traditional job board. ChatGPT may integrate three core features.

First, skill certification: through ChatGPT’s study and learning modes, people could earn AI-validated micro-credentials, from AI fluency to prompt engineering, in days rather than years.

Second, AI-powered matching: instead of sifting through keyword-stuffed listings, ChatGPT might match users to roles based on demonstrated ability, not just past job titles.

Third, up-skilling: if candidates fall short of a requirement, the same AI could guide them through learning modules to get them up to speed.

Imagine logging into OpenAI’s platform and asking ChatGPT to show you open Web3 marketing roles. The AI identifies relevant openings and spots gaps in your skills—maybe in blockchain fundamentals, crypto ecosystems, or decentralized identity. It then suggests tailored training modules, and once completed, issues on-chain credentials that employers can verify. With these portable, tamper-proof certifications, ChatGPT may match you with hiring managers looking for precisely those skills—and might even facilitate scheduling interviews.
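The core of the flow described above (compare a candidate's skills against a role's requirements, surface the gaps, then recommend training) can be sketched in a few lines. Everything here is hypothetical: the function names, skill labels, and module catalog are illustrative, not an actual OpenAI API.

```python
def skill_gaps(candidate_skills, role_requirements):
    """Requirements the candidate has not yet demonstrated, in a stable order."""
    return sorted(set(role_requirements) - set(candidate_skills))

def recommend_modules(gaps, catalog):
    """Map each missing skill to a training module, skipping skills without one."""
    return [catalog[gap] for gap in gaps if gap in catalog]

role = ["blockchain fundamentals", "crypto ecosystems", "content marketing"]
candidate = ["content marketing", "seo"]
catalog = {
    "blockchain fundamentals": "Intro to Blockchain",
    "crypto ecosystems": "Crypto Ecosystems 101",
}

gaps = skill_gaps(candidate, role)
print(gaps)                              # ['blockchain fundamentals', 'crypto ecosystems']
print(recommend_modules(gaps, catalog))  # ['Intro to Blockchain', 'Crypto Ecosystems 101']
```

The hard parts the platform would actually have to solve — verifying that a skill is genuinely demonstrated and issuing a credential employers trust — are exactly what this toy comparison leaves out.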

Pretty awesome promise for job seekers.

Why Timing May Be Critical for ChatGPT

Today’s labor market is fractured.

Applications stack up unanswered. Employers struggle to gauge real capability. Career shifters and those laid off can’t easily prove what they know. Meanwhile, workers feel vulnerable amid fast-moving AI disruption. OpenAI’s approach—with training and certification built into job matching—shifts ChatGPT from being seen as a threat to becoming a tool for empowerment.

The ambition is striking.

OpenAI is embedding certification directly into ChatGPT, allowing anyone to prepare and test within the app’s Study mode. The company has set a bold goal of certifying 10 million Americans by 2030, starting with launch partners like Walmart.

Walmart, the largest private employer in the world, announced it will provide the new no-cost OpenAI certification to its 2 million U.S. associates beginning next year as part of its up-skilling efforts. The program is designed to equip workers with essential AI skills and support their growth as technology becomes a larger part of daily work.

How ChatGPT Could Impact Job Seekers and Employers

For individuals, the implications could be life-changing. Many job seekers feel stuck without the right degree or with resumes filtered out by automated systems. Someone self-taught in generative AI may finally get credentials that employers trust. A mid-career worker facing layoffs could retrain quickly and prove new capabilities. International applicants might gain universally recognized credentials—all contributing to a fairer job market.

For employers, the promise is compelling. Hiring is expensive and uncertain. Resumes don’t always tell the truth, and turnover drains resources. If ChatGPT can certify skills and match candidates with greater precision, hiring could become faster, cheaper, and more confident—even introducing new pools of overlooked talent.

ChatGPT, Microsoft, and the Future of Work

The strategic dynamics are worth noting. Microsoft is OpenAI’s largest backer and also owns LinkedIn. On the one hand, OpenAI’s move may compete with its own investor’s platform. On the other, there may be collaboration—imagine ChatGPT-based certifications flowing into LinkedIn profiles. Regardless, wider AI literacy drives demand for Microsoft’s cloud and AI services, making this ecosystem too valuable to ignore.

Of course, there are risks.

Will employers accept AI-issued credentials on par with degrees? Could bias in the system reinforce inequalities? How will user data be protected? And will job seekers and employers be willing to adopt an entirely new platform? ChatGPT’s path to trust, adoption, and success depends not only on technology but on credibility.

Still, the upside is significant. ChatGPT could recast AI from a job destroyer to a job enabler. Combining credentials, tutoring, and matching in one seamless experience may position OpenAI not just as LinkedIn’s competitor, but as the architect of a new category: the AI-driven opportunity engine.

The Future for Jobs May Be ChatGPT

The traditional hiring playbook is outdated: degrees are slow to earn and quick to age, resumes are noisy, and job posts are overwhelming. Embedding seamless skills certification and learning into hiring could rewrite that playbook.

If OpenAI succeeds, opportunities may shift from “where did you go to school?” to “what can you do?”

That paradigm shift could open doors for millions and help employers access untapped talent. We’re witnessing the early outlines of what may become one of the most significant AI applications of the next decade.

ChatGPT may not just challenge LinkedIn—it may fundamentally redefine how skills, work, and opportunity connect in a digital age.


