Tools & Platforms

Data gaps and AI training hurdles threaten progress in VBC, report finds


Tools & Platforms

Flexential Hires Greg Ogle as CIO to Accelerate IT Transformation and AI Strategy

Data center veteran to drive AI integration and advance security across Flexential’s operations and customer-facing systems

DENVER, Sept. 10, 2025 /PRNewswire/ — Flexential, a leading provider of secure and flexible data center solutions, has appointed Greg Ogle as Chief Information Officer (CIO) to unify its internal technology systems, strengthen cybersecurity oversight, and lead the next phase of its digital and AI-enabled transformation.

Ogle brings more than 25 years of experience aligning enterprise IT with business goals, most recently as Vice President of Global IT Infrastructure and Cloud Operations at Equinix. In that role, he directed global networks, cloud platforms, and enterprise applications, and served a board-appointed term as the interim Chief Information Security Officer.

As Flexential’s CIO, Ogle will oversee enterprise systems, cloud architecture, data governance, and AI integration. He will also shape IT policy and investment strategy, support compliance efforts, and modernize the company’s digital foundation. His appointment comes as demand for hybrid infrastructure and AI-ready capacity accelerates; data center vacancy rates dipped to a low of 1.6% in primary North American markets earlier this year.

“Greg has a proven track record of leading IT in complex, high-stakes environments,” said Ryan Mallory, President & COO at Flexential. “He brings the operational discipline and forward-looking vision we need as we embed AI more deeply into our technology and processes. Greg’s leadership will ensure our systems are more intelligent, resilient, and efficient, enhancing both the employee experience and the way customers engage with our services.”

Ogle’s immediate focus will be on unifying Flexential’s technology stack, building a standardized architecture for digital services, and embedding AI across IT operations. His broader strategy includes streamlining platform portfolios and engineering processes, as well as reinforcing governance through centralized tools and policies.

“Flexential is at a pivotal moment, where the convergence of AI, cloud, and hybrid infrastructure is reshaping how enterprises operate,” said Greg Ogle, CIO at Flexential. “I’m excited to build on Flexential’s strong foundation to deliver technology that is smarter, more secure, and more connected. My focus will be on ensuring our digital backbone not only supports today’s needs but also provides the resiliency and trust our customers require as we prepare for the opportunities of tomorrow.”

Ogle’s appointment follows a series of recent leadership moves at Flexential aimed at supporting its long-term infrastructure strategy. In July, the company appointed Thomas Bailey as Vice President of Energy and Matthew Baumann as Vice President of Site Acquisition to expand development capacity and secure long-term power access. Flexential has also raised approximately $1 billion from GI Partners, GI Data Infrastructure, Hamilton Lane, and Morgan Stanley Infrastructure Partners in the last 12 months to support its continued expansion.

For more information on Flexential’s secure infrastructure and flexible IT solutions, visit www.flexential.com.

About Flexential
Flexential empowers the IT journey of the most complex businesses by offering customizable IT solutions designed for today’s demanding high-density computing requirements. With colocation, cloud, connectivity, data protection, and professional services, the FlexAnywhere® platform anchors our services in 40+ data centers across 18 highly connected markets on a scalable 100Gbps+ private network backbone. Flexential solutions are strategically engineered to meet the most stringent challenges in security, compliance, and resiliency. Experience the power of IT flexibility and how we enable digital transformation at www.flexential.com.

Media Contact
Alison Brooker
Corporate Marketing
[email protected]

Christian Rizzo
Gregory FCA for Flexential
[email protected]

SOURCE Flexential




Tools & Platforms

Google Cloud CEO Says Tech Giant Has ‘Made Billions’ on AI

Google Cloud’s chief executive has reportedly outlined how the company is generating revenue through AI services.

“We’ve made billions using AI already,” Thomas Kurian said Tuesday (Sept. 9) at the Goldman Sachs Communacopia and Technology Conference in San Francisco, CNBC reported.

“Our backlog is now at $106 billion — it is growing faster than our revenue. More than 50% of it will convert to revenue over the next two years,” Kurian said.

Google reported revenue of $13.62 billion for its cloud computing unit, up 32% over the previous year. The company’s cloud business trails those of Microsoft and Amazon, the report noted, but is growing faster than them.
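
For a rough sense of what those quoted figures imply, here is a minimal back-of-the-envelope sketch. The arithmetic is ours, not from the article or the report, and it assumes exactly 50% backlog conversion (Kurian said "more than 50%") and the stated 32% growth rate.

```python
# Illustrative arithmetic on the figures quoted above (assumptions, not reported results).

backlog_usd = 106e9            # "Our backlog is now at $106 billion"
conversion_share = 0.50        # "More than 50% of it will convert to revenue over the next two years"
quarterly_cloud_rev = 13.62e9  # reported cloud revenue for the quarter
yoy_growth = 0.32              # "up 32% over the previous year"

# Lower bound on backlog converting to revenue within two years.
min_converted = backlog_usd * conversion_share

# Cloud revenue in the same quarter a year earlier, implied by 32% growth.
prior_year_quarter = quarterly_cloud_rev / (1 + yoy_growth)

print(f"Backlog expected to convert within two years: at least ${min_converted / 1e9:.0f}B")
print(f"Implied prior-year quarterly cloud revenue: about ${prior_year_quarter / 1e9:.2f}B")
```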

In some cases, the revenue comes from people paying by consumption, such as enterprise customers who purchase artificial intelligence infrastructure. Others pay for cloud services through subscriptions.

“You pay per user per monthly fee — for example, agents or Workspace,” said Kurian, referring to the company’s Gemini products and Google Workspace productivity suite, which come with a number of subscription tiers.

Kurian told the conference that upselling is another important part of Google Cloud’s strategy.

“We also upsell people as they use more of it from one version to another because we have higher quality models and higher-priced tiers,” he said, also noting that Google is capturing new customers more quickly.

“We’ve seen 28% sequential quarter-over-quarter growth in new customer wins in the first half of the year,” said Kurian, with nearly two-thirds of customers already using Google Cloud’s AI tools.

In other Google Cloud news, the company’s head of Web3 strategy said recently that Google’s Layer 1 blockchain will provide a neutral infrastructure layer for use by financial institutions.

In a post on LinkedIn, Rich Widmann wrote that the blockchain, Google Cloud Universal Ledger (GCUL), “brings together years of R&D at Google to provide financial institutions with a novel Layer 1 that is performant, credibly neutral and enables Python-based smart contracts.”

Linking to a March report by PYMNTS, Widmann added that CME Group employed GCUL as it explored tokenization and payments on its commodities exchange.

“Besides bringing to bear Google’s distribution, GCUL is a neutral infrastructure layer,” Widmann wrote in his post. “Tether won’t use Circle’s blockchain — and Adyen probably won’t use Stripe’s blockchain. But any financial institution can build with GCUL.”

Widmann said Google Cloud will reveal additional technical details about GCUL within months.




Tools & Platforms

AI “Can’t Draw a Damn Floor Plan With Any Degree of Coherence” – Common Edge


Recently I began interviewing people for a piece I’m writing about “Artificial Intelligence and the Future of Architecture,” a ludicrously broad topic that will at some point require me to home in on a particular aspect of this rapidly changing phenomenon. Before undertaking that process, I spoke with some experts, starting with Phil Bernstein, an architect, educator, and longtime technologist. Bernstein is deputy dean and professor at the Yale School of Architecture, where he teaches courses in professional practice, project delivery, and technology. He previously served as a vice president at Autodesk, where he was responsible for setting the company’s AEC vision and strategy for technology. He writes extensively on issues of architectural practice and technology, and his books include Architecture | Design | Data — Practice Competency in the Era of Computation (Birkhauser, 2018) and Machine Learning: Architecture in the Era of Artificial Intelligence (2nd ed., RIBA, 2025). Our short talk covered a lot of ground: the integration of AI into schools, its obvious shortcomings, and where AI positions the profession.

PB: Phil Bernstein
MCP: Martin C. Pedersen

MCP:

You’re actively involved in the education of architects, all of them digital natives. How is AI being taught and integrated into the curriculum?

PB:

I was just watching a video with a chart that showed how long it took different technologies to get to 100 million users: the telephone, Facebook, and DeepSeek. It was 100 years for the phone, four years for Facebook, two months for DeepSeek. Things are moving quickly, almost too quickly, which means you don’t have a lot of time to plan and test pedagogy.

We are trying to do three things here. One, make sure that students understand the philosophical, legal, and disciplinary implications of using these kinds of technologies. I’ll be giving a talk to our incoming students as part of their orientation about the relationship between generative technology, architectural intellectual property, precedent, and academic integrity. And why you’re here: to learn not how to teach algorithms to do things, but how to do them yourself. That’s one dimension.

The second dimension is, we’re big believers in making as much technology as we can support and afford available to the students. So we’ve been working with the central campus to provide access to larger platforms, and to make things as available and understandable as we possibly can.

Thirdly, in the classroom, individual studio instructors are taking their own stance on how they want to see the tools used. We taught a studio last year where the students tried to delegate a lot of their design responsibility to algorithms, just to see how it went, right? 

PB:

Control. You lose a lot of design autonomy when you delegate to an algorithm. We’ve also been teaching a class called “Scales of Intelligence,” which tries to look at this problem from a theory, history, and technological evolution perspective, delving into the implications for practice and design. So it’s a mixed bag of stuff, very much a moving target, because the technology evolves literally during the course of a semester. 

MCP:

I am a luddite, and even I can see it improve in real time.

PB:

It’s getting more interesting, minute to minute, very shifting ground. I was on the Yale Provost’s AI Task Force, the faculty working group formed a year ago to figure out what we’re doing as a university. Everybody was in the same boat; it’s just that some of the boats were tiny paper boats floating in the bathtub, and some of them were battleships—like the medical school, with more than 50 AI pilots. We’re trying to keep up with that. I don’t know how good a job we’re doing now.

 

MCP:

What’s your sense in talking to people in the architecture world? How are they incorporating AI into their firms?

PB:

It’s difficult to generalize, because there are a lot of variables: your willingness to experiment, a firm’s internal capabilities, the availability of data, and degree of sophistication. I’ve been arguing that because this technology is expensive and requires a lot of data and investment to figure it out, the real innovation will happen in the big firms. 

Everybody’s creating marketing collateral, generating renderings, all that stuff. The diffusion models and large language models, the two things that are widely available—everybody is screwing around with that. The question is, where’s the innovation? And it’s a little early to tell.

The other thing you’ve got to remember is the basic principle of technology adoption in the architectural world, which is: When you figure out a technological advantage, you don’t broadcast it; you keep your advantage to yourself for as long as you can, until somebody else catches up. A recent example: It’s not like there were firms out there helping each other adopt building information modeling.

MCP:

I guess it’s impossible to project where all this goes in three or five years?

PB:

I don’t know. The reigning thesis—I’m simplifying this—is that you can build knowledge from which you can reason inferentially by memorizing all the data in the world and breaking it into a giant probability matrix. I don’t happen to think that thesis is correct. It’s the Connectionists vs. the Symbolic Logic people. I believe that you’re going to need both of these things. But all the money right now is down on the Connectionists, the Sam Altman theory of the world. Some of these things are very useful, but they’re not 100% reliable. And in our world, as architects, reliability is kind of important.

MCP:

Again, we can’t predict the pace of this, but it’s going to fundamentally change the role of the architect. How do you see that evolving as these tools get more powerful?

PB:

Why do you say that? There’s a conclusion in your statement. 

MCP:

I guess, because I’ve talked to a few people. They seem to be using AI now for everything but design. You can do research much faster using AI. 

PB:

That’s true, but you better check it.

MCP:

I agree, but isn’t there inevitably a point when the tools become sophisticated enough where they can design buildings?

PB:

So, therefore … what? 

MCP:

Where does that leave human architects?

PB:

I don’t know that it’s inevitable that machines could design entire buildings well …

MCP:

It would seem to me that we would be moving toward that.

PB:

The essence of my argument is: there are many places where AI is very useful. Where it begins to collapse is when it’s operating in a multivalent environment, trying to integrate multiple streams of both data and logic.

MCP:

Which would be virtually any architecture project.

PB:

Exactly. Certain streams may become more optimized. For instance: If I were a structural engineer right now, I’d be worried, because structural engineering has very clear, robust means of representation, clear rules of measurement. The bulk of the work can be routinized. So they’re massively exposed. But these diffusion models right now can’t draw a damn floor plan with any degree of coherence. A floor plan is an abstraction of a much more complicated phenomenon. It’s going to be a while before these systems are able to do the most important things that architects do, which is make judgments, exercise experience, make tradeoffs, and take responsibility for what they do.

 

Phil Bernstein. Photo via Grace Farms.
MCP:

Where do you fall on the AI-as-job-obliterator, AI-as-job-creator debate? 

PB:

For purposes of this discussion, let’s stipulate that artificial general intelligence that can do anything isn’t in the foreseeable future, because once that happens, the whole economic proposition of the world collapses. When that happens, we’re in a completely different world. And that won’t just be a problem for architects. So, if that’s not going to happen any time soon, then you have two sets of questions. Question one: In the near term, does AI provide productivity gains in a way that reduces the need for staff in an architect’s office?

MCP:

That may be the question I’m asking …

PB:

OK, in the near term, maybe we won’t need as many marketing people. You won’t need any rendering people, although you probably didn’t have those in the first place. But let me give you an example from an adjacent discipline that’s come up recently. It turns out that one thing that these AIs are supposed to be really good at is writing computer code. Because computer code is highly rational. You can test it and see if it works. There’s boatloads of it on the internet as training data in well organized locations, very consistently accessible—which is not true of architectural data, by the way. 

It turns out that many software engineering companies that had decided to replace their programmers with AIs are now hiring them back because the code-generating AIs are not reliable enough to write good code. And then you intersect that with the problem that was described in a presentation I saw a couple of months ago by our director of undergraduate studies in computer science, [Theodore Kim], who said that so many students are using AI to generate code that they don’t understand how to debug the code once it’s written. He got a call from the head of software engineering for EA, who said, “I can’t hire your graduates because they don’t know how to debug.” And if it’s true here, I guarantee you, it’s true everywhere across the country. So you have a skill loss.

Then there’s what I would call the issue of the Luddites. The [original] Luddites didn’t object to the weaving machines, per se; they objected to the fact that while they were waiting for a job in the loom factory, they didn’t have any work. There was a gap between when humans get replaced by technology and when there are new jobs for them doing other things: you lost your job plowing that cornfield with a horse because there’s a tractor now, but you didn’t get a job in the tractor factory; somebody else did. These are all issues that have to be thought about.

MCP:

It seems like a lot of architects are dismissive because of what AI can’t do now, but that seems silly to me, because I’m seeing AI enabling things like transcriptions now. 

PB:

But transcriptions are so easy. I do not disagree that, over time, these algorithms will get more capable doing some of the things that architects do. But if we get to the point where they’re good enough to literally replace architects, we’re going to be facing a much larger social problem. 

There’s also a market problem here that you need to be aware of. These things are fantastically expensive to build, and architects are not good technology customers. We’re cheap and steal a lot of software—not good customers for multibillion-dollar investments. Maybe, over time, someone builds something that’s sophisticated enough, multimodal enough, that can operate with language, video, three-dimensional reasoning, analytical models, cost estimates, all those things that architects need. But I’m not concerned that that’s going to happen in the foreseeable future. It’s too hard a problem, unless somebody comes up with a way to train these things on much skinnier data sets. 

That’s the other problem: all of our data is disaggregated, spread all over the place. Nobody wants to share it, because it involves risk. When the med school has 33,000 patients enrolled in a trial, they’re getting lots of highly curated, accurate data that they can use to train their AIs. Where’s our accurate data? I can take every Revit model that Skidmore, Owings & Merrill has ever produced in the history of their firm, and it’s not nearly enough data to train an AI. Not nearly enough.

MCP:

And what do you think AI does to the traditional business model of architecture, which has been under some pressure even before this?

PB:

That’s always been under pressure. It depends on what we as a profession decide. I’ve written extensively about this. We have two options. The first option is a race to the bottom: Who can use AI to cut their fees as much as possible? Option number two, value: How do we use AI to do a better job and charge more money? That’s not a technology question, it’s a business strategy question. So if I’ve built an AI that is so good that I can promise a client that x is going to happen or y is going to happen, I should charge for that: “I’m absolutely positive that this building is going to produce 23% less carbon than it would have had I not designed it. Here’s a third party that can validate this. Write me a check.” 

Featured image courtesy of Easy-Peasy.AI. 


