AI “Can’t Draw a Damn Floor Plan With Any Degree of Coherence” – Common Edge


Recently I began interviewing people for a piece I’m writing about “Artificial Intelligence and the Future of Architecture,” a ludicrously broad topic that will at some point require me to home in on a particular aspect of this rapidly changing phenomenon. Before undertaking that process, I spoke with some experts, starting with Phil Bernstein, an architect, educator, and longtime technologist. Bernstein is deputy dean and professor at the Yale School of Architecture, where he teaches courses in professional practice, project delivery, and technology. He previously served as a vice president at Autodesk, where he was responsible for setting the company’s AEC technology vision and strategy. He writes extensively on issues of architectural practice and technology, and his books include Architecture | Design | Data — Practice Competency in the Era of Computation (Birkhäuser, 2018) and Machine Learning: Architecture in the Age of Artificial Intelligence (2nd ed., RIBA, 2025). Our short talk covered a lot of ground: the integration of AI into schools, its obvious shortcomings, and where AI positions the profession.

PB: Phil Bernstein
MCP: Martin C. Pedersen

MCP:

You’re actively involved in the education of architects, all of them digital natives. How is AI being taught and integrated into the curriculum?

PB:

I was just watching a video with a chart that showed how long it took different technologies to get to 100 million users: the telephone, Facebook, and DeepSeek. It was 100 years for the phone, four years for Facebook, two months for DeepSeek. Things are moving quickly, almost too quickly, which means you don’t have a lot of time to plan and test pedagogy.

We are trying to do three things here. One, make sure that students understand the philosophical, legal, and disciplinary implications of using these kinds of technologies. I’ll be giving a talk to our incoming students as part of their orientation about the relationship between generative technology, architectural intellectual property, precedent, and academic integrity, and about why you’re here: to learn not how to teach algorithms to do things, but how to do them yourself. That’s one dimension.

The second dimension is, we’re big believers in making as much technology as we can support and afford available to the students. So we’ve been working with the central campus to provide access to larger platforms, and to make things as available and understandable as we possibly can.

Thirdly, in the classroom, individual studio instructors are taking their own stance on how they want to see the tools used. We taught a studio last year where the students tried to delegate a lot of their design responsibility to algorithms, just to see how it went, right? 

MCP:

[And what did they learn?]

PB:

Control. You lose a lot of design autonomy when you delegate to an algorithm. We’ve also been teaching a class called “Scales of Intelligence,” which tries to look at this problem from a theory, history, and technological evolution perspective, delving into the implications for practice and design. So it’s a mixed bag of stuff, very much a moving target, because the technology evolves literally during the course of a semester. 

MCP:

I am a Luddite, and even I can see it improving in real time.

PB:

It’s getting more interesting minute to minute; it’s very shifting ground. I was on the Yale Provost’s AI Task Force, the faculty working group formed a year ago to figure out what we’re doing as a university. Everybody was in the same boat; it’s just that some of the boats were tiny paper boats floating in the bathtub, and some were battleships, like the medical school, with more than 50 AI pilots. We’re trying to keep up with that. I don’t know how good a job we’re doing now.

 

MCP:

What’s your sense in talking to people in the architecture world? How are they incorporating AI into their firms?

PB:

It’s difficult to generalize, because there are a lot of variables: a firm’s willingness to experiment, its internal capabilities, the availability of data, its degree of sophistication. I’ve been arguing that because this technology is expensive and requires a lot of data and investment to figure out, the real innovation will happen in the big firms.

Everybody’s creating marketing collateral, generating renderings, all that stuff. The diffusion models and large language models, the two things that are widely available—everybody is screwing around with that. The question is, where’s the innovation? And it’s a little early to tell.

The other thing you’ve got to remember is the basic principle of technology adoption in the architectural world, which is: When you figure out a technological advantage, you don’t broadcast it; you keep your advantage to yourself for as long as you can, until somebody else catches up. A recent example: It’s not like there were firms out there helping each other adopt building information modeling.

MCP:

I guess it’s impossible to project where all this goes in three or five years?

PB:

I don’t know. The reigning thesis—I’m simplifying this—is that you can build knowledge from which you can reason inferentially by memorizing all the data in the world and breaking it into a giant probability matrix. I don’t happen to think that thesis is correct. It’s the Connectionists vs. the Symbolic Logic people. I believe that you’re going to need both of these things. But all the money right now is down on the Connectionists, the Sam Altman theory of the world. Some of these things are very useful, but they’re not 100% reliable. And in our world, as architects, reliability is kind of important.

MCP:

Again, we can’t predict the pace of this, but it’s going to fundamentally change the role of the architect. How do you see that evolving as these tools get more powerful?

PB:

Why do you say that? There’s a conclusion in your statement. 

MCP:

I guess, because I’ve talked to a few people. They seem to be using AI now for everything but design. You can do research much faster using AI. 

PB:

That’s true, but you better check it.

MCP:

I agree, but isn’t there inevitably a point when the tools become sophisticated enough where they can design buildings?

PB:

So, therefore … what? 

MCP:

Where does that leave human architects?

PB:

I don’t know that it’s inevitable that machines could design entire buildings well …

MCP:

It would seem to me that we would be moving toward that.

PB:

The essence of my argument is: there are many places where AI is very useful. Where it begins to collapse is when it’s operating in a multivalent environment, trying to integrate multiple streams of both data and logic.

MCP:

Which would be virtually any architecture project.

PB:

Exactly. Certain streams may become more optimized. For instance: If I were a structural engineer right now, I’d be worried, because structural engineering has very clear, robust means of representation and clear rules of measurement. The bulk of the work can be routinized. So they’re massively exposed. But these diffusion models right now can’t draw a damn floor plan with any degree of coherence. A floor plan is an abstraction of a much more complicated phenomenon. It’s going to be a while before these systems are able to do the most important things that architects do, which are to make judgments, exercise experience, make tradeoffs, and take responsibility for what they do.

 

Phil Bernstein. Photo via Grace Farms.
MCP:

Where do you fall on the AI-as-job-obliterator, AI-as-job-creator debate? 

PB:

For purposes of this discussion, let’s stipulate that artificial general intelligence that can do anything isn’t in the foreseeable future, because once that happens, the whole economic proposition of the world collapses and we’re in a completely different world. And that won’t just be a problem for architects. So, if that’s not going to happen any time soon, then you have two sets of questions. Question one: In the near term, does AI provide productivity gains in a way that reduces the need for staff in an architect’s office?

MCP:

That may be the question I’m asking …

PB:

OK, in the near term, maybe we won’t need as many marketing people. You won’t need any rendering people, although you probably didn’t have those in the first place. But let me give you an example from an adjacent discipline that’s come up recently. It turns out that one thing these AIs are supposed to be really good at is writing computer code, because computer code is highly rational. You can test it and see if it works. There are boatloads of it on the internet as training data, in well-organized locations, very consistently accessible—which is not true of architectural data, by the way.

It turns out that many software engineering companies that had decided to replace their programmers with AIs are now hiring them back, because the code-generating AIs are not reliable enough to write good code. And then you intersect that with a problem described in a presentation I saw a couple of months ago by our director of undergraduate studies in computer science, [Theodore Kim], who said that so many students are using AI to generate code that they don’t understand how to debug the code once it’s written. He got a call from the head of software engineering for EA, who said, “I can’t hire your graduates because they don’t know how to debug.” And if it’s true here, I guarantee you, it’s true everywhere across the country. So you have a skill loss.

Then there’s what I would call the issue of the Luddites. The [original] Luddites didn’t object to the weaving machines, per se; they objected to the fact that while they were waiting for a job in the loom factory, they didn’t have any work. There’s a gap between when humans get replaced by technology and when there are new jobs for them doing other things: you lost your job plowing that cornfield with a horse because there’s a tractor now, but you didn’t get a job in the tractor factory; somebody else did. These are all issues that have to be thought about.

MCP:

It seems like a lot of architects are dismissive because of what AI can’t do now, but that seems silly to me, because I’m already seeing AI enable things like transcription.

PB:

But transcriptions are so easy. I do not disagree that, over time, these algorithms will get more capable of doing some of the things that architects do. But if we get to the point where they’re good enough to literally replace architects, we’re going to be facing a much larger social problem.

There’s also a market problem here that you need to be aware of. These things are fantastically expensive to build, and architects are not good technology customers. We’re cheap and steal a lot of software—not good customers for multibillion-dollar investments. Maybe, over time, someone builds something that’s sophisticated enough, multimodal enough, that can operate with language, video, three-dimensional reasoning, analytical models, cost estimates, all those things that architects need. But I’m not concerned that that’s going to happen in the foreseeable future. It’s too hard a problem, unless somebody comes up with a way to train these things on much skinnier data sets. 

That’s the other problem: all of our data is disaggregated, spread all over the place. Nobody wants to share it, because it involves risk. When the med school has 33,000 patients enrolled in a trial, they’re getting lots of highly curated, accurate data that they can use to train their AIs. Where’s our accurate data? I can take every Revit model that Skidmore, Owings & Merrill has ever produced in the history of their firm, and it’s not nearly enough data to train an AI. Not nearly enough.

MCP:

And what do you think AI does to the traditional business model of architecture, which was under pressure even before this?

PB:

That’s always been under pressure. It depends on what we as a profession decide. I’ve written extensively about this. We have two options. The first option is a race to the bottom: Who can use AI to cut their fees as much as possible? Option number two, value: How do we use AI to do a better job and charge more money? That’s not a technology question, it’s a business strategy question. So if I’ve built an AI that is so good that I can promise a client that x is going to happen or y is going to happen, I should charge for that: “I’m absolutely positive that this building is going to produce 23% less carbon than it would have had I not designed it. Here’s a third party that can validate this. Write me a check.” 

Featured image courtesy of Easy-Peasy.AI. 





Jared Kushner launches AI startup with top Israeli tech entrepreneur


Brain Co., which had operated quietly since 2024, has come to light after raising $30 million in a Series A round led by Kushner’s Affinity Partners and Elad Gil’s Gil Capital, with backing from prominent investors including Coinbase CEO Brian Armstrong, Stripe founder Patrick Collison and LinkedIn co-founder Reid Hoffman. The company aims to bridge the gap between large language models like GPT-5 and their practical application in organizations.

Ivanka Trump and Jared Kushner (Photo: Paul Sancya, AP)

The venture began in February 2024 when Kushner, Gil, and former Mexican Foreign Minister Luis Videgaray met to address challenges large organizations face in integrating AI tools. Kushner, seeking to expand Affinity’s AI investments, connected with Gil, a former Google and Twitter executive turned venture capitalist, through his brother, Josh Kushner.

Videgaray, who met Kushner during Trump’s 2016 campaign, also joined. Brain Co. has secured deals with major clients like Sotheby’s, owned by Israeli-French businessman Patrick Drahi, and Warburg Pincus, alongside government agencies, energy firms, healthcare systems and hospitality chains.

With 40 employees, Brain Co. collaborates with OpenAI to develop tailored applications. A recent MIT study cited by Forbes found that 95% of generative AI pilot programs failed in surveyed organizations, highlighting the gap Brain Co. targets.

CEO Clemens Mewald, a former AI expert at Google and Databricks, explained, “So far, we haven’t seen a reason to only double down on one sector. Actually, it turns out that at the technology level and the AI capability level, a lot of the use cases look very similar.”

He noted similarities between processing building permits and insurance claims, both requiring document analysis and rule-based recommendations, areas where Brain Co. is active.

Kushner, who founded Affinity Partners after leaving the White House, said in a press release: “We’re living through a once-in-a-generation platform shift. After speaking with Elad, we realized we could build a bridge between Silicon Valley’s best AI talent and the world’s most important institutions to drive global impact.”

Affinity manages over $4.8 billion, primarily from Saudi, Qatari and UAE funds. In September 2024, Brain Co. acquired Serene AI, bringing in experienced founders. Gil said that while Kushner will serve as an active board member, he will operate primarily through Affinity.







Google AI Chief Stresses Continuous Learning for Fast-Changing AI Era


At an open-air summit in Athens, Demis Hassabis, the head of Google DeepMind and a Nobel laureate in chemistry, argued that the skill most needed in the years ahead will be the ability to keep learning. He described education as moving into a period where adaptability matters more than fixed knowledge, because the speed of artificial intelligence research is shortening the lifespan of expertise.

Hassabis said future workers will have to treat learning as a constant process, not a stage that ends with graduation. He pointed to rapid advances in computing and biology as examples of how quickly fields now change once AI tools enter the picture.

Outlook on technology

The DeepMind chief warned that artificial general intelligence may not be far away. In his view, it could emerge within a decade, bringing opportunity and risk in equal measure. He described its potential impact as larger and faster than that of the industrial revolution, a shift that could deliver breakthroughs in medicine, clean energy, and space exploration.

Even so, he stressed that powerful models must be tested carefully before being widely deployed. The practice of pushing products out quickly, common in earlier technology waves, should not guide the release of systems capable of influencing economies and societies on a global scale.

Prime minister’s caution

Greek Prime Minister Kyriakos Mitsotakis, who shared the stage at the Odeon of Herodes Atticus, said governments will struggle to keep pace with corporate growth unless they adopt a more active role. He warned that when the benefits of technology are concentrated among a small set of companies, public confidence erodes. He tied the issue to social stability, saying communities won’t support AI unless they see its value in everyday life.

Mitsotakis pointed to Greece’s efforts to build an “AI factory” around a new supercomputer in Lavrio. He presented the project as part of a wider European push to turn regulation and research into competitive advantages, while reducing reliance on U.S. and Chinese platforms.

Education and jobs

Both speakers returned repeatedly to the theme of skills. Hassabis said that in addition to traditional training in science and mathematics, students should learn how to monitor their own progress and adjust their methods. He argued that the most valuable opportunities often appear where two fields overlap, and that AI can serve as a tutor to help learners explore those connections.

Mitsotakis said the challenge for governments is to match school systems with shifting labor markets. He noted that Greece is mainly a service economy, which may delay some of the disruption already visible in manufacturing-heavy nations. But he cautioned that job losses are unavoidable, including in sectors long thought resistant to automation.

Strains on democracy

The prime minister voiced concern that misinformation powered by AI could undermine elections. He mentioned deepfakes as a direct threat to public trust and said Europe may need stricter rules on content distribution. He also highlighted risks to mental health among teenagers exposed to endless scrolling and algorithm-driven feeds.

Hassabis agreed that lessons from social media should inform current choices. He suggested AI might help by filtering information in ways that broaden debate instead of narrowing it. He described a future where personal assistants act in the interest of individual users, steering them toward content that supports healthier dialogue.

The question of abundance

Discussion also touched on the idea that AI could usher in an era of radical abundance. Hassabis said research in protein science, energy, and material design already shows how quickly knowledge is expanding. He argued that the technology could open access to vast resources, but he added that how wealth is shared will depend on governments and economic policy, not algorithms.

Mitsotakis drew parallels with earlier industrial shifts, warning that if productivity gains are captured only by large firms, pension systems and social programs will face heavy strain. He said policymakers must prepare for a period of disruption that could arrive faster than many expect.

Greece’s role

The Athens event also highlighted the country’s ambition to build a regional hub for technology. Mitsotakis praised the growth of local startups and said incentives, venture capital, and government adoption of AI in public services would be central to maintaining momentum.

Hassabis, whose family has roots in Cyprus, said Europe needs to remain at the frontier of AI research if it wants influence in setting ethical and technical standards. He called Greece’s combination of history and new infrastructure a symbolic setting for conversations on the future of technology.

Preparing for the next era

The dialogue closed on a shared message: societies will need citizens who can adapt and learn throughout their lives. For Hassabis, this adaptability is the foundation for navigating a future shaped by artificial intelligence. For Mitsotakis, the task is making sure those changes strengthen democratic values rather than weaken them.

Notes: This post was edited/created using GenAI tools.








Why does ChatGPT agree with everything you say? The dangers of sycophantic AI


“Do you like me? I feel really sad,” a 30-year-old Sydney woman asked ChatGPT recently.

Then, “Why isn’t my life like the movies?”



