Tools & Platforms
AI “Can’t Draw a Damn Floor Plan With Any Degree of Coherence” – Common Edge

Recently I began interviewing people for a piece I’m writing about “Artificial Intelligence and the Future of Architecture,” a ludicrously broad topic that will at some point require me to home in on a particular aspect of this rapidly changing phenomenon. Before undertaking that process, I spoke with some experts, starting with Phil Bernstein, an architect, educator, and longtime technologist. Bernstein is deputy dean and professor at the Yale School of Architecture, where he teaches courses in professional practice, project delivery, and technology. He previously served as a vice president at Autodesk, where he was responsible for setting the company’s AEC vision and strategy for technology. He writes extensively on issues of architectural practice and technology, and his books include Architecture | Design | Data — Practice Competency in the Era of Computation (Birkhäuser, 2018) and Machine Learning: Architecture in the Era of Artificial Intelligence (2nd ed., RIBA, 2025). Our short talk covered a lot of ground: the integration of AI into schools, its obvious shortcomings, and where AI positions the profession.
PB: Phil Bernstein
MCP: Martin C. Pedersen
You’re actively involved in the education of architects, all of them digital natives. How is AI being taught and integrated into the curriculum?
I was just watching a video with a chart that showed how long it took different technologies to get to 100 million users: the telephone, Facebook, and DeepSeek. It took 100 years for the phone, four years for Facebook, two months for DeepSeek. Things are moving quickly, almost too quickly, which means you don’t have a lot of time to plan and test pedagogy.
We are trying to do three things here. One, make sure that students understand the philosophical, legal, and disciplinary implications of using these kinds of technologies. I’ll be giving a talk to our incoming students as part of their orientation about the relationship between generative technology, architectural intellectual property, precedent, and academic integrity. And why you’re here: to learn not how to teach algorithms to do things, but how to do them yourself. That’s one dimension.
The second dimension is, we’re big believers in making as much technology as we can support and afford available to the students. So we’ve been working with the central campus to provide access to larger platforms, and to make things as available and understandable as we possibly can.
Thirdly, in the classroom, individual studio instructors are taking their own stance on how they want to see the tools used. We taught a studio last year where the students tried to delegate a lot of their design responsibility to algorithms, just to see how it went, right?
Control. You lose a lot of design autonomy when you delegate to an algorithm. We’ve also been teaching a class called “Scales of Intelligence,” which tries to look at this problem from a theory, history, and technological evolution perspective, delving into the implications for practice and design. So it’s a mixed bag of stuff, very much a moving target, because the technology evolves literally during the course of a semester.
I am a luddite, and even I can see it improve in real time.
It’s getting more interesting, minute to minute, very shifting ground. I was on the Yale Provost’s AI Task Force, which was the faculty working group formed a year ago that tried to figure out what we’re doing as a university. Everybody was in the same boat, it’s just some of the boats were tiny, paper boats floating in the bathtub, and some of them were battleships—like the medical school, with more than 50 AI pilots. We’re trying to keep up with that. I don’t know how good a job we’re doing now.
What’s your sense in talking to people in the architecture world? How are they incorporating AI into their firms?
It’s difficult to generalize, because there are a lot of variables: your willingness to experiment, a firm’s internal capabilities, the availability of data, and degree of sophistication. I’ve been arguing that because this technology is expensive and requires a lot of data and investment to figure it out, the real innovation will happen in the big firms.
Everybody’s creating marketing collateral, generating renderings, all that stuff. The diffusion models and large language models, the two things that are widely available—everybody is screwing around with that. The question is, where’s the innovation? And it’s a little early to tell.
The other thing you’ve got to remember is the basic principle of technology adoption in the architectural world, which is: When you figure out a technological advantage, you don’t broadcast it; you keep your advantage to yourself for as long as you can, until somebody else catches up. A recent example: It’s not like there were firms out there helping each other adopt building information modeling.
I guess it’s impossible to project where all this goes in three or five years?
I don’t know. The reigning thesis—I’m simplifying this—is that you can build knowledge from which you can reason inferentially by memorizing all the data in the world and breaking it into a giant probability matrix. I don’t happen to think that thesis is correct. It’s the Connectionists vs. the Symbolic Logic people. I believe that you’re going to need both of these things. But all the money right now is down on the Connectionists, the Sam Altman theory of the world. Some of these things are very useful, but they’re not 100% reliable. And in our world, as architects, reliability is kind of important.
Again, we can’t predict the pace of this, but it’s going to fundamentally change the role of the architect. How do you see that evolving as these tools get more powerful?
Why do you say that? There’s a conclusion in your statement.
I guess, because I’ve talked to a few people. They seem to be using AI now for everything but design. You can do research much faster using AI.
That’s true, but you better check it.
I agree, but isn’t there inevitably a point when the tools become sophisticated enough where they can design buildings?
So, therefore … what?
Where does that leave human architects?
I don’t know that it’s inevitable that machines could design entire buildings well …
It would seem to me that we would be moving toward that.
The essence of my argument is: there are many places where AI is very useful. Where it begins to collapse is when it’s operating in a multivalent environment, trying to integrate multiple streams of both data and logic.
Which would be virtually any architecture project.
Exactly. Certain streams may become more optimized. For instance: If I were a structural engineer right now, I’d be worried, because structural engineering has very clear, robust means of representation, clear rules of measurement. The bulk of the work can be routinized. So they’re massively exposed. But these diffusion models right now can’t draw a damn floor plan with any degree of coherence. A floor plan is an abstraction of a much more complicated phenomenon. It’s going to be a while before these systems are able to do the most important things that architects do, which is make judgments, exercise experience, make tradeoffs, and take responsibility for what they do.

Where do you fall on the AI-as-job-obliterator, AI-as-job-creator debate?
For purposes of this discussion, let’s stipulate that artificial general intelligence that can do anything isn’t in the foreseeable future, because once that happens, the whole economic proposition of the world collapses and we’re in a completely different world. And that won’t just be a problem for architects. So, if that’s not going to happen any time soon, then you have two sets of questions. Question one: In the near term, does AI provide productivity gains in a way that reduces the need for staff in an architect’s office?
That may be the question I’m asking …
OK, in the near term, maybe we won’t need as many marketing people. You won’t need any rendering people, although you probably didn’t have those in the first place. But let me give you an example from an adjacent discipline that’s come up recently. It turns out that one thing that these AIs are supposed to be really good at is writing computer code. Because computer code is highly rational. You can test it and see if it works. There are boatloads of it on the internet as training data, in well-organized locations, very consistently accessible—which is not true of architectural data, by the way.
It turns out that many software engineering companies that had decided to replace their programmers with AIs are now hiring them back because the code-generating AIs are not reliable enough to write good code. And then you intersect that with the problem that was described in a presentation I saw a couple of months ago by our director of undergraduate studies in computer science, [Theodore Kim], who said that so many students are using AI to generate code that they don’t understand how to debug the code once it’s written. He got a call from the head of software engineering for EA, who said, “I can’t hire your graduates because they don’t know how to debug.” And if it’s true here, I guarantee you, it’s true everywhere across the country. So you have a skill loss.
Then there’s what I would call the issue of the luddites. The [original] Luddites didn’t object to the weaving machines, per se, but they objected to the fact that while they were waiting for a job in the loom factory, they didn’t have any work. Because there was this gap between when humans get replaced by technology and when there are new jobs for them doing other things: you lost your job plowing that cornfield with a horse because there’s a tractor now, but you didn’t get a job in the tractor factory, somebody else did. These are all issues that have to be thought about.
It seems like a lot of architects are dismissive because of what AI can’t do now, but that seems silly to me, because I’m seeing AI enabling things like transcriptions now.
But transcriptions are so easy. I do not disagree that, over time, these algorithms will get more capable doing some of the things that architects do. But if we get to the point where they’re good enough to literally replace architects, we’re going to be facing a much larger social problem.
There’s also a market problem here that you need to be aware of. These things are fantastically expensive to build, and architects are not good technology customers. We’re cheap and steal a lot of software—not good customers for multibillion-dollar investments. Maybe, over time, someone builds something that’s sophisticated enough, multimodal enough, that can operate with language, video, three-dimensional reasoning, analytical models, cost estimates, all those things that architects need. But I’m not concerned that that’s going to happen in the foreseeable future. It’s too hard a problem, unless somebody comes up with a way to train these things on much skinnier data sets.
That’s the other problem: all of our data is disaggregated, spread all over the place. Nobody wants to share it, because it involves risk. When the med school has 33,000 patients enrolled in a trial, they’re getting lots of highly curated, accurate data that they can use to train their AIs. Where’s our accurate data? I can take every Revit model that Skidmore, Owings & Merrill has ever produced in the history of their firm, and it’s not nearly enough data to train an AI. Not nearly enough.
And what do you think AI does to the traditional business model of architecture, which has been under some pressure even before this?
That’s always been under pressure. It depends on what we as a profession decide. I’ve written extensively about this. We have two options. The first option is a race to the bottom: Who can use AI to cut their fees as much as possible? Option number two, value: How do we use AI to do a better job and charge more money? That’s not a technology question, it’s a business strategy question. So if I’ve built an AI that is so good that I can promise a client that x is going to happen or y is going to happen, I should charge for that: “I’m absolutely positive that this building is going to produce 23% less carbon than it would have had I not designed it. Here’s a third party that can validate this. Write me a check.”
Featured image courtesy of Easy-Peasy.AI.
Rise of High-Paid Cleanup Engineers

In the rapidly evolving world of software development, a new breed of specialists is emerging to tackle the fallout from “vibe coding,” a trend where artificial intelligence tools generate code based on loose, natural-language prompts rather than rigorous engineering principles. This approach, popularized by figures like Andrej Karpathy, allows non-experts to churn out applications quickly, but it often results in tangled, inefficient codebases riddled with bugs and security flaws. As companies rush to adopt AI-driven coding to cut costs and speed up production, a shadow industry of cleanup experts has sprung up, commanding premium rates to salvage these digital disasters.
These “vibe code cleanup specialists,” as they’ve been dubbed on platforms like LinkedIn, are typically seasoned software engineers with deep expertise in debugging and refactoring. They step in after amateur or AI-assisted coders produce prototypes that work just well enough to impress stakeholders but fail under real-world scrutiny. According to a recent article in 404 Media, freelance developers and specialized firms are now making a lucrative business out of this, with some charging upwards of $200 per hour to untangle the messes left by tools like GitHub Copilot or Cursor.
The Rise of Vibe Coding and Its Hidden Costs
The term “vibe coding” gained traction earlier this year, building on Karpathy’s 2023 quip that English is the hottest new programming language, as detailed in a Wikipedia entry updated in August. It promises democratization: startups can prototype apps in days instead of months, empowering designers and entrepreneurs without formal coding training. However, critics argue it sacrifices maintainability for speed, leading to code that’s opaque even to its creators.
Posts on X, formerly Twitter, from users like software engineers and tech commentators highlight the frustration, with many sharing stories of prompts yielding conflicting results across AI models, turning simple tasks into endless tweaking sessions. This sentiment echoes in a Wired piece from June, which warned that engineering jobs, once stable, are now threatened by AI’s ability to “vibe” through code generation, though not without creating downstream chaos.
Case Studies from the Front Lines
Take the example of a mid-sized fintech startup that used vibe coding to build a payment processing app. The initial version, generated via natural-language descriptions to an AI, handled basic transactions but crumbled under high traffic, exposing vulnerabilities that could have led to data breaches. Enter the cleanup crew: engineers from firms specializing in AI code audits, who spent weeks dissecting the spaghetti-like structure, implementing proper error handling, and ensuring compliance with security standards.
Similar tales abound in industry forums. A Reddit thread on r/technology, discussing the 404 Media article, amassed hundreds of comments from developers venting about inheriting “vibe-coded messes” that lack documentation or logical flow. As one anonymous poster noted, these projects often require starting from scratch, inflating costs far beyond the initial savings promised by AI tools.
Economic Implications for the Tech Sector
The economic ripple effects are significant. An Ars Technica report from March explored how accepting AI-written code without full understanding is becoming commonplace, yet it burdens companies with technical debt. Cleanup specialists are filling this gap, with some reporting a 300% increase in demand over the past six months, per insights from X posts by tech recruiters.
This trend underscores a broader shift: while vibe coding accelerates innovation, it creates a two-tier system where elite engineers command higher premiums for remediation. A Verge analysis from September suggests that AI isn’t ending software engineering but evolving it, with humans essential for high-level comprehension and fixes.
Challenges and Solutions in Practice
Fixing vibe-coded software isn’t just about rewriting lines; it involves forensic analysis to trace bugs back to flawed prompts or model hallucinations. Engineers often employ tools like static analyzers and version control forensics to map out the chaos, as shared in a Medium post by a developer who likened the role to digital archaeology.
Solutions are emerging, too. Some companies are integrating “vibe coding hygiene” training, teaching teams to refine prompts and review AI outputs iteratively. X discussions from August reveal engineers experimenting with prompts that emphasize root-cause analysis, such as instructing AI to “trace the full user flow and identify origins,” which helps prevent messes from escalating.
The Future of AI-Assisted Development
Looking ahead, the cleanup boom may force a reckoning. Industry insiders, including those cited in an Index.dev blog from March, predict that as vibe coding matures, better AI models could reduce errors, but human oversight will remain crucial. Gary Marcus, in an X post from June, argued that prototypes still need professional rebuilding, a view supported by a ServiceNow community blog from early September noting that 95% of generative AI projects fail to reach production without engineering intervention.
Yet, optimism persists. An IT Munch overview from three weeks ago highlights vibe coding’s benefits for startups, like slashing development cycles to hours. The key, experts say, is hybrid approaches: use AI for speed, but pair it with engineers for polish.
Navigating the Cleanup Economy
For aspiring cleanup specialists, the field offers fertile ground. Freelance platforms are buzzing with gigs, and companies like those profiled in the 404 Media piece are scaling up. Rates reflect the expertise required—think $150,000-plus salaries for full-time roles, as per LinkedIn trends echoed on X.
Ultimately, this phenomenon reveals the double-edged sword of AI in tech: it empowers rapid creation but demands skilled humans to sustain it. As one engineer quipped in a recent X thread, “Vibe coding is the party; we’re the ones cleaning up the confetti—and getting paid handsomely for it.” As of mid-September 2025, the vibe coding cleanup wave shows no signs of slowing, positioning these specialists as the unsung guardians of reliable software in an AI-dominated era.
Professor LEON & MYQuant AI

— A New Era of AI-Driven Finance
In today’s global AI race, models such as GPT and DeepSeek are constantly pushing the boundaries of technology. Yet in the complex game of financial markets, the true challenge is not merely about computing power and data, but about understanding human nature, market behavior, and the logic of capital.
Professor Leon Lee Cheng Wei (LEON) — Ph.D. in Economics from a prestigious university, former Senior Strategy Advisor at a leading financial institution in Asia, and Co-Founder of the Asia-Pacific FinTech Think Tank — has spent over two decades navigating both academia and practice. Now, he has chosen a different path:
to create an AI that not only “converses” but can also trade, forecast, and accompany investors.
Journey: From Academia to Live Trading
· LEON delved into behavioral finance and asset pricing models, realizing that “markets are not inefficient, but human nature is the ultimate variable.”
· On Wall Street and the Singapore Exchange, he witnessed the rise of algorithmic trading, but also how investors struggled without rational tools.
· Returning to Malaysia, he made a resolution:
“I want to build an AI that truly understands both markets and investors.”
Thus, MYQuant AI was born.
Beyond GPT and DeepSeek
While GPT and DeepSeek represent general-purpose AI, the mission of MYQuant AI is to become a specialized AI in finance.
· It not only interprets macroeconomic and market data, but also captures the behavioral biases unique to Asia-Pacific markets.
· It not only generates text, but can also simulate decision-making and optimize strategies in real-time trading.
· It is not just a tool, but an investor’s companion and coach.
Professor LEON firmly believes:
“The future of financial AI is not about answering questions, but about growing alongside investors.”
The Fusion of Finance and Artificial Intelligence
The roadmap of MYQuant AI is clear:
· Behavioral Finance + Machine Learning: Reconstructing the “noise of human nature” behind market moves.
· Quantitative Models + AI Decision-Making: Transforming complex trading logic into executable strategies.
· Local Markets + Global Vision: Bridging Malaysia and Southeast Asian investment culture with global capital flows.
This is not merely a model — it is a philosophy:
bringing AI into the market as the most reliable partner for investors in uncertain times.
Conclusion
Today, MYQuant AI is no longer just a research project, but Professor LEON’s “second language.”
Beyond GPT and DeepSeek, it represents another possibility — the deep integration of artificial intelligence and financial markets.
Technology is not just cold computation; it can also be warmth, responsibility, and a guide to wealth creation.
MYQuant’s Vision:
Make investing more rational, markets more transparent, and AI truly serve people.
Contact Info:
Name: Mary
Organization: xmyquant
Website: http://www.xmyquant.com
Disclaimer:
This press release is for informational purposes only. Information verification has been done to the best of our ability. Still, due to the speculative nature of the blockchain (cryptocurrency, NFT, mining, etc.) sector as a whole, complete accuracy cannot always be guaranteed.
You are advised to conduct your own research and exercise caution. Investments in these fields are inherently risky and should be approached with due diligence.
Release ID: 89169671
Datadog Inc. (DDOG)’s AI Initiatives Accelerating Growth

Datadog, Inc. (NASDAQ:DDOG) is one of the best tech stocks to buy for the long term. At Citi’s 2025 Global TMT Conference on September 3, CFO David Obstler reiterated that the company is experiencing robust growth driven by AI-native companies.
The robust growth stems from the company’s increasing focus on strategic initiatives in artificial intelligence and cybersecurity. AI initiatives have contributed 10% of the company’s underlying growth, which has come from eight of the ten largest AI tool companies that leverage Datadog’s solutions.
In addition, the executive reiterated that Datadog is pursuing growth opportunities in international markets, with a focus on India and Brazil. As part of the expansion drive, Datadog is also integrating new technologies to maintain its competitive edge. Part of the strategy entails enhancing Cloud SIEM, service management, and product analytics.
Datadog, Inc. (NASDAQ:DDOG) is a technology company that provides a cloud-based platform for observability and security. It also offers tools for infrastructure monitoring, application performance monitoring (APM), log management, real-user monitoring, and security.
While we acknowledge the potential of DDOG as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you’re looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.
Disclosure: None. This article is originally published at Insider Monkey.