AI Research
How AI is Reconstructing Construction: From Pre‑Design to Predictive Maintenance

The construction industry is in the midst of a profound transformation. Traditionally slow to digitize, it is now embracing artificial intelligence (AI) across every stage, from early design to long-term asset maintenance. This shift is redefining how projects are planned, built, and sustained. The rise of AI in construction spans multiple dimensions: optimizing design, enhancing site safety, improving project predictability, and enabling smarter, more sustainable buildings.
For decades, construction was defined by manual tasks, paper drawings, and fragmented data silos. Complex logistics, distributed stakeholders, and a labor-intensive culture contributed to minimal technology adoption. The industry relied heavily on human intuition and experience to manage risks, often leading to cost overruns and delays.
So, what’s changed?
Rapid advances in AI, machine learning, computer vision, cloud computing, and mobile connectivity have catalyzed a wave of digital transformation. Many AI tools are now built specifically for construction workflows, enabling automation and data-driven decision-making that were once impractical. The global AI in construction market is projected to grow at a compound annual growth rate of over 20% through 2030, reaching more than $22 billion. Roughly 35% of construction firms have adopted at least one form of AI, and 70% of large projects now include some AI-driven component. These tools are delivering real results, from cost reductions of up to 20% in project planning to measurable improvements in safety and efficiency.
On job sites, AI is already reshaping how work gets done. Safety monitoring and risk prediction systems powered by computer vision can detect hazards such as missing PPE, unsafe proximity to machinery, and potential fall risks. Firms are also using AI to aggregate weather, personnel, and task data to forecast safety incidents, flagging risks such as insufficient supervision. Emerging vision-language models such as GPT‑4o and Gemini now demonstrate high hazard-identification accuracy (BERTScore of roughly 0.90), though real-time deployment remains challenging.
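To make the idea concrete, here is a minimal sketch of how such a computer-vision monitor might flag missing PPE on a camera feed. The model weights, class names, and stream URL are hypothetical placeholders, not any vendor's actual product:

```python
# Minimal sketch: flag workers missing required PPE in a site camera feed.
# Assumes a YOLO model fine-tuned on PPE classes; "ppe_detector.pt" and the
# class names below are hypothetical placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("ppe_detector.pt")          # hypothetical custom-trained weights
REQUIRED = {"hard_hat", "safety_vest"}   # assumed class names

cap = cv2.VideoCapture("rtsp://site-camera/stream")  # placeholder URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    detected = {result.names[int(c)] for c in result.boxes.cls}
    # Alert when a person is visible but a required PPE class is not.
    if "person" in detected and not REQUIRED <= detected:
        missing = ", ".join(sorted(REQUIRED - detected))
        print(f"ALERT: possible missing PPE: {missing}")
cap.release()
```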
Autonomous equipment and robotics are also making their mark. Companies like Built Robotics retrofit heavy equipment such as excavators and bulldozers for autonomous excavation and grading. Robots are performing tasks like bricklaying, drywall installation, surveying, and demolition, often with precision that exceeds human performance. Productivity gains of 20–22% and material waste reductions of up to 30% are being reported. In the UK, bricklaying robots are being trialed to assemble facades using dual-arm systems that can lay roughly 500 bricks per shift with minimal supervision.
Predictive scheduling and budget forecasting powered by machine learning are also gaining traction. By incorporating variables such as weather patterns, labor availability, supplier lead times, and live site progress, these tools can reduce planning errors by up to 20% and improve scheduling accuracy by 35%.
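As a rough illustration of what sits behind such forecasts, the sketch below fits gradient-boosted trees to synthetic planning data; the feature names and the synthetic target are illustrative assumptions, not any vendor's pipeline:

```python
# Sketch: predicting task-duration overrun (days) from planning features.
# The features and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 10, n),     # forecast rain days in the task window
    rng.uniform(0.5, 1.0, n),  # crew availability ratio
    rng.uniform(1, 30, n),     # supplier lead time (days)
])
# Synthetic target: overruns grow with rain and lead times, shrink with staffing.
y = 2.0 * X[:, 0] + 0.3 * X[:, 2] - 8.0 * X[:, 1] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out tasks: {model.score(X_test, y_test):.2f}")
```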
Quality assurance has seen similar benefits. AI-powered drones and stationary cameras perform inspections up to three times faster than manual checks, detecting misalignments and material defects with nearly 90% accuracy. Early detection of defects reduces rework and prevents costly downstream errors, improving both efficiency and safety.
Beyond the job site, AI is transforming the design and planning phase. Generative design platforms now evaluate thousands of permutations under constraints like structural integrity, cost limits, and sustainability goals. This accelerates the creation of optimized solutions that human designers might overlook. Case studies have shown AI can reduce tender analysis times and surface critical safety optimizations in large-scale infrastructure projects. Integration with Building Information Modeling (BIM) platforms allows AI to automate clash detection, simulate construction sequences, and estimate resource usage. These capabilities improve coordination accuracy by 30%, cut estimate errors by 25%, and reduce manual planning workloads.
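At its core, clash detection is a geometric intersection test between modeled elements. The sketch below shows the standard first-pass filter, axis-aligned bounding-box overlap, using made-up element names; production BIM tools refine these candidate pairs with exact solid geometry:

```python
# Sketch: first-pass clash detection via axis-aligned bounding-box overlap.
# Element names and coordinates are made up for illustration.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Element:
    name: str
    min_pt: tuple  # (x, y, z) lower corner in model coordinates
    max_pt: tuple  # (x, y, z) upper corner

def boxes_overlap(a: Element, b: Element) -> bool:
    # Two boxes clash only if their extents overlap on every axis.
    return all(a.min_pt[i] < b.max_pt[i] and b.min_pt[i] < a.max_pt[i]
               for i in range(3))

elements = [
    Element("duct_17", (0, 0, 3.0), (5, 0.5, 3.5)),
    Element("beam_B2", (2, -1, 3.2), (3, 2, 3.6)),
    Element("pipe_40", (6, 0, 1.0), (9, 0.2, 1.2)),
]
for a, b in combinations(elements, 2):
    if boxes_overlap(a, b):
        print(f"Clash candidate: {a.name} vs {b.name}")
```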
Workforce augmentation is another critical dimension. AI is not eliminating skilled trades; instead, it is changing the nature of their work. Robots handle repetitive or hazardous tasks, such as excavation and heavy lifting, while humans focus on craftsmanship, complex assemblies, and finishing. Pilot programs emphasize that skilled oversight remains essential, particularly in modular construction and micro-factory workflows.
As AI proliferates, digital literacy and new skillsets are becoming indispensable. Workers now benefit from training in data interpretation, human-machine collaboration, and even prompt engineering for AI tools. Virtual and augmented reality training simulations, along with AI-based safety modules, have been shown to improve compliance rates by 20–40%, and roughly 60% of firms now invest in AI upskilling initiatives.
However, challenges remain. Safety monitoring often involves wearable sensors, cameras, or GPS tracking, raising valid concerns about privacy and surveillance. Broader adoption also introduces cybersecurity and data ownership issues, as sensitive project and personnel information resides on cloud-connected devices. Ethical considerations include job displacement, transparency in automated decisions, liability for AI errors, and maintaining worker trust. Legacy system integration also presents hurdles, as many firms still rely on paper-based workflows and disconnected software tools, requiring careful change management for full AI implementation.
Looking ahead, AI will play a central role in developing smart cities and infrastructure. Traffic-responsive roads, energy-aware street lighting, and predictive maintenance for bridges and utilities are becoming feasible. Early AI integration in new developments can prevent expensive retrofits later, as shown in Australian urban pilots where smart poles and sensors improved both safety and pedestrian traffic. Sustainability is another frontier: AI-driven HVAC optimization in a Manhattan tower cut energy use by approximately 15.8%, saving $42,000 annually and reducing emissions by 37 tons of CO₂. Combined with generative design and modular building, these advances point toward large-scale decarbonization.
Artificial intelligence is advancing from the margins to the mainstream of the construction industry. Its influence spans pre-design optimization, job site safety, workforce collaboration, and long-term sustainability. While privacy, ethics, and legacy integration remain challenges, the trajectory is clear: firms using AI are reporting efficiency gains, risk reduction, and measurable environmental benefits. As adoption rises from roughly 35% of firms toward saturation, a smarter, safer, and more sustainable built environment is not just possible; it is imminent.
AI Research
Delaware Partnership to Build AI Skills in Students, Workers

Delaware has announced a partnership with OpenAI on its certification program, which aims to build AI skills in the state among students and workers alike.
The Diamond State’s officials have been exploring how to move forward responsibly with AI, establishing a generative AI policy this year to guide safe use among public-sector employees, a move one official called the “first step” toward informing employees about acceptable AI use. The Delaware Artificial Intelligence Commission also took action this year to advance a “sandbox” environment for testing new AI technologies, including agentic AI; the sandbox model has proven valuable for governments across the U.S., from San Jose to Utah.
The OpenAI Certification Program aims to address a common challenge for states: fostering AI literacy in the workforce and among students. It builds on the OpenAI Academy, an open-to-all initiative launched in an effort to democratize knowledge about AI. The initiative’s expansion will enable the company to offer certifications based upon levels of AI fluency, from the basics to prompt engineering. The company is committing to certifying 10 million Americans by 2030.
“As a former teacher, I know how important it is to give our students every advantage,” Gov. Matt Meyer said in a statement. “As Governor, I know our economy depends on workers being ready for the jobs of the future, no matter their zip code.”
The partnership will start with early-stage programming across schools and workforce training programs in Delaware in an effort led by the state’s new Office of Workforce Development, which was created earlier this year. The office will work with schools, colleges and employers in coming months to identify pilot opportunities for this programming, to ensure that every community in the state has access.
Because the program is in its early stages and Delaware is one of the first states to join, the state will play a role in shaping how certifications are rolled out at the community level, per its announcement.
“We’ll obviously use AI to teach AI: anyone will be able to prepare for the certification in ChatGPT’s Study mode and become certified without leaving the app,” OpenAI’s CEO of Applications Fidji Simo said in an article.
This announcement comes on the heels of the federal AI Action Plan’s release. The plan, which among other provisions could limit states’ regulatory authority, aims to invest in skills training and AI literacy.
“By boosting AI literacy and investing in skills training, we’re equipping hardworking Americans with the tools they need to lead and succeed in this new era,” U.S. Secretary of Labor Lori Chavez-DeRemer said in a statement about the federal plan.
Delaware’s partnership with OpenAI for its certification program mirrors this goal, equipping Delawareans with the knowledge to use these tools — in the classroom, in their careers and beyond.
AI skills are a critical part of broader digital literacy efforts; today, “even basic digital skills include AI,” National Digital Inclusion Alliance Director Angela Siefer said earlier this summer.
AI Research
The End of Chain-of-Thought? CoreThink and University of California Researchers Propose a Paradigm Shift in AI Reasoning

For years, the race in artificial intelligence has been about scale. Bigger models, more GPUs, longer prompts. OpenAI, Anthropic, and Google have led the charge with massive large language models (LLMs), reinforcement learning fine-tuning, and chain-of-thought prompting—techniques designed to simulate reasoning by spelling out step-by-step answers.
But a new technical white paper titled CoreThink: A Symbolic Reasoning Layer to reason over Long Horizon Tasks with LLMs from CoreThink AI and University of California researchers argues that this paradigm may be reaching its ceiling. The authors make a provocative claim: LLMs are powerful statistical text generators, but they are not reasoning engines. And chain-of-thought, the method most often used to suggest otherwise, is more performance theater than genuine logic.
In response, the team introduces General Symbolics, a neuro-symbolic reasoning layer designed to plug into existing models. Their evaluations show dramatic improvements across a wide range of reasoning benchmarks—achieved without retraining or additional GPU cost. If validated, this approach could mark a turning point in how AI systems are designed for logic and decision-making.
What Is Chain-of-Thought — and Why It Matters
Chain-of-thought (CoT) prompting has become one of the most widely adopted techniques in modern AI. By asking a model to write out its reasoning steps before delivering an answer, researchers found they could often improve benchmark scores in areas like mathematics, coding, and planning. On the surface, it seemed like a breakthrough.
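The entire intervention happens in the prompt; a minimal illustration, with an invented question, looks like this:

```python
# Illustration only: chain-of-thought changes nothing but the prompt text.
question = (
    "A crew lays 500 bricks per shift. "
    "How many shifts are needed for 12,000 bricks?"
)

direct_prompt = question  # the model answers immediately, e.g. "24"

# Appending a step-by-step instruction elicits written intermediate steps.
cot_prompt = question + "\nLet's think step by step, then state the final answer."
# A typical CoT response: "12,000 / 500 = 24. Final answer: 24 shifts."
# Those written steps often raise benchmark scores, but they may not reflect
# what the model actually computed -- the crux of the critique below.
print(cot_prompt)
```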
Yet the report underscores the limitations of this approach. CoT explanations may look convincing, but studies show they are often unfaithful to what the model actually computed, rationalizing outputs after the fact rather than revealing true logic. This creates real-world risks. In medicine, a plausible narrative may mask reliance on spurious correlations, leading to dangerous misdiagnoses. In law, fabricated rationales could be mistaken for genuine justifications, threatening due process and accountability.
The paper further highlights inefficiency: CoT chains often grow excessively long on simple problems, while collapsing into shallow reasoning on complex ones. The result is wasted computation and, in many cases, reduced accuracy. The authors conclude that chain-of-thought is “performative, not mechanistic”—a surface-level display that creates the illusion of interpretability without delivering it.
Symbolic AI: From Early Dreams to New Revivals
The critique of CoT invites a look back at the history of symbolic AI. In its earliest decades, AI research revolved around rule-based systems that encoded knowledge in explicit logical form. Expert systems like MYCIN attempted to diagnose illnesses by applying hand-crafted rules, and fraud detection systems relied on vast logic sets to catch anomalies.
Symbolic AI had undeniable strengths: every step of its reasoning was transparent and traceable. But these systems were brittle. Encoding tens of thousands of rules required immense labor, and they struggled when faced with novel situations. Critics like Hubert Dreyfus argued that human intelligence depends on tacit, context-driven know-how that no rule set could capture. By the 1990s, symbolic approaches gave way to data-driven neural networks.
In recent years, there has been a renewed effort to combine the strengths of both worlds through neuro-symbolic AI. The idea is straightforward: let neural networks handle messy, perceptual inputs like images or text, while symbolic modules provide structured reasoning and logical guarantees. But most of these hybrids have struggled with integration. Symbolic backbones were too rigid, while neural modules often undermined consistency. The result was complex, heavy systems that failed to deliver the promised interpretability.
General Symbolics: A New Reasoning Layer
CoreThink’s General Symbolics Reasoner (GSR) aims to overcome these limitations with a different approach. Instead of translating language into rigid formal structures or high-dimensional embeddings, GSR operates entirely within natural language itself. Every step of reasoning is expressed in words, ensuring that context, nuance, and modality are preserved. This means that differences like “must” versus “should” are carried through the reasoning process, rather than abstracted away.
The framework works by parsing inputs natively in natural language, applying logical constraints through linguistic transformations, and producing verbatim reasoning traces that remain fully human-readable. When contradictions or errors appear, they are surfaced directly in the reasoning path, allowing for transparency and debugging. To remain efficient, the system prunes unnecessary steps, enabling stable long-horizon reasoning without GPU scaling.
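CoreThink has not released an implementation, so the sketch below is only a guess at the shape such a layer could take: reasoning steps kept as plain-language strings, a consistency check that surfaces contradictions in the trace, and pruning of redundant steps. Every name here is invented for illustration:

```python
# Hypothetical sketch of a natural-language reasoning loop in the spirit of
# the paper's description. None of these names come from CoreThink; the
# actual GSR implementation has not been published.
from typing import Callable, List, Optional

def reason(task: str,
           propose_step: Callable[[str, List[str]], str],
           find_conflict: Callable[[List[str]], Optional[str]],
           is_redundant: Callable[[str, List[str]], bool],
           max_steps: int = 20) -> List[str]:
    """Build a verbatim, human-readable trace of plain-language steps."""
    trace: List[str] = []
    for _ in range(max_steps):
        step = propose_step(task, trace)   # e.g., delegated to a base LLM
        if is_redundant(step, trace):      # prune to keep long horizons stable
            continue
        trace.append(step)
        conflict = find_conflict(trace)    # linguistic consistency check
        if conflict is not None:
            # Surface the contradiction in the trace itself for debugging,
            # then drop the offending step so the next proposal can revise it.
            trace.append(f"CONTRADICTION: {conflict}")
            trace.pop(-2)
            continue
        if step.lower().startswith("final answer"):
            break
    return trace
```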
Because it acts as a layer rather than requiring retraining, GSR can be applied to existing base models. In evaluations, it consistently delivered accuracy improvements of between 30 and 60 percent across reasoning tasks, all without increasing training costs.
Benchmark Results
The improvements are best illustrated through benchmarks. On LiveCodeBench v6, which evaluates competition-grade coding problems, CoreThink achieved a 66.6 percent pass rate—substantially higher than leading models in its category. In SWE-Bench Lite, a benchmark for real-world bug fixing drawn from GitHub repositories, the system reached 62.3 percent accuracy, the highest result yet reported. And on ARC-AGI-2, one of the most demanding tests of abstract reasoning, it scored 24.4 percent, far surpassing frontier models like Claude and Gemini, which remain below 6 percent.
These numbers reflect more than raw accuracy. In detailed case studies, the symbolic layer enabled models to act differently. In scikit-learn’s ColumnTransformer, for instance, a baseline model proposed a superficial patch that masked the error. The CoreThink-augmented system instead identified the synchronization problem at the root and fixed it comprehensively. On a difficult LeetCode challenge, the base model misapplied dynamic programming and failed entirely, while the symbolic reasoning layer corrected the flawed state representation and produced a working solution.
How It Fits into the Symbolic Revival
General Symbolics joins a growing movement of attempts to bring structure back into AI reasoning. Classic symbolic AI showed the value of transparency but could not adapt to novelty. Traditional neuro-symbolic hybrids promised balance but often became unwieldy. Planner stacks that bolted search onto LLMs offered early hope but collapsed under complexity as tasks scaled.
Recent advances point to the potential of new hybrids. DeepMind’s AlphaGeometry, for instance, has demonstrated that symbolic structures can outperform pure neural models on geometry problems. CoreThink’s approach extends this trend. In its ARC-AGI pipeline, deterministic object detection and symbolic pattern abstraction are combined with neural execution, producing results far beyond those of LLM-only systems. In tool use, the symbolic layer helps maintain context and enforce constraints, allowing for more reliable multi-turn planning.
The key distinction is that General Symbolics does not rely on rigid logic or massive retraining. By reasoning directly in language, it remains flexible while preserving interpretability. This makes it lighter than earlier hybrids and, crucially, practical for integration into enterprise applications.
Why It Matters
If chain-of-thought is an illusion of reasoning, then the AI industry faces a pressing challenge. Enterprises cannot depend on systems that only appear to reason, especially in high-stakes environments like medicine, law, and finance. The paper suggests that real progress will come not from scaling models further, but from rethinking the foundations of reasoning itself.
General Symbolics is one such foundation. It offers a lightweight, interpretable layer that can enhance existing models without retraining, producing genuine reasoning improvements rather than surface-level narratives. For the broader AI community, it marks a possible paradigm shift: a return of symbolic reasoning, not as brittle rule sets, but as a flexible companion to neural learning.
As the authors put it: “We don’t need to add more parameters to get better reasoning—we need to rethink the foundations.”
AI Research
What It Means for State and Local Projects

To lead the world in the AI race, President Donald Trump says the U.S. will need to “triple” the amount of electricity it produces. At a cabinet meeting on Aug. 26, he made it clear his administration’s policy is to favor fossil fuels and nuclear energy, while dismissing solar and wind power.
“Windmills, we’re just not going to allow them. They ruin our country,” Trump said at the meeting. “They’re ugly, they don’t work, they kill your birds, they’re bad for the environment.”
He added that he also didn’t like solar because of the space it takes up on land that could be used for farming.
“Whether we like it or not, fossil fuel is the thing that works,” said Trump. “We’re going to fire up those big monster factories.”
In the same meeting, he showcased a photo of what he said was a $50 billion mega data center planned for Louisiana, provided by Mark Zuckerberg.
But there’s a reason coal-fired power plants have been closing at a rapid pace for years: cost. According to the think tank Energy Innovation, coal power in the U.S. tends to cost more to run than renewables. Before Trump’s second term, the U.S. Department of Energy publicized a strategy to support new energy demand for AI with renewable sources, writing that “solar energy, land-based wind energy, battery storage and energy efficiency are some of the most rapidly scalable and cost competitive ways to meet increased electricity demand from data centers.”
Further, many governments examining how to use AI also have climate pledges in place to reduce their greenhouse gas emissions — including states such as North Carolina and California.
Earlier this year Trump signed an executive order, “Reinvigorating America’s Beautiful Clean Coal Industry and Amending Executive Order 14241,” directing the secretaries of the Interior, Commerce and Energy to identify regions where coal-powered infrastructure is available and suitable for supporting AI.
A separate executive order, “Accelerating Federal Permitting of Data Center Infrastructure,” shifts the power to the federal government to ensure that new AI infrastructure, fueled by specific energy sources, is built quickly by “easing federal regulatory burdens.”
In an interview with Government Technology, a representative of Core Natural Resources, a U.S.-based mining and mineral resource company, explained that this federal shift will be a “resurgency for the industry,” stressing that coal is “uniquely positioned” to fill the energy need AI will create.
“If you’re looking to generate large amounts of energy that these data centers are going to require, you need to focus on energy sources that are going to be able to meet that demand without sacrificing the power prices for the consumers,” said Matthew Mackowiak, director of government affairs at Core.
“It’s going to be what powers the future, especially when you look at this demand growth over the next few years,” said Mackowiak.
Yet these plans for the future, including increased reliance on fossil fuels like coal and the construction of mega data centers, may not be what the public is willing to accept. According to the International Energy Agency, a typical AI-focused data center consumes as much electricity as 100,000 households, while larger ones currently under construction may consume 20 times as much.
A recent report from Data Center Watch suggests that local activism is threatening to derail a potential data center boom.
According to the research firm, $18 billion worth of data center projects have been blocked, while $46 billion of projects were delayed over the last two years in situations where there was opposition from residents and activist groups. Common arguments against the centers are higher utility bills, water consumption, noise, impact on property value and green space preservation.
The movement may put state and local governments in the middle of a clash between federal directives and backlash from their communities. Last month in Tucson, Ariz., City Council members voted against a proposed data center project, due in large part to public pressure from residents with fears about its water usage.
St. Charles, Mo., recently considered a one-year moratorium on proposed data centers, pausing the acceptance of zoning change applications and the issuing of building permits following a wave of opposition from residents.
This debate may hit a fever pitch as many state and local governments are also piloting or launching their own programs powered by AI, from traffic management systems to new citizen portals.
As the AI energy debate heats up, local leaders could be in for some challenging choices. As Mackowiak of Core Natural Resources noted, officials have a “tough job, listening to constituents and trying to do what’s best.” He asserted that officials should consider “resource adequacy,” adding that “access to affordable, reliable, dependable power is first and foremost when it comes to a healthy economy and national security.”
The ultimate question for government leaders is not just whether they can meet the energy demands of a private data center, but how the public’s perception of this new energy future will affect their own technology goals. If citizens begin to associate AI with contentious projects and controversial energy sources, it could create a ripple effect of distrust, disrupting the potential of the technology regardless of the benefits.
Ben Miller contributed to this story.