Construction AI in 2026: Bias, Cost, and Who Can Afford to Build

Analysis · AI in Construction · 5 min read

A senior AI director at a major construction software firm recently sat down to talk about bias in construction AI — what causes it, why the easy fixes don’t work, and why heavy-handed regulation could end up locking the smaller players out entirely. Read again a few weeks later, the interview raises a more interesting question, one she only half-answered: who can actually afford to build construction AI in 2026, and what happens to the rest of the industry if the answer keeps narrowing?

§ 01 · What The Interview Actually Said

Bias is not a bug. It’s a starting condition.

The interview — with Linnea Hagström, a director of AI at a major construction software firm — ran in early 2025 and was, on its face, about bias in construction AI. Her opening point was sharp and worth restating: in machine learning, some level of inductive bias is necessary for the models to function at all. A model that does not bias toward plausible scenarios is a model that drowns in infinite possibilities. The harmful version of bias is something else — incomplete training data, skewed coverage, or models trained on one jurisdiction’s data being deployed in another. Geospatial mapping models trained without representation from remote areas produce poor maps. Compliance algorithms trained on data from a single regulatory regime miss or overflag issues when applied elsewhere. The construction-AI industry, she argued, has a data quality problem dressed up as a fairness problem.

The most interesting thing she said, though, was not about bias. It was about who can afford to build the technology at all. The European AI Act is an important step, she allowed, but the regulatory burden has to be calibrated carefully. “Only companies like Microsoft, Google, OpenAI, and a few others developing this technology will be capable of managing it,” she said. “Smaller startups, nonprofits, or even universities might not then be able to, because the regulatory burden is so heavy.” The construction industry, in her telling, is heading into a phase where the AI capability gets concentrated in a handful of well-capitalised players, and the smaller firms working in modular construction, regional civils, self-build, and adjacent sectors get left to either license that capability or do without.

She is half right. The regulatory dimension is real. But the more immediate constraint on smaller construction-AI players is not the AI Act. It is the cost of running the models themselves.

§ 02 · The Cost Constraint The Interview Skipped

Construction AI got expensive faster than anyone in the industry expected.

Hagström did not put a number on it, but her broader argument lives or dies on the cost economics of running construction AI at scale, and those economics have moved sharply against smaller players over the past two years. The compute cost of training and serving the kind of computer-vision models used for site monitoring, defect detection, and progress tracking has compounded. The inference cost of running generative AI for design assistance, document review, and compliance checking has compounded faster. A construction-AI startup running production workloads on AWS, Azure, or Google Cloud is now routinely spending the kind of money on inference that traditional software vendors at similar scale spent on hosting alone three years ago. The Trimbles and Autodesks and Procores of the world have the balance sheets to absorb that. The smaller players do not.

The visible response across the construction-AI sector has been a quiet shift in procurement behaviour. Companies are renegotiating enterprise agreements with hyperscalers for the larger discount tiers, then routinely over-committing and ending the year with significant unused balances. They are routing more workloads to cheaper models for routine tasks and reserving the frontier models for the cases that actually need them. And, increasingly, they are participating in secondary markets for unused cloud and AI credits — a way of recovering value from over-committed enterprise agreements and, on the buying side, getting access to model APIs at meaningful discounts to the rack rate. Marketplaces like AICreditMart.com now let construction technology buyers buy cheap Gemini API access and other unused major-provider credits from sellers with leftover balances, freeing capital that would otherwise sit dormant in commitments the original buyer cannot fully use. None of this featured in Hagström’s interview because it is the kind of unglamorous procurement detail that does not fit comfortably into a thought-leadership conversation about bias and regulation. It is also the kind of detail that increasingly determines which construction-AI startups survive their Series A.
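The workload-routing behaviour described above can be sketched in a few lines. The model names, tier labels, and per-token prices below are illustrative placeholders, not quotes from any provider; the point is the shape of the pattern, not the numbers.

```python
# Hypothetical model tiers. Names and per-1k-token prices are placeholders,
# not real provider rates.
TIERS = {
    "budget":   {"model": "small-model-v1",   "usd_per_1k_tokens": 0.0002},
    "mid":      {"model": "mid-model-v2",     "usd_per_1k_tokens": 0.003},
    "frontier": {"model": "frontier-model-x", "usd_per_1k_tokens": 0.03},
}

# Route routine tasks to cheaper tiers; reserve the frontier tier for the
# few task types judged to actually need it.
ROUTING = {
    "rfi_triage": "budget",
    "document_summary": "budget",
    "compliance_check": "mid",
    "structural_design_review": "frontier",
}

def route(task_type: str) -> dict:
    """Pick the cheapest tier judged viable for this task type."""
    tier = ROUTING.get(task_type, "mid")  # unknown tasks default to mid
    return TIERS[tier]

def monthly_cost(workload: dict) -> float:
    """Estimate monthly spend for a {task_type: tokens} workload."""
    return sum(
        route(task)["usd_per_1k_tokens"] * tokens / 1000
        for task, tokens in workload.items()
    )
```

Under these assumed prices, a million tokens of RFI triage costs twenty cents on the budget tier versus thirty dollars on the frontier tier — which is the entire argument for routing.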

— Construction AI in numbers —

14%

Of construction tech VC funding now flowing to AI & automation tooling

15%

Going to site monitoring & safety, much of it AI-driven

9%

Going to robotics & automation, much of it AI-trained

~$1.4T

Global construction tech market by 2030 on current trajectories

Between the direct AI and automation share and the AI-driven portions of site monitoring and robotics, roughly one-fifth of construction tech investment now flows into AI workloads. The compute bill scales with it.

§ 03 · The Human-Machine Question

Construction is not ready for fully autonomous AI decisions. It probably never will be.

The most useful thing Hagström said about deployment came near the end of the interview, when she addressed the human-machine dynamic directly. AI does not operate in isolation. Most industries, including construction, are nowhere near the point where they are comfortable with fully autonomous decision-making by an AI system. Human oversight remains essential — whether as a final checkpoint on a model output, as part of a broader feedback loop, or as the entity that takes legal responsibility when something goes wrong. This is not an interim state to be optimised away. It is the structural reality of deploying probabilistic systems inside an industry where mistakes cost lives.

The accountability point matters more than the interview made clear. “Responsibility is shared between developers, users, and society at large,” she argued. That is true in a philosophical sense but operationally inadequate. Construction is a regulated industry with legal duties of care that do not distribute neatly across a hyperscaler, a model vendor, an integrator, a contractor, and an end user. When an AI-driven defect-detection system misses a structural defect and a wall collapses two years later, the question of who is liable is a real legal question that current frameworks answer badly. Until that question is resolved, prudent contractors will keep humans firmly in the loop — not because the AI is bad, but because the law has not caught up to what the AI can do.

The combination of all of this — bias risks, cost pressure, regulatory uncertainty, and unresolved liability — describes a construction-AI industry that is genuinely useful in narrow applications, expensive to run at scale, and structurally biased toward the largest players who can afford the compute, the compliance, and the legal exposure. The smaller firms can play in this space, but they have to be tactical about it. That is the read of the interview that the interview itself did not quite articulate, and that is where the industry conversation about construction AI in 2026 actually sits.

— Reader Questions —

Fifteen questions on AI in construction, answered plainly.

What is bias in construction AI?

Bias in construction AI usually means one of two things. The first is inductive bias — the necessary assumptions a model makes to function at all, which is not a problem. The second is harmful bias from incomplete or skewed training data — geospatial models lacking remote-area coverage, compliance models trained on a single jurisdiction, defect-detection models trained on only certain building typologies. The second kind produces real errors and real liability.

Where does construction AI get used today?

Site monitoring with computer vision, defect detection, progress tracking, generative design and layout, predictive scheduling, automated cost estimation for repeat typologies, compliance checking against building regulations, and the long tail of internal tools that handle document review, RFI triage, and submittal management. The use cases are real; the maturity varies by category.

Is AI cheap to run for a construction tech firm?

No. A construction-AI firm running computer-vision models for site monitoring or generative AI for design assistance can routinely spend the equivalent of several engineering salaries per month on cloud inference alone. The cost scales with usage rather than with seats, which makes traditional SaaS pricing assumptions break down quickly.
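The seat-pricing breakdown is simple arithmetic, sketched below. Every figure is an assumption for illustration — the seat price and per-query inference cost are invented, not vendor rates.

```python
# Illustrative arithmetic only: all figures are assumptions, not vendor rates.
SEAT_PRICE = 100.0                # monthly revenue per seat (assumed)
INFERENCE_COST_PER_QUERY = 0.05   # blended cloud inference cost (assumed)

def gross_margin(seats: int, queries_per_seat: int) -> float:
    """Monthly margin: fixed seat revenue minus usage-driven inference cost."""
    revenue = seats * SEAT_PRICE
    cost = seats * queries_per_seat * INFERENCE_COST_PER_QUERY
    return revenue - cost

print(gross_margin(100, 200))    # light usage: 10000 - 1000 = 9000.0
print(gross_margin(100, 2000))   # heavy usage: 10000 - 10000 = 0.0
```

Revenue is fixed per seat while cost scales with queries, so a tenfold jump in per-seat usage erases the margin entirely — the failure mode the answer above describes.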

Why are smaller construction tech firms struggling with AI costs?

Because the unit economics favour scale. Hyperscalers offer significant discounts on prepaid commitment tiers, which the largest construction tech firms can absorb but smaller ones cannot. Compounding this, frontier model APIs are priced at rack rates that strain mid-market budgets quickly once usage scales.

What is a secondary market for cloud or AI credits?

A marketplace that matches buyers and sellers of unused enterprise cloud or AI credits. A company sitting on unused capacity from an over-committed annual agreement can sell it at a discount; another company looking for cheaper access can buy it below the retail rate. The marketplaces structure the transactions to respect underlying provider terms.

Is regulation likely to limit AI in construction?

It already is, modestly, through the EU AI Act and adjacent frameworks. The risk Hagström flags in her interview is that overregulation tilts the playing field toward the largest companies who can afford the compliance overhead, leaving smaller construction-AI players, nonprofits, and academic groups unable to participate at the same level.

Does the EU AI Act apply to construction software?

Selectively. Most construction AI use cases are classified as low or limited risk under the Act, with proportionate transparency and documentation obligations. A small number of use cases — particularly anything that affects critical infrastructure safety decisions — could fall into higher-risk categories with significantly heavier compliance requirements.

How big is bias likely to be as a real-world problem?

Significant in specific applications, exaggerated in others. Geospatial mapping, compliance algorithms applied across jurisdictions, and computer-vision models trained on unrepresentative building stock are all places where bias produces real downstream errors. The general-purpose generative AI applications used for document drafting and design ideation are less affected.

Will AI replace construction professionals?

No, but it will compress the work. Routine tasks — document review, basic estimating, schedule maintenance, snag list management — are being automated quickly. Judgement-heavy work — specification, complex design decisions, on-site problem-solving, client management — remains firmly with humans. The staffing pyramid of a construction firm gets narrower at the bottom.

Are autonomous AI decisions being made on construction sites?

Almost never, and probably never should be in safety-critical contexts. Current best practice keeps a human in the loop for every consequential decision, with AI providing analysis, flagging, and recommendation rather than autonomous action. The legal accountability frameworks have not caught up to what would be technically possible.

Who is liable if a construction AI system makes a mistake?

Unresolved, and that ambiguity is itself one of the structural reasons human oversight remains essential. Liability could plausibly attach to the model vendor, the integrator, the contractor using the system, or the client commissioning the work, depending on how the system was specified and used. The legal frameworks are still being negotiated.

What is generative design in construction?

A category of AI-assisted design tool that produces multiple layout options based on constraints — site dimensions, programme requirements, energy performance targets, structural limits — rather than relying on a designer to draft each option manually. It is most useful in repetitive typologies (mass housing, data centres, modular structures) and least useful for bespoke architectural work.

Is computer vision for site monitoring actually working?

In production, yes — particularly on larger commercial and infrastructure projects where the per-project economics justify the deployment. Detection of PPE compliance, hazard zones, and progress against schedule is mature enough to support real workflows. Defect detection is more variable and still benefits significantly from human review.

What should a small construction tech firm do about AI costs?

Three things. Route each task to the cheapest viable model rather than defaulting to the frontier model. Negotiate commitment tiers carefully and avoid optimistic forecasts that produce unused balances. Consider secondary markets for unused enterprise credits when capacity-management opportunities arise. The combined effect can reduce effective AI spend by 30 to 50 per cent without losing capability.
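The 30-to-50-per-cent range can be checked with back-of-envelope arithmetic: the three levers compound multiplicatively. The individual saving rates below are assumptions chosen for illustration, not measured benchmarks.

```python
# Back-of-envelope sketch: three savings levers applied multiplicatively.
# Each rate is an assumption for illustration, not a measured figure.
def effective_spend(base_spend: float,
                    routing_saving: float = 0.25,   # cheaper models for routine work
                    commit_saving: float = 0.15,    # right-sized commitment discounts
                    credit_saving: float = 0.10) -> float:
    """Apply each saving lever in turn to a baseline monthly spend."""
    spend = base_spend
    for saving in (routing_saving, commit_saving, credit_saving):
        spend *= (1 - saving)
    return spend

# 0.75 * 0.85 * 0.90 leaves about 57% of baseline spend, i.e. a reduction
# of roughly 43% — inside the 30-to-50-per-cent range claimed above.
remaining = effective_spend(100_000)
```

Because the levers compound rather than add, three moderate savings reach the claimed range without any single lever needing to be dramatic.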

Where does construction AI go from here?

Continued maturation in narrow, well-defined use cases. Slower progress in autonomous decision-making, blocked by liability frameworks rather than technical capability. Widening cost gap between the largest construction tech firms and the smaller players. Continued regulatory tightening, particularly in the EU. The interesting outcomes will be defined less by the technology itself than by who can afford to build with it.

— Editor’s Note —

On reading thought-leadership interviews carefully.

Industry interviews from senior figures at construction technology vendors are useful primary sources, particularly when the speaker is well-positioned. They are also genre pieces with their own conventions — the careful regulatory framing, the optimistic-but-cautious tone, the diplomatic acknowledgement of complexity. Reading them attentively means listening as much for what is not said as for what is. The interview that prompted this analysis was substantive and worth its run; this commentary is intended as a complement to that source material, not a substitute for it.

Right to Build Portal is editorially independent. Names of individuals and firms in this commentary have been changed where attribution would imply endorsement that does not exist. The framings, interpretations, and structural reads in this article are our own.
