
How to Be Smart About Using AI in Construction

To get up to date on the risks and possible downsides of using artificial intelligence in construction, ENR turned to two construction lawyers with extensive experience in the industry and with issues related to AI: Christopher Horton, a partner at Smith Currie Oles LLP, and Peyton Aldrich, an associate at the firm. They spoke with Correspondent Elaine Silver about possible inaccuracies, how to sharpen queries and what they call a ‘defensible trail.’
What is the most important thing construction companies need to know about using AI?
Horton: Treat AI like a scalpel and use it strategically for specific inefficiencies, not as a catch-all solution. Take the time to conduct due diligence before adopting AI to understand the tool’s security, capabilities, and limitations.
Aldrich: Don’t forget that AI is a tool, not a replacement for human oversight. Without proper checks, you risk relying on inaccurate outputs that could lead to costly mistakes.
There are plenty of strengths and advantages, right?
Horton: AI shines in streamlining project planning, like crafting bid package narratives or reviewing proposals to spot improvements. Our clients are also using it for note-taking in meetings, though security must be ensured. It’s important to vet tools for security and implement them with clear guidelines to maximize efficiency.
So that means what?
Aldrich: Companies should train staff to use AI critically and set policies to avoid ad hoc use. Define the tasks you want to automate, evaluate tools carefully, and provide training on inputting data and spotting errors. Here’s a personal example: I use the Westlaw AI feature a lot. When I’m starting a research project on an issue I don’t know well, I take the example prompts Westlaw provides, the ones that show how its AI works and how to ask the best kind of question, and I copy and paste them into ChatGPT and say, this is the issue I’m trying to look up.
So I use one AI to help me generate the question that will get the most accurate response from another.
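For readers who want to picture that hand-off, here is a minimal sketch of the same prompt-refinement idea using the OpenAI Python client. The model names are illustrative, and the “prompt coach” step stands in for whatever tool suggests better prompts; this is an assumption-laden illustration, not either firm’s actual workflow or a Westlaw integration.

```python
# A minimal sketch of the "one AI refines the prompt for another" workflow.
# Assumes the OpenAI Python client (openai >= 1.0); model names are illustrative,
# and the "prompt coach" step stands in for any tool that suggests better prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

issue = "Enforceability of a no-damages-for-delay clause on a public project"

# Step 1: ask one model to turn the rough issue into a sharper, well-scoped question.
coach = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's research issue as a precise, well-scoped "
                "question, naming the jurisdiction and key terms to search."
            ),
        },
        {"role": "user", "content": issue},
    ],
)
refined_question = coach.choices[0].message.content

# Step 2: send the refined question to a second model (or research tool).
answer = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": refined_question}],
)

print(refined_question)
print(answer.choices[0].message.content)
# As the interview stresses: verify whatever comes back against primary sources.
```

The point is the hand-off, not the particular models: the first call only shapes the question, and a human still checks the second call’s answer.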
What are some of the biggest pitfalls?
Horton: Understanding a tool’s security features and whether it’s closed loop or open loop. It’s very important not to put in any information that is confidential, proprietary, or covered by attorney-client or work-product privilege.
Aldrich: And AI can generate biased or inaccurate outputs, misinterpret contract clauses and hallucinate, producing false information such as made-up cases or regulations. That could lead, for example, to picking the wrong bid winner because of flawed data, creating new liabilities. It’s a real risk if you don’t verify the results carefully.
Why is the construction industry particularly vulnerable?
Aldrich: Construction involves complex contracts and high-stakes decisions, so an AI error, like misinterpreting a contract provision, can mean missed deadlines or financial losses. The industry is especially exposed because it deals with lengthy, custom contracts and data-heavy processes like risk assessments and subcontractor evaluations. AI’s tendency to overlook nuanced contract terms or produce biased outputs from incomplete data can have serious consequences, like compliance issues or unfair decisions.
Horton: The industry’s fast-paced nature tempts firms to rely on AI without verification, especially for bids or schedules. But unverified outputs can violate contractual obligations or expose proprietary data, particularly if AI tools aren’t secure.
So to train people to use AI most effectively…?
Horton: Training involves learning the tool’s limits and security features. It’s not just about using AI but knowing when to trust it and when to verify, which comes from structured guidance and policy-driven implementation.
And what is a defensible trail that you mentioned to me?
Aldrich: A defensible trail documents how AI was used, including inputs, outputs, and human oversight. This record strengthens your position in litigation or regulatory reviews, proving consistent and proper procedures were followed.
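One way to picture such a trail is a structured log entry recorded every time an AI tool touches a task. The sketch below is a hypothetical Python illustration; the field names are assumptions about what inputs, outputs and human review might be captured, not a regulatory or firm standard.

```python
# Hypothetical sketch of a "defensible trail" record: a structured log entry
# capturing the inputs, outputs and human oversight for each AI-assisted task.
# Field names are illustrative, not a prescribed legal or regulatory format.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    tool: str             # which approved, vetted AI tool was used
    task: str             # what the AI was asked to do
    prompt: str           # the exact input provided
    output_summary: str   # what the tool returned, or a reference to it
    reviewed_by: str      # the person who verified the output
    review_notes: str     # what was checked and what was corrected
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIUsageRecord(
    tool="ChatGPT (approved, closed-loop configuration)",
    task="Draft bid package narrative outline",
    prompt="Outline a narrative for a mid-rise commercial bid package...",
    output_summary="Six-section outline; schedule assumptions flagged for review",
    reviewed_by="Project executive",
    review_notes="Corrected two schedule assumptions against the master contract",
)

# Append to a simple JSON-lines log that can later be produced in a dispute.
with open("ai_usage_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```

However a company implements it, the value comes from capturing the same three things the lawyers name: the inputs, the outputs and the human review.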
That’s important because…?
Horton: Without a defensible trail, you can’t justify AI-driven decisions in court or arbitration, especially if privileged information is compromised. Opaque AI algorithms also make it hard to defend decisions in disputes, which can increase legal and financial exposure.
What AI policies do you have in your company?
Horton: Our firm has a strict AI policy I developed as AI committee chair. We only use approved tools like Westlaw, ChatGPT, and Microsoft Copilot, ensuring they’re secure and closed loop to protect confidential data. No proprietary or privileged information goes into unverified tools, and we’re testing generative AI like Thomson Reuters’ CoCounsel for future use.
What is the one thing every construction executive needs to do to use AI programs well?
Horton: Verify, verify, verify.