O’Hagan Meyer’s Design Professional Team is pleased to share the latest edition of our newsletter, featuring timely insights on emerging legal and technological risks affecting the design and construction industry. In this issue, we highlight three articles addressing evolving challenges in contract risk allocation and the growing impact of artificial intelligence on litigation and legal decision-making.

  1. The Dangers of Seeking Legal Advice from AI: AI chatbots may offer quick answers, but they can produce inaccurate guidance and expose sensitive information. This article explains the risks to confidentiality and attorney-client privilege when using AI for legal advice.
  2. The New Frontier of Pro Se Litigation: AI tools are enabling pro se litigants to file polished—but often inaccurate—legal pleadings. We examine how courts are responding and what businesses should expect when defending these cases.
  3. Managing Liability Through Defend, Indemnify, and Hold Harmless Agreements: Learn how these common contract clauses shift financial risk, what the duty to defend really means, and how firms can negotiate clearer language to better align obligations with insurance coverage.

As always, our goal is to provide actionable legal insights that help design professionals navigate today’s complex project and litigation landscape. If you have any questions or would like to discuss how these issues may affect your practice, please reply to this newsletter and a member of our team will connect with you.


The Dangers of Seeking Legal Advice from AI

By Robert Ware, Law Clerk, and James Walker, Partner

You might think your firm is heading toward a lawsuit. It’s natural to want quick answers. Today, you might turn to artificial intelligence tools like ChatGPT, Gemini, or Claude for that first bit of guidance. A chatbot can feel like an easy and fast way to get a basic sense of the law, figure out how strong a case might be, or estimate how much legal trouble you might be in. Even after speaking with a lawyer, you might use AI to help make sense of what your attorney said, think through questions to ask next, or brainstorm possible strategies.

The Risks

The most obvious concern with using AI is accuracy. AI can get things wrong. Hundreds of lawyers have been sanctioned in US courts for using AI to write briefs to save time or money, only to learn that the AI “hallucinated”: it cited a case that does not exist, or cited a real case for a proposition of law that appears nowhere in that case. If a chatbot gives bad legal guidance, you might misunderstand your rights, misjudge your exposure, or fail to prepare for a claim or lawsuit. That alone can cause serious problems.

But there is a more insidious problem with using AI for legal research that you probably don’t know about: what feels like a private conversation with a chatbot often isn’t truly private at all. Behind the scenes, there are significant data and privacy issues that could come back to haunt you if a dispute eventually ends up in court. In our view, the biggest danger with using AI for legal guidance is not bad advice. It is that the information you enter into a chatbot is neither private nor confidential.

First, any information you input to an AI chatbot may be discoverable by an opposing party during a lawsuit. That means your questions, theories, musings, what ifs, or admissions typed into a chatbot could end up in the hands of the opposing party and be used against you in court. Second, even engaging a lawyer does not necessarily provide protection once AI enters the picture. Although the attorney-client privilege normally protects communications between lawyers and their clients, a federal court in New York recently held that typing details about legal advice from a lawyer into an AI chatbot in order to learn more about the subject matter waived the confidentiality of the lawyer’s advice.

The Boilerplate

Before you can give an AI chatbot your first prompt, you must agree to the platform’s terms of service. The terms of service of many blue-chip generative AI tools include clauses that allow the platform to collect data on users’ inputs and the platform’s outputs, use that data to train the AI model, and disclose such data to a host of third parties, including governmental regulatory authorities. These terms can sometimes be negotiated, but only for expensive “enterprise” accounts; the ubiquitous $20-per-month general subscription does not make your data confidential.

For example, the terms of service for Claude (Anthropic’s generative AI chatbot) provide that Anthropic may, even in the absence of a subpoena compelling it to do so, “disclose personal data to third parties in connection with claims, disputes[,] or litigation.” Similarly, OpenAI’s policy for ChatGPT states that user interactions may be stored and shared under certain circumstances, including legal processes, noting that the company may disclose information “to comply with applicable law, legal process, or enforceable governmental request.”

The Blowback

In a recent series of cases, federal courts in New York have interpreted these terms of service to mean that AI users do not have “substantial privacy interests” in their interactions with AI platforms. In short, your conversations with AI chatbots are not private, and anything you say to one can easily end up in the hands of the party on the other side of a lawsuit (including the government in a criminal case).

This privacy issue also transcends litigation concerns. All business owners and their employees should be very careful about sharing confidential or proprietary information with AI tools. These platforms’ laissez-faire terms of service enable them to disclose your business opportunities and intellectual property just as easily as your confidential information related to a lawsuit. Even something as innocent as having an AI tool transcribe the notes of a meeting about a not-yet-launched product, or help draft a response to a Request for Proposals, may end up in the hands of the competition.

In sum, don’t tell an AI chatbot anything you wouldn’t want to be read aloud in a courtroom or to come across the desk of your leading competitor. Talk to your lawyer before talking to AI – and be careful with what you do with AI afterwards. The conversation is not private, and the damage done by disclosing confidential information will not be something that your lawyer can fix retroactively.

The Stinger

The attorney-client privilege is designed to protect from disclosure communications between you and an attorney seeking or providing legal advice. This privilege is waived if the holder of the privilege – that’s you – voluntarily discloses the privileged material to a third party. The work product doctrine, in contrast, protects materials prepared by you independently or at the direction of an attorney when you reasonably anticipate litigation. This protection is also waived if the information is shared in a non-confidential forum.

In United States v. Heppner, a financial services executive charged with criminal securities and wire fraud gave the AI chatbot Claude information he had learned from his lawyers so he could learn more about his legal predicament. Heppner subsequently sent documents containing his prompts and Claude’s responses to his lawyers. The FBI seized those documents during a search of Heppner’s home. Heppner argued that the documents should be protected by the attorney–client privilege and/or the work product doctrine. The court held that by sharing the information with Claude, Heppner waived the attorney-client privilege and the protection of the work-product doctrine. In essence, the court ruled that typing questions into a chatbot and capturing the response is the digital equivalent of posting your private information on the jumbotron at a local sporting event for every fan to see.

This is still a developing area of the law intersecting with new technology, but the principles underlying the Heppner decision are well established and universal. Under Heppner, once information leaves the confidential relationship between attorney and client and is shared with an AI platform, the legal protections that normally shield those communications disappear.

Practice Pointer

AI has its place for all manner of research, technical or otherwise. Understand the terms of service of the platforms you use. If the platform is “open,” chances are that anything you post is discoverable in litigation, or perhaps even without litigation. “Enterprise” platforms offer more security, but they may not be immune from all discovery. If the topic you want to learn about involves legal issues or potential litigation, consider consulting an attorney before sharing your private information with an AI platform, or at a minimum draft your AI prompts with the expectation that you might one day see them as your adversary’s Exhibit A.


The New Frontier of Pro Se Litigation:
Managing AI-Generated Legal Filings in Business Disputes

By: James Walker, Partner

The increasing adoption of artificial intelligence (AI) tools by pro se litigants—individuals who represent themselves without legal counsel—has introduced a complex dynamic within the legal landscape, particularly impacting businesses involved in litigation. While AI-generated legal documents often present with a polished and professional appearance, they frequently harbor inaccuracies, such as fabricated citations or distorted legal arguments. This trend poses significant challenges to the integrity of judicial proceedings and complicates defense strategies for businesses.

AI’s Role in Shaping Pro Se Legal Filings

Recent advancements in AI-driven language models have made sophisticated legal drafting tools accessible to non-experts. These technologies can produce pleadings and briefs that closely resemble those prepared by trained attorneys. However, it is important to recognize that these models operate without true legal understanding. Instead, they generate content based on statistical patterns in data, which can lead to “hallucinations”—confident assertions of false or misleading information. In practice, this may result in:

  • Citations to cases that do not exist
  • Misinterpretations or distortions of established legal precedents
  • Fabricated statutes or legal principles
  • Incorrectly attributed quotations from judicial authorities

Such inaccuracies undermine the adversarial process, which depends on arguments grounded in authentic and verifiable legal authority.

Judicial Responses and Implications

Courts are increasingly vigilant in addressing AI-generated filings from pro se litigants. Judicial officers and court staff routinely verify the authenticity of cited authorities, and when fabrications are uncovered, they may impose sanctions, damage the litigant’s credibility, or dismiss cases outright. Some jurisdictions have begun requiring explicit disclosure when AI tools assist in preparing legal documents, alongside implementing procedures to detect and address fabricated citations. These developments reflect a broader judicial effort to uphold procedural integrity amid evolving technological challenges.

Challenges for Businesses Facing AI-Generated Pro Se Filings

Businesses confronted with litigation initiated by pro se parties using AI-generated documents face several distinct obstacles:

  • Unanticipated Legal Arguments: AI-generated filings may advance novel but legally unsound theories, necessitating additional resources to investigate and counter these claims.
  • Increased Litigation Costs: The need to scrutinize and respond to fabricated or misleading content can extend litigation timelines and escalate expenses.
  • Procedural Uncertainty: The polished nature of AI-generated filings may delay judicial dismissal, prolonging disputes and uncertainty.
  • Reputational Risk: Businesses must carefully manage public perception when faced with seemingly sophisticated but legally baseless allegations.

Strategic Considerations for Managing AI-Influenced Pro Se Litigation

To effectively navigate litigation involving AI-assisted pro se filings, businesses and their legal teams should consider the following:

  • Establish Robust Verification Processes: Implement systematic review protocols to authenticate legal filings, engaging counsel to identify fabricated or misrepresented authorities early in the process.
  • Educate Internal Stakeholders: Ensure that legal, compliance, and relevant business units understand the unique challenges posed by AI-generated filings to facilitate coordinated and informed responses.
  • Utilize Judicial Mechanisms: Work closely with counsel to prompt courts to scrutinize questionable filings. Courts increasingly welcome motions that expose fabricated citations or procedural abuses, which can expedite case resolution.
  • Maintain Professionalism and Detailed Records: Interact with pro se litigants respectfully and clearly, avoiding unnecessary escalation. Meticulously document all communications and filings to build a strong evidentiary record.
  • Monitor Technological and Legal Developments: Stay informed about advancements in AI drafting tools and evolving judicial standards to anticipate changes and adapt litigation strategies accordingly.

The convergence of AI technology and pro se litigation presents a paradox: while AI democratizes access to legal drafting capabilities, its current limitations risk compromising procedural fairness and imposing new burdens on businesses. Successfully navigating this landscape demands vigilance, adaptability, and a balanced approach that embraces innovation without sacrificing the integrity of legal processes. Businesses and their legal counsel that recognize these challenges and respond with strategic foresight will be better positioned to protect their interests and uphold the principles of justice across this increasingly complex legal environment.


Managing Liability Through Defend, Indemnify, and Hold Harmless Agreements

By: James Walker, Partner

Contracts in architecture and engineering projects often include provisions requiring one party to defend, indemnify, and hold harmless the other. While these terms are common, their implications can be complex and carry significant risks. Understanding what these clauses mean, how they differ from one another, and the potential consequences of agreeing to them is essential for managing liability and protecting professional and commercial interests.

What Does Indemnification Mean?

At its core, indemnification is a promise by one party to reimburse another for losses or damages that arise from defined circumstances. For example, if your client is sued because of something related to your work, an indemnity clause might require you to reimburse their costs, including legal fees and settlements. Your professional liability policy will cover the indemnity – reimbursement – you owe to your client only if the trigger for indemnity is your professional negligence. If your contract requires you to indemnify the owner for costs the owner incurs for something “related” to your services but not necessarily negligence in the performance of those services, there is a good chance you will not be covered.

Indemnification can be either “Express” (clearly stated in the contract) or “Implied” (recognized by law in certain situations, such as when one party pays damages caused by another’s fault, even if not explicitly agreed upon). It is important to note that courts generally uphold the specific language in contracts. If indemnity clauses are vague or ambiguous, they are often interpreted against the party seeking protection. For architects and engineers, this means that unclear indemnity language can lead to unexpected liabilities.

The Duty to Defend: What it Entails

Separate from indemnification is the duty to defend, which obligates one party to pay for or provide the legal defense of the other when a claim arises. Unlike indemnification, which applies after a loss is established, the duty to defend kicks in as soon as a claim is made—even if it is ultimately unfounded.

This duty can result in substantial legal expenses early in a dispute, which can be financially and operationally burdensome. Some jurisdictions require the duty to defend to be explicitly stated or requested, while others impose it automatically.

For practitioners, understanding whether a contract includes a duty to defend—and the scope of that duty—is critical. Accepting this obligation can significantly increase your exposure to legal costs. Also, it is important to note that your professional liability insurer will not cover your costs to defend your client in a claim or lawsuit, so you are paying for your client’s defense out of your own pocket.

Hold Harmless Clauses: What You Should Know

The phrase “hold harmless” is often used interchangeably with indemnification, but its meaning can vary depending on the jurisdiction. In many cases, it means protecting the other party from liability or loss, i.e., identical to indemnity.

However, in some jurisdictions, “hold harmless” extends to potential liabilities that have not yet materialized, whereas indemnification typically covers only actual losses. This distinction affects how much risk you may be assuming: where “hold harmless” reaches liabilities that have not yet materialized, you may also risk being without coverage under your professional liability insurance, though that analysis is complicated. It is crucial to clarify the scope of any hold harmless provision in your contracts to understand whether you are agreeing to cover only known losses or also future, potential claims.

Practical Recommendations for Architects and Engineers

To manage risks associated with defend, indemnify, and hold harmless provisions, architects and engineers should:

  • Understand jurisdictional differences: legal interpretations vary widely, so knowing local law is essential. This is particularly important for those who practice in many jurisdictions or take on a project in a new one.
  • Draft with clarity, avoiding ambiguous language, to reduce disputes and unintended liabilities.
  • Eliminate duty to defend clauses if possible. Instead, make sure the indemnity you owe includes indemnity for reasonable legal fees incurred by the owner. Your professional liability policy does not cover the cost of defending the owner directly, but it does cover reimbursement of the cost of defense incurred by the owner.
  • Review exclusive remedy clauses to ensure they reflect the intended risk allocation without limiting critical protections.
  • If uncertain, consult with your broker or an attorney or both about the hidden risks of these or other proposed terms in a contract.

By developing a thorough understanding of these contractual provisions and their practical effects, architects and engineers can negotiate with greater confidence, allocate risks more appropriately, and protect their financial interests more effectively.


To learn more about our A&E Team, click here.