Defining the Duty: AI Use and Informed Consent in Legal Practice
The Illusion of Intelligence: Legal Risk in the Age of AI
Executive Summary
Artificial intelligence has rapidly become a fixture in legal technology, powering everything from drafting assistants to research tools. But its rise poses a foundational question: Can a machine that does not understand truth, responsibility, or harm be trusted with legal reasoning?
We conclude with a specific call to action: Bar associations and ethics boards must establish a duty of informed consent when AI is used in client work. This includes disclosure of AI use, of any sharing of client data, and of the risks involved. In all cases, lawyers must closely supervise and review AI-generated work to ensure accuracy and appropriateness—because responsibility cannot be outsourced.
“When the assistant starts making things up, the problem isn’t that it’s wrong — it’s that it doesn’t know what wrong is.”
Introduction: The Temptation of the Substitute
Large language models (LLMs) are changing the way legal work is done. They can summarize long documents, suggest arguments, and even draft contracts or pleadings in seconds. For overwhelmed lawyers, they offer relief. For technologists, they promise transformation.
But beneath the efficiency lies a serious risk: substitution without understanding and without informed consent.
The legal profession is increasingly relying on systems that mimic intelligence without possessing it. The danger is not just in overuse—it’s in misplaced trust. A machine that cannot understand the difference between a statute and a story, a precedent and a prediction, cannot be trusted to reason through legal matters. Yet that is exactly the illusion that powerful LLMs create.
This paper explores why that illusion is dangerous, and why the profession must act now to draw clear ethical boundaries before the line between tool and surrogate disappears.
The Philosophical Perspective: Can AI Know Right from Wrong?
Law is not just rules and procedures; it is a human endeavor grounded in values, ethics, and judgment. Lawyers are not mere technicians—they are moral agents who balance competing interests, interpret nuance, and take responsibility for their decisions.
Artificial intelligence cannot do that.
Unlike human beings, AI does not possess:
Awareness — It does not know what it is doing.
Intention — It does not aim to serve justice.
Accountability — It does not bear the consequences of its actions.
Instead, AI systems operate through mathematical prediction. When asked to draft a motion or respond to a query, an LLM simply calculates which words are most likely to follow based on patterns in its training data. It does not know what those words mean. It cannot evaluate their truth or their fairness.
The result is an uncanny illusion of competence. But it is just that: an illusion.
This is why hallucinations occur. When an AI tool invents a case citation, it is not making a mistake in the way a human would. It is doing exactly what it was trained to do: produce language that sounds plausible.
“If it has seen similar phrases in similar contexts, it will echo them — without knowing they are wrong.”
That lack of grounding is a fundamental limitation, not a bug to be patched.
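To make that mechanism concrete, consider the deliberately crude sketch below: a two-word (bigram) chain, nothing like a production language model in scale or architecture, trained on three invented sentences whose case names and reporter numbers are placeholders. It completes patterns and does nothing else.

    import random
    from collections import defaultdict

    # Three invented training sentences; the case names and reporter
    # numbers are placeholders, not real citations.
    SAMPLES = [
        "the court held in smith v jones 100 US 200 that notice is required",
        "the court held in doe v roe 300 US 400 that the statute controls",
        "the court held in smith v roe 100 US 400 that consent is required",
    ]

    def build_bigram_table(sentences):
        # For each word, record every word observed to follow it.
        table = defaultdict(list)
        for sentence in sentences:
            words = sentence.split()
            for current, following in zip(words, words[1:]):
                table[current].append(following)
        return table

    def generate(table, start="the", length=13, seed=2):
        # Repeatedly emit some word that has followed the previous word
        # in the training text. Nothing here checks whether the resulting
        # "citation" refers to a real case, or whether the stated holding
        # belongs to it.
        random.seed(seed)
        output = [start]
        for _ in range(length):
            followers = table.get(output[-1])
            if not followers:
                break
            output.append(random.choice(followers))
        return " ".join(output)

    # Fluent, citation-shaped output that may freely recombine parties,
    # volumes, and holdings that never appeared together.
    print(generate(build_bigram_table(SAMPLES)))

The output reads smoothly, yet the generator may pair a case name with a volume number and a holding that never co-occurred, and it has no concept that doing so is wrong. Scaled up by billions of parameters, the same principle produces prose that is far more convincing, but no better grounded.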
In legal practice, where the cost of error is borne by clients, courts, and the public, this absence of understanding is not acceptable. Responsibility requires more than fluency. It requires judgment—a distinctly human faculty rooted in experience, empathy, and accountability.
To replace that judgment with probabilistic output is not just risky. It is a category error.
The Societal Perspective: Trust, Institutions, and Responsibility
The justice system depends on public trust. Courts, law firms, and legal professionals operate not merely by force of law, but by a shared belief that the system is principled, responsible, and humane. That belief is fragile.
Artificial intelligence, when misused or misunderstood, poses a threat not only to accuracy but to legitimacy.
When lawyers submit AI-generated briefs with fictitious citations, it doesn’t just embarrass a single practitioner—it casts doubt on the competence of the profession. When contract generators or will-writing tools fail silently, the public may never know what rights they have lost or which obligations went unenforced.
A. Trust Requires a Responsible Actor
The public expects that a person is ultimately responsible for legal advice and documentation. AI lacks standing, status, and soul. It cannot swear an oath, hold a license, or be disbarred. It cannot be questioned under oath or found liable in court.
The legal system was not designed to accommodate machines that can do the work of a lawyer but carry none of the responsibility. If that division is not clarified soon—and enforced—the credibility of legal institutions may erode from within.
B. Lawyers Cannot Abdicate Responsibility to AI
In today’s legal marketplace, many lawyers are relying on AI-powered document automation to generate complex legal instruments: wills, trusts, healthcare directives, real estate filings, commercial contracts, and more. These tools are fast, inexpensive, and convincing. But when lawyers fail to closely supervise and review the results, they are placing clients’ futures in the hands of systems that do not understand law and cannot be held accountable.
Clients, unaware of the risks, rely on their lawyer’s assurance that the work is sound. But that assurance is increasingly being given without basis—because the lawyer has not fully reviewed the output, has not tested it, and may not even understand the technology that produced it. Without it, there can be no informed consent.
“The lawyer tells the client it’s done right. The client believes it. Years later, the document fails.”
This is not hypothetical. These time bombs are already being embedded in legal records across jurisdictions—in language that no one will read until it is too late to fix.
C. The Role of the Profession
If lawyers do not lead in setting ethical standards for AI use, someone else will: courts, regulators, malpractice insurers, or public scandal.
The legal profession must reaffirm its role as a human-centered institution. Not in opposition to technology, but in recognition of what only human lawyers can do:
Exercise independent judgment
Take moral and legal responsibility
Supervise and explain what machines cannot
Public confidence in the law depends on knowing that real people are still responsible for justice.
The Legal Practitioner’s Perspective: Risk, Ethics, and Informed Consent
AI tools offer significant benefits to legal practitioners—faster drafting, document summarization, research assistance. But they also bring substantial risks that cannot be outsourced or ignored.
A. The Hallucination Problem
AI-generated content can be persuasive and articulate, yet completely false. Lawyers using AI to draft documents must recognize that hallucinated citations, inaccurate dates, or inconsistent logic are not rare edge cases—they are built-in limitations of current models.
AI does not know when it is wrong. It simply predicts what “sounds right” based on patterns in data. That gap between surface fluency and substantive accuracy presents a core risk for legal professionals.
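That gap can be illustrated with a minimal sketch. The assumptions are labeled in the code: KNOWN_CASES stands in for an authoritative citator or reporter database, and the second citation, Doe v. Acme Corp., is fabricated for this example. A surface check for citation-shaped text accepts both strings; only verification against a real source distinguishes them.

    import re

    # Surface pattern for a U.S. Reports citation (illustrative, not exhaustive).
    CITATION_SHAPE = re.compile(
        r"^[A-Z][\w. ]+ v\. [A-Z][\w. ]+, \d+ U\.S\. \d+ \(\d{4}\)$"
    )

    # Stand-in for an authoritative citator or reporter database (assumption).
    KNOWN_CASES = {
        "Brown v. Board of Education, 347 U.S. 483 (1954)",
    }

    def looks_plausible(citation: str) -> bool:
        # Surface check: is the string citation-shaped?
        return bool(CITATION_SHAPE.match(citation))

    def actually_exists(citation: str) -> bool:
        # Substantive check: does an authoritative source confirm it?
        return citation in KNOWN_CASES

    for cite in (
        "Brown v. Board of Education, 347 U.S. 483 (1954)",  # real citation
        "Doe v. Acme Corp., 512 U.S. 901 (1994)",            # fabricated for this example
    ):
        print(f"{cite} | plausible: {looks_plausible(cite)} | verified: {actually_exists(cite)}")

That verification step is precisely the work the lawyer cannot skip: the drafting tool cannot be counted on to flag its own fabrications, so the responsible attorney must confirm that every cited authority exists and says what the draft claims it says.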
B. The Risk of Unsupervised Client-Facing Drafting
Consumer-facing legal AI tools now draft wills, trusts, health care directives, prenuptial agreements, and more. These are documents with profound, long-term consequences that often remain unread or unchallenged until someone has died, become incapacitated, or left the jurisdiction.
AI-generated documents in this context may:
Include legally invalid or contradictory provisions
Use ambiguous language that fails under stress or scrutiny
Misapply or omit jurisdiction-specific requirements
These failures may not emerge until it is far too late to remedy them. As one practitioner observed decades ago about consumer credit agreements: “Some documents only work because no one ever reads them.” In the context of personal legal instruments, this is not just bad drafting—it’s a betrayal of trust.
These time bombs in legal drafting, errors that will not surface until a crisis occurs, stand in contrast to the more visible dangers of public court filings, which can be reviewed, challenged, or corrected quickly. In sensitive documents like wills or directives, a flaw may remain hidden until the damage is irreversible.
C. Recommendation: Establish a Duty of Informed Consent
To protect clients, the public, and the integrity of the profession, we recommend that bar associations adopt a duty of informed consent when AI is used in any capacity related to client legal work. This duty should require lawyers to:
Inform clients when AI is used in document creation, analysis, or drafting
Disclose any sharing of client data with AI systems, particularly cloud-based tools
Explain the risks, including hallucinations, data leakage, and non-reviewable reasoning
Closely supervise and review all AI-generated work before it is relied upon or shared
“You can delegate tasks. You cannot delegate responsibility.”
Lawyers must remain the accountable party. No AI tool should ever be treated as a surrogate for legal judgment.
A Call to Action for Bars and Ethics Boards
Bar associations and regulators must act now to define ethical use of AI in legal work.
We recommend:
A duty of informed consent.
Lawyers must disclose to clients when AI is used in drafting or advising, identify what client data is shared, explain the risks of relying on machine-generated output, and obtain the client’s informed consent.
Lawyer-in-the-Loop (LITL) supervision.
AI-generated work must be closely reviewed and approved by the responsible attorney. Supervision must be meaningful and documented.
Prohibition on unsupervised client-facing tools.
Lawyers must not offer AI-powered drafting tools to clients without attorney oversight. Responsibility cannot be transferred.
Ethics and CLE training requirements.
Continuing education standards should include practical instruction on AI’s capabilities and limits—alongside legal ethics.
This framework ensures that AI can be used safely—without undermining the foundations of professional duty.
Conclusion: AI as Tool, Not Colleague
Artificial intelligence is not going away. It will grow more fluent, more persuasive, and more deeply embedded in the workflows of law firms, courts, and clients. But its growth must not be confused with maturity.
Legal AI is not a new lawyer. It is a new assistant.
Assistants can be brilliant, but they must be supervised. They can organize, draft, and suggest—but they cannot decide. They do not bear responsibility for what happens if something goes wrong. That burden falls on the lawyer. Always.
In an age of dazzling automation, the core value of the profession is not its speed or formatting skill, but its judgment. That cannot be outsourced. That cannot be replaced.
“When the assistant starts making things up, the problem isn’t that it’s wrong—it’s that it doesn’t know what wrong is.”
The challenge for the profession is to embrace the future without surrendering its soul. That means using tools wisely, drawing boundaries clearly, and reaffirming that the law is a human institution, founded on human responsibility.
AI will be part of the legal future. But it must remain just that: a part, not a partner.
This is the fourth in our series of White Papers discussing the intersection of Artificial Intelligence and the legal profession. See the three predecessor white papers in our Security section.