AI and Legal Fees: Why A Lawyer’s Judgment Is One Thing AI Can’t Replace


A client sent me an email recently that captures most of what I want to say about AI and legal fees. It had bolded subheadings, a numbered list, and a “Next Steps” section at the bottom. They were updating me on a file that had been open for weeks.

The email was organized beautifully, but it did not contain a single fact I could use.

I knew within the first paragraph that they had not written it. They had asked a chatbot to draft their reply to me, and they had trusted it to say something useful on their behalf. Instead, it had produced the shape of a thoughtful email without any of the thinking.

I spent thirty minutes on the phone getting to the substance, which they could have told me in a three-line note. That call, by the way, added to the bill.

What AI and legal fees look like in my practice

I run a small firm with practice areas in estates and wills, real estate, and small business law, and I use AI every day. I use it cautiously and deliberately, and only for things I understand well enough to check. It drafts my engagement letters. It summarizes hour-long intake meetings. It can fill in the routine parts of intake and documents that used to eat an entire afternoon.

I charge flat fees on most of my files, so the hours AI saves me can show up as lower prices for clients. A Globe and Mail piece this week suggested AI might cut the cost of a prenup from $5,000 to $2,500, and that tracks with what I see on simple estate and real estate files. These files used to cost more because they took longer, not because they were harder.

Using AI responsibly in a law practice is not automatic. It has taken real time, and more than a few wasted afternoons, to figure out where it helps and where it hurts. Holding the line on confidentiality and client security while still finding efficiencies is something I do not take lightly. Lawyers who treat AI as a magic autocomplete end up in Federal Court decisions.

Where AI is going to cost you money

The part that does not get written about as often is the verification burden. Every document the AI produces has to be read, line by line, by me, because I am the one who signs.

When the AI makes up a case, which happened to a Canadian lawyer in a Federal Court decision last year, the lawyer is the one whose name appears in the reasons. Not the tool. That risk does not go away when the tool gets faster. It gets bigger.

Clients sometimes show up with a contract they drafted themselves using a free AI tool and ask me to “just look it over.” I have not yet seen one that did not need to be rewritten.

More often, clients show up with something worse than a bad document. They show up with a story a chatbot told them about what the law says, and they are sure of it. I then spend an hour walking them back from conclusions that a machine produced with confidence and zero accountability. That hour is not saving anyone money, and I have to charge for it, because the argument I am having is not really with my client; it is with the model my client consulted.

The judgment problem

The AI is not the expert in the room. I am, and so is every other lawyer who has invested years of education and experience in dealing with people on a human level.

When a client asks me whether to leave the cottage to their daughter outright or in a trust, the answer depends on the daughter's marriage, the jurisdiction they live in, whether her siblings are going to contest it, and whether my client wants the cottage kept in the family after the daughter is gone. No chatbot in the world can answer that without the lawyer in the room to ask the follow-ups. The technology can create a first-draft clause once I decide what it should say. It cannot decide on its own what it should say, unless it is carefully guided, and sometimes not even then.

So yes, AI can bring my fees down on the files where it can, and the billable hour is probably on its way out for routine work. But the part that keeps getting skipped in the coverage is that the value of a lawyer was never the typing. It was the judgment, and the counsel that comes from sitting across from people and asking the right follow-ups.

The responsibility is ours

Used responsibly, AI is a very good tool. The “responsibly” is doing a lot of work in that sentence. It means knowing when to use it and when not to, knowing how to prompt for useful output, and, most of all, knowing how to catch what it gets wrong. None of that comes pre-installed. Lawyers have to invest the time and steer the tool in the right direction. Not unlike mentoring a newly called lawyer.

Clients are going to keep using AI, and a lot of them are going to use it badly. The profession is not going to stop them. What the profession can do is make sure that the lawyer on the other side of the file knows more about the tool than the client does. That is the floor we have to hold, and the only way we hold it is by educating ourselves properly before we let the technology anywhere near client work.

The email, revisited

My client phoned me again later that day. They told me they had been embarrassed that they did not know enough about their own file to answer me directly, so they had asked a chatbot to write something that sounded like they did.

That is the whole story. AI is getting good enough to produce writing that looks like understanding. It is not good enough, and may never be good enough, to produce the understanding itself.

That gap is where my value comes from.