
Tomorrow's Lawyer: Will Artificial Intelligence replace the human lawyer?


If COVID and the past 18 months have shown us anything, it is that the work of lawyers can still take place remotely, whether through online courts, the completion of legal transactions at a distance, or the provision of legal advice.

But take this one step further. All the above examples require lawyer interaction and engagement. Is that needed?

Artificial intelligence is designed to replicate human thinking and acting. It can replicate a human's ability to notice patterns in data. It can also replicate human judgment about a person, including whether a person's smile is 'real' or 'false'.
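
As a purely illustrative aside: the kind of pattern-matching described here can be sketched in a few lines of Python. The features, data and labels below are invented for the example and do not come from any real system:

```python
# Toy sketch only: invented features and hand-labelled data.
from sklearn.linear_model import LogisticRegression

# Hypothetical measurements per face: [eye_crinkle, mouth_symmetry]
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.7], [0.1, 0.6]]
y = ["real", "real", "false", "false"]  # labels supplied by humans

model = LogisticRegression().fit(X, y)

# The model matches a new face against the learned pattern; this is
# pattern recognition, not human judgement.
print(model.predict([[0.85, 0.75]]))  # expected: ['real']
```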

Can you foresee a legal system where no human interaction is needed, and so human lawyers are replaced rather than their work merely replicated? We can pay for goods at a self-service checkout. We can order a taxi via an app, and in the not-too-distant future are likely to have a driverless car pick us up. Adverts that we see on social media are targeted at us by our viewing habits and the profile that has been built up about us. None of this requires any other human interaction.

Why can't this extend to the legal sector? Can you see a future where a court case is determined by AI examining the papers and coming to a decision? Would this remove all the human bias that may exist within a jury? What are the implications for someone charged with, say, theft or a terrorism offence, attending court for their case to be considered by AI? Is this desirable?

Think ahead and put your thoughts below. There are no right or wrong answers; let's dream about what the future of law and the role of lawyers and judges may look like.


Maybe. But I don't see it catching on very quickly.

For straightforward procedural things like conveyancing, I can see it - although with things like wills and trusts, where there are long-term consequences, I could see AI making a lot of mistakes that would cost people down the line. At some point, AI would have to stop being supervised by a human, otherwise it's pointless.

For anything contentious or criminal, the issue is that eventually the endpoint is not only AI solicitors and advocates, but AI judges - and no one will want that. Simply put, I don't see how an AI will ever be able to judge merit or harm. If you plead guilty to shoplifting, but say that in mitigation you had to feed your starving children - how will an AI decide if that is mitigation or just a transparent pack of lies? Humans always seem to err on the side of caution and have the tendency to give the maximum forgiveness to any given defendant or applicant. I don't see an AI doing that - you would get people getting judgements or sentences as per statute or desert - and that would make people very very unhappy. I'd give it 1 year before you would have demands for AI judges to be scrapped because AI is racist or sexist.
Original post by Trinculo
Maybe. But I don't see it catching on very quickly.

For straightforward procedural things like conveyancing, I can see it - although with things like wills and trusts, where there are long-term consequences, I could see AI making a lot of mistakes that would cost people down the line. At some point, AI would have to stop being supervised by a human, otherwise it's pointless.

For anything contentious or criminal, the issue is that eventually the endpoint is not only AI solicitors and advocates, but AI judges - and no one will want that. Simply put, I don't see how an AI will ever be able to judge merit or harm. If you plead guilty to shoplifting, but say that in mitigation you had to feed your starving children - how will an AI decide if that is mitigation or just a transparent pack of lies? Humans always seem to err on the side of caution and have the tendency to give the maximum forgiveness to any given defendant or applicant. I don't see an AI doing that - you would get people getting judgements or sentences as per statute or desert - and that would make people very very unhappy. I'd give it 1 year before you would have demands for AI judges to be scrapped because AI is racist or sexist.


Hi Trinculo

I think you make a really valid point about the difference between the more procedural legal functions, which might be more appropriate for some form of AI oversight, and the more substantive legal issues, such as adjudging the merits of a case, which may not be.

Take your example of pleading mitigation to theft: is there any way you think an AI could look for additional information which might assist it in coming to the correct answer on whether to permit the mitigation?

As the legal system comes under increasing pressure coupled with a reduction in funding, would you ever be happy with a system that sees AI adjudging low level offences which carry limited sentences in a criminal setting, and perhaps small claims in the civil setting? You might then have the more 'meaty' cases being heard by human judges/juries?

You raise a really important point about discrimination: how do you think an AI might act in a discriminatory fashion?

We look in detail at the litigation process during your first year at Solent Law School. We look at these kinds of issues (trial by jury, composition of benches, evidential issues, sentencing, and so on) and we really encourage our students to critique the system they see.

It's fantastic to see you already doing that and thinking about things from a practical standpoint.
I believe that the problem will be the opposite - AI will behave in a manner that will be non-discriminatory, and this will cause widespread anger and unrest. Currently, the CPS sentencing guidelines are almost always applied in the most lenient possible manner. Offences such as assault on emergency worker almost never receive anything other than the minimum allowable penalty, and very often well below that. In the hands of AI, you would see people receiving the statutory penalties commensurate with their offending history and the circumstances, and you'd see a lot more custodial sentences. If you look at the demographics of offenders, that is only going to point one way - you will get a lot more young BAME males in prison, and the rallying cry of the social justice brigade will be that the judiciary has become racist robots. The alternative to this, is of course to program in additional leniency - which then makes a mockery of the sentencing guidelines.

The same would go for certain classes of accused. Currently, women almost never receive the same penalties as men for a similar offence, especially violent offences. This is due to a general judicial leniency and taking into account issues like childcare. AI would have to be programmed to take this kind of thing into account - essentially building systemic bias into the software, rather than it being judicial fiat.

As you go higher up the courts, the idea of innovation and landmark decisions would disappear, or be largely discredited. If the Master of the Rolls, or the President of the Family Division, is basically Akinator with a law degree, there either won't be any novel case law, or there will be a constant fear of it being a software bug. Take, for example, Radmacher. Ante-nuptial contracts had never before been given weight - why did this decision go this way? Are "judges" suddenly making a general move toward this, or is it a software problem? You can't ask an AI for a rationale - it will just regurgitate what has been fed in.
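
As an illustration of the earlier point about guidelines being applied mechanically: a rule-based 'sentencer' would be little more than a lookup table. A minimal sketch, with invented offence categories, base terms and uplifts rather than anything from a real guideline:

```python
# Sketch of a purely mechanical sentencer: offence plus prior history map
# straight to a penalty, with no room for discretionary leniency.
# Offence names, base terms and uplifts are invented for illustration.
GUIDELINE = {
    "assault_emergency_worker": (6, 3),  # (base months, months per prior)
    "knife_possession": (4, 6),
}

def sentence_months(offence: str, priors: int) -> int:
    base, uplift_per_prior = GUIDELINE[offence]
    return base + uplift_per_prior * priors  # same inputs, same output, every time

print(sentence_months("knife_possession", 3))  # 22 months, no judicial discretion
```
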
Original post by Trinculo
I believe that the problem will be the opposite - AI will behave in a manner that will be non-discriminatory, and this will cause widespread anger and unrest. Currently, the CPS sentencing guidelines are almost always applied in the most lenient possible manner. Offences such as assault on emergency worker almost never receive anything other than the minimum allowable penalty, and very often well below that. In the hands of AI, you would see people receiving the statutory penalties commensurate with their offending history and the circumstances, and you'd see a lot more custodial sentences. If you look at the demographics of offenders, that is only going to point one way - you will get a lot more young BAME males in prison, and the rallying cry of the social justice brigade will be that the judiciary has become racist robots. The alternative to this, is of course to program in additional leniency - which then makes a mockery of the sentencing guidelines.

The same would go for certain classes of accused. Currently, women almost never receive the same penalties as men for a similar offence, especially violent offences. This is due to a general judicial leniency and taking into account issues like childcare. AI would have to be programmed to take this kind of thing into account - essentially building systemic bias into the software, rather than it being judicial fiat.

As you go higher up the courts, the idea of innovation and landmark decisions would disappear, or be largely discredited. If the Master of the Rolls, or the President of the Family Division, is basically Akinator with a law degree, there either won't be any novel case law, or there will be a constant fear of it being a software bug. Take, for example, Radmacher. Ante-nuptial contracts had never before been given weight - why did this decision go this way? Are "judges" suddenly making a general move toward this, or is it a software problem? You can't ask an AI for a rationale - it will just regurgitate what has been fed in.


Hi Trinculo

Thank you for the really interesting response there.

What you've done as well is critique not just the notion of AI as judge, but people as judges, using a jurisprudential focus. What you've highlighted there is an almost naturalist and realist type of critique/observation: the notion of real rules and paper rules conflicting, and of something above purely positivist statements of law.

I guess as well a question could be: how does the AI work out the interpretation of a statutory provision? Or keep the law updated (you highlighted Radmacher v Granatino)? Courts are often tasked with attributing meaning to vague concepts like 'dishonesty' and 'serious harm'. If there is no existing precedent, do you think an AI could ever be aware enough to create a meaning? Or update a meaning? Would you trust it to? Do you think we would be back to your idea of some sort of judicial programming body in charge of updating algorithms and so on?

You make an interesting point about appeals as well. Do you think an AI can err in law?

You spoke extensively about sentencing. If you take the custody threshold, for example, how do you think an AI might work out whether it has been passed? The same for the community order threshold?

What do you think of the current state of sentencing?
So I'm not an expert in the law by a long shot at all.

However, if you gave me a choice between an AI jury or a human jury, I would actually choose the AI jury. Human beings are biased, emotional and easy to manipulate. I've watched so many documentaries about people who were innocent but ended up in prison because the prosecutor had planted an idea. It also works the opposite way - there was a famous case a few years ago of a guy named Richard Scrushy. It was clear that he had committed what he was accused of (fraud), but his lawyers got him out: he pretended to be a Christian, went to "black churches", and during the trial his lawyers made it all about the "American Dream". The jury didn't understand the prosecutor's argument as it was full of financial jargon. AI presumably would look at the facts and come to a conclusion.

I think where I would definitely have a human is the judge, as they can look at the background of the person who committed the crime, why they committed the crime, etc.
have you ever worked in a law firm? most legal transactions and legal advice are done remotely - even pre-Covid - through emails and phone calls. rarely do clients have to actually come in for a face-to-face and rarely do you have to go to court, so not sure how Covid has proven anything in that regard? :curious:

legal pursuits are often emotional for clients, particularly in family law, immigration, bankruptcy, human rights law and criminal law for example, and part of lawyering is comforting and reassuring your client to some extent. how would AI be able to bring that personal touch to them?

not sure how AI could physically handle and assess paper documents like deeds, photos, wills, cheques, paper mail, etc? surely you would need a human hand and eye for that?

there needs to be a certain understanding of human nature to draw on when you're playing judge or jury. how would AI be able to understand things like the reasonable person standard, or determine the best interests of a child, when it's not set in stone and AI has no human experience to draw on? these would be my questions, but tbh i'm also very uneducated when it comes to the workings of AI :redface:
Original post by Solent University- TSR Talks
Hi Trinculo

Thank you for the really interesting response there.

What you've done as well is critique not just the notion of AI as judge, but people as judges, using a jurisprudential focus. What you've highlighted there is an almost naturalist and realist type of critique/observation: the notion of real rules and paper rules conflicting, and of something above purely positivist statements of law.

I guess as well a question could be: how does the AI work out the interpretation of a statutory provision? Or keep the law updated (you highlighted Radmacher v Granatino)? Courts are often tasked with attributing meaning to vague concepts like 'dishonesty' and 'serious harm'. If there is no existing precedent, do you think an AI could ever be aware enough to create a meaning? Or update a meaning? Would you trust it to? Do you think we would be back to your idea of some sort of judicial programming body in charge of updating algorithms and so on?

You make an interesting point about appeals as well. Do you think an AI can err in law?

You spoke extensively about sentencing. If you take the custody threshold, for example, how do you think an AI might work out whether it has been passed? The same for the community order threshold?

What do you think of the current state of sentencing?

Let's take an example of the way an AI would function - how would an AI have decided Ivey v Genting? It's not clear to me how you would program in a common law imperative. How would an AI ever know that it is time to change established case law? This is the essential issue - AIs can't be judges. If they can't be judges, where will the judges come from after two generations of advocates have been replaced by robots?

In terms of sentencing (unrelated to AI practitioners) - it is clearly an entirely social/political issue rather than a legal one. It is very clear what the sentencing guidelines *should* be - and yet they are almost never exercised other than in high-profile cases. Popular examples include repeat habitual knife carriers. The likelihood of receiving a custodial sentence is exceptionally low, regardless of how many times you have been found in possession. An under-16 will always get a referral order first, and then regardless of what they do next, unless it is an absolutely top-level offence, they will get some kind of supervision order. People commit burglaries and thefts and receive one day in prison, time served, or a fine that they will never pay. This cannot be merely down to how magistrates and judges feel on the day - this must be driven by the dominant culture and politics - and that culture is one of giving offenders every single forgiveness; and then some on top of that.
Another issue would be - would all the AI be the same software? If so, how would the appeals system work? What would the point be if your first-instance judge is running the same software as your CA judge? Surely they would come to the same decision? If they don't, then surely that would make the lower court software obsolete?

Can you imagine the mess when things go off to Europe? A Belgian, Ukrainian, Latvian, Greek and Portuguese robot have agreed that our English robot decided the case incorrectly. Appeal allowed. Plot twist - they're all running the same software.
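
The same-software worry is, at bottom, a point about determinism. A minimal sketch, with an invented stand-in for the judging software, shows why an 'appeal' on identical software and identical papers cannot reach a different result:

```python
# Invented stand-in for a deterministic 'judge': same papers in, same result out.
def decide(case_papers: str) -> str:
    return "appeal dismissed" if "weak evidence" in case_papers else "claim upheld"

papers = "contract claim, weak evidence on loss"
# A first instance and an 'appeal' running the same software on the same
# papers cannot disagree with each other.
assert decide(papers) == decide(papers)
```
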
Original post by kwame88
So I'm not an expert in the law by a long shot at all.

However, if you gave me a choice between an AI jury or a human jury, I would actually choose the AI jury. Human beings are biased, emotional and easy to manipulate. I've watched so many documentaries about people who were innocent but ended up in prison because the prosecutor had planted an idea. It also works the opposite way - there was a famous case a few years ago of a guy named Richard Scrushy. It was clear that he had committed what he was accused of (fraud), but his lawyers got him out: he pretended to be a Christian, went to "black churches", and during the trial his lawyers made it all about the "American Dream". The jury didn't understand the prosecutor's argument as it was full of financial jargon. AI presumably would look at the facts and come to a conclusion.

I think where I would definitely have a human is the judge, as they can look at the background of the person who committed the crime, why they committed the crime, etc.


I wouldn't be so sure. AIs don't simply look at facts, and particularly in legal cases, facts may require interpretation or contextual assessment. They instead abide by algorithms (sometimes yielded through black box approaches) to enable them to make decisions. And they can be very susceptible to manipulation themselves, if you can figure out what makes them tick.
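
A toy version of that manipulation point, assuming (as the post says) you can figure out what makes the model tick. The two-feature linear 'verdict' model and all its numbers are invented for the illustration:

```python
import numpy as np

# Toy linear 'verdict' model: guilty if w @ x > 0.
# Weights and case features are invented for the illustration.
w = np.array([1.0, -2.0])   # the model's learned weights
x = np.array([0.5, 0.1])    # original case features
print(w @ x)                # 0.3 -> on the 'guilty' side of the boundary

# Someone who knows w can nudge the input just past the boundary:
x_nudged = x - 0.2 * w / np.linalg.norm(w)
print(w @ x_nudged)         # about -0.15 -> the verdict flips
```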
Original post by kwame88
So I'm not an expert in the law by a long shot at all.

However, if you gave me a choice between an AI jury or a human jury, I would actually choose the AI jury. Human beings are biased, emotional and easy to manipulate. I've watched so many documentaries about people who were innocent but ended up in prison because the prosecutor had planted an idea. It also works the opposite way - there was a famous case a few years ago of a guy named Richard Scrushy. It was clear that he had committed what he was accused of (fraud), but his lawyers got him out: he pretended to be a Christian, went to "black churches", and during the trial his lawyers made it all about the "American Dream". The jury didn't understand the prosecutor's argument as it was full of financial jargon. AI presumably would look at the facts and come to a conclusion.

I think where I would definitely have a human is the judge, as they can look at the background of the person who committed the crime, why they committed the crime, etc.

This is really interesting, @kwame88. Here's a question, though... Is AI only as good as the information that is fed into it? With a jury trial, yes, you would certainly get individual biases; that is the nature of it. But would an AI jury potentially share the bias of the creator of the AI? A human jury can check and balance any biases within them as part of their decision-making process. How could an AI jury do this?
Original post by Solent University- TSR Talks
If COVID and the past 18 months have shown us anything, it is that the work of lawyers can still take place remotely, whether through online courts, the completion of legal transactions at a distance, or the provision of legal advice.

But take this one step further. All the above examples require lawyer interaction and engagement. Is that needed?

Artificial intelligence is designed to replicate human thinking and acting. It can replicate a human's ability to notice patterns in data. It can also replicate human judgment about a person, including whether a person's smile is 'real' or 'false'.

Can you foresee a legal system where no human interaction is needed, and so human lawyers are replaced rather than their work merely replicated? We can pay for goods at a self-service checkout. We can order a taxi via an app, and in the not-too-distant future are likely to have a driverless car pick us up. Adverts that we see on social media are targeted at us by our viewing habits and the profile that has been built up about us. None of this requires any other human interaction.

Why can't this extend to the legal sector? Can you see a future where a court case is determined by AI examining the papers and coming to a decision? Would this remove all the human bias that may exist within a jury? What are the implications for someone charged with, say, theft or a terrorism offence, attending court for their case to be considered by AI? Is this desirable?

Think ahead and put your thoughts below. There are no right or wrong answers; let's dream about what the future of law and the role of lawyers and judges may look like.





AI is built on hard facts, systemising processes that can be put into formulas; human judgement is based on experience and logical deduction, a lot of which it would be criminal and unethical to leave to a robot. I've unfortunately had to take someone to court, and the discussions weren't over hard facts and numbers; it was about the most logical reasoning based on what had been presented.

Perhaps the more mundane work, like putting together paperwork for a will or buying a house, can be replicated, but law involving disputes, not so much.
Original post by Solent University- TSR Talks
This is really interesting, @kwame88. Here's a question, though... Is AI only as good as the information that is fed into it? With a jury trial, yes, you would certainly get individual biases; that is the nature of it. But would an AI jury potentially share the bias of the creator of the AI? A human jury can check and balance any biases within them as part of their decision-making process. How could an AI jury do this?

It could even develop its own biases. These systems learn experientially more and more and are very driven by probabilistic reasoning. Biases ultimately have the same origin in nature (mental shortcuts that could yield an adaptive advantage), and they can vary in terms of how reflective of reality they are. There are ways to embed statistical controls in a system to limit this, but it's still susceptible to error, much as with humans. Imo we're still not at the point where they're better overall decision-makers than humans, even if they can be better in certain limited areas.
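
One of the simplest of those statistical controls would be a group-parity check on the system's outputs. A minimal sketch with made-up outcomes:

```python
from collections import Counter

# Invented outcomes from a hypothetical model: (group, custodial sentence?)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]

totals, custodial = Counter(), Counter()
for group, custody in outcomes:
    totals[group] += 1
    custodial[group] += custody  # True counts as 1

for group in sorted(totals):
    rate = custodial[group] / totals[group]
    print(group, round(rate, 2))  # 0.67 vs 0.33: flag if the gap exceeds a tolerance
```
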
AI by machines is intrinsically programmed with these things in mind - structure, logic, objectivity, problem-solving - while humans rely heavily on sense experience. Can these machines really have experiences in the same way? Or is the one objective in the way they are programmed merely to retain information that is not yet stored in their database? I would argue that AI would probably overlook the subjective nature of humans that is so necessary in the field of law. It could be limiting. Every case is different, and these machines would have to be programmed very well in order to make a well-discerned and fair judgement. There's also the issue of human error vs mechanical error.

But then again, maybe subjectivity is where human lawyers fall short? Maybe we are too emotional and not rational enough? Personally, I think human affairs need to be dealt with by humans. It's an interesting topic. I don't know enough about it. This is just where my thinking has led me so far.
Original post by al_fl
AI is built on hard facts, systemising processes that can be put into formulas; human judgement is based on experience and logical deduction, a lot of which it would be criminal and unethical to leave to a robot. I've unfortunately had to take someone to court, and the discussions weren't over hard facts and numbers; it was about the most logical reasoning based on what had been presented.

Perhaps the more mundane work, like putting together paperwork for a will or buying a house, can be replicated, but law involving disputes, not so much.

This is intriguing. I would concur with your final point around more mundane work, and actually we are seeing document automation in a number of areas.

When it comes to decision-making, are you suggesting that the outcome would be reached on a more factual basis, rather than a possibly emotive, but probable, 'best argument on the day' approach? If so, would this be preferable? In all areas? If someone was accused of a serious crime (perhaps an offence against the person), would an 'AI jury' allow justice to be done and be seen to be done?
Original post by studygirl388
AI by machines is intrinsically programmed with these things in mind - structure, logic, objectivity, problem-solving - while humans rely heavily on sense experience. Can these machines really have experiences in the same way? Or is the one objective in the way they are programmed merely to retain information that is not yet stored in their database? I would argue that AI would probably overlook the subjective nature of humans that is so necessary in the field of law. It could be limiting. Every case is different, and these machines would have to be programmed very well in order to make a well-discerned and fair judgement. There's also the issue of human error vs mechanical error.

But then again, maybe subjectivity is where human lawyers fall short? Maybe we are too emotional and not rational enough? Personally, I think human affairs need to be dealt with by humans. It's an interesting topic. I don't know enough about it. This is just where my thinking has led me so far.

Very interesting observations - particularly your comments around subjectivity and the need for human affairs to be dealt with by humans. Would an AI system enhance access to justice, though? We have a court system that in many ways is creaking under the weight and volume of cases. Would speeding up the hearing of cases through AI or other online dispute platforms, which in turn supports an individual's access to justice, be preferable to a growing list of cases needing to be heard by a human judge?
With any AI-based system there are two areas of uncertainty. One is what human prejudices are assimilated into the functioning of the AI system. The other is the way that data for any given case is uploaded into the system - unless a system can acquire its own evidence, it is dependent on whoever inputs that data and potentially weights its values. Both are just complex versions of that old computing issue of garbage in, garbage out.
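
The weighting worry is easy to demonstrate: identical evidence, two human-chosen weightings, opposite conclusions. Everything below (the evidence scores, weights and threshold) is invented for the illustration:

```python
# Identical evidence, two human-chosen weightings, opposite conclusions.
evidence = {"alibi": 0.7, "forensics": 0.4, "witness": 0.3}
GUILT_THRESHOLD = 1.0

def guilt_score(weights: dict) -> float:
    return sum(weights[k] * evidence[k] for k in evidence)

w_neutral = {"alibi": 1.0, "forensics": 0.5, "witness": 0.2}  # score 0.96
w_skewed  = {"alibi": 0.0, "forensics": 2.0, "witness": 1.0}  # score 1.10

for w in (w_neutral, w_skewed):
    print(guilt_score(w), guilt_score(w) > GUILT_THRESHOLD)  # False, then True
```
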
But if a true artificial intelligence existed, what would be its motivation, its moral compass? Would a true artificial intelligence choose law as a career?
Original post by TCA2b
I wouldn't be so sure. AIs don't simply look at facts, and particularly in legal cases, facts may require interpretation or contextual assessment. They instead abide by algorithms (sometimes yielded through black box approaches) to enable them to make decisions. And they can be very susceptible to manipulation themselves, if you can figure out what makes them tick.


Good point, you've got me there lol.
