The Student Room Group
Solent University
Southampton

Tomorrow's Lawyer: Will Artificial Intelligence replace the human lawyer?


Original post by Solent University- TSR Talks
This is really interesting, @kwame88. Here's a question though... Is AI only as good as the information that is fed into it? With a jury trial, yes, you would certainly get individual biases; that is the nature of it. But would an AI jury potentially share the bias of the creator of the AI? A human jury can check and balance any biases within itself as part of its decision-making process. How could an AI jury do this?

Good point. My whole thing with juries, though, is that your life can literally be decided by people who may not have been listening to the whole trial, people who don't understand the case, and of course people with their own biases. If there is some way that AI could be made bias-free, it would be great. But you've made an excellent point. I watched a show called Coded Bias, and it did touch upon AI being potentially racist, as it didn't recognise a lot of black faces. In regards to your last point, AI could literally look at facts such as fingerprints, smartphone location, things of that nature, and then a human judge could decide from what the AI found whether you did that thing or not; they would also hand down sentencing. And again, with juries a lot of people just conform to the majority opinion. A lot of my examples are from Netflix lol, but there was a terrible case of a five-year-old who was abused by his mum and stepdad and eventually killed. All the jury members bar one wanted to convict the dad of murder. The one holdout felt that the dad should have got manslaughter. In the end he conformed, as he was tired of arguing.
Interesting answer, but not one I can totally align with. Why would an artificial intelligence necessarily identify as human? So much of human identity is closely tied to mortality, our animal origins, the cycle of life and relationships with other humans. How would a potentially isolated virtual entity relate to any of that? One might try to encourage human identification, but that might be problematic: if a human-identifying intelligence realised it was trapped in a machine, could it cope?
Original post by Dee-Emma
But if a true artificial intelligence existed, what would be its motivation, its moral compass? Would a true artificial intelligence choose law as a career?


Very droll! If morality is broadly a human construct then would the AI's morality be determined by the original creator/programmer...?
Original post by Solent University- TSR Talks
Very droll! If morality is broadly a human construct then would the AI's morality be determined by the original creator/programmer...?

I was (with some humour intended) really drilling down into the concept of 'true' artificial intelligence, rather than a machine which, although incredibly complicated, is but an extension of the programmer's own intentions and value systems.
A fair point on the development that AI still has to go through to reach the level of being a 'true' AI as you say. Do you personally believe an AI will ever attain the level that it will be regarded as a 'peer' by members of the public in the sense that a jury is a panel of your peers?
Original post by Dee-Emma
I was (with some humour intended) really drilling down into the concept of 'true' artificial intelligence, rather than a machine which, although incredibly complicated, is but an extension of the programmer's own intentions and value systems.

This is a really interesting observation.

Essentially, are you saying that for the AI to reach a level of sophistication we would be comfortable with it adjudging us, it would then have to actually choose this career path? It couldn't simply be assigned to the role, as we would assign a piece of machinery to a job now? That the level of sophistication needed means it could not be forced into doing something without issues arising in that respect?
Even relatively simple things like self-checkouts need a human worker nearby for when they start screaming “unexpected item in bagging area!”

AI lacks the understanding of context that humans have. This was shown over the last few days, as Instagram's automatic moderation tool decided that monkey-emoji comments under black footballers' posts weren't racist, presumably because a monkey emoji isn't always racist. Human moderators then removed these comments, because they can understand the context.
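The failure mode described above can be sketched in a few lines. This is a hypothetical toy rule, not Instagram's actual system: a context-free filter judges each symbol in isolation, so it returns the same verdict regardless of where the comment is posted or who it targets.

```python
# Hypothetical context-free moderation rule (not Instagram's real system).
# It looks only at individual symbols, never at the surrounding situation.

BLOCKED_SYMBOLS = set()  # the monkey emoji was evidently not on any blocklist


def naive_moderate(comment: str) -> str:
    """Flag a comment only if it contains a blocked symbol."""
    if any(ch in BLOCKED_SYMBOLS for ch in comment):
        return "removed"
    return "allowed"


# Two very different situations, identical verdict - the rule cannot
# tell an innocent zoo photo caption from targeted abuse:
print(naive_moderate("🐒 at the zoo today!"))  # allowed
print(naive_moderate("🐒"))                    # allowed (even under a footballer's post)
```

A human moderator resolves the two cases differently precisely because they bring in context the symbol-level rule never sees.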

Also, rather than removing bias, AI can automate its creators' bias. For example, Amazon had to scrap its recruitment AI because it systematically selected male candidates over female ones, simply because it had learned that the majority of Amazon's current workforce was male. Some US courts already use the COMPAS AI to estimate how likely prisoners are to reoffend, but it has been shown to overestimate black prisoners' risk of reoffending and underestimate white prisoners' risk.
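The recruitment example works mechanically like this. The sketch below is a deliberately crude stand-in for a real learning system (the data and scoring rule are invented): if the historical "hired" examples are 90% male, a naive learner that scores candidates by similarity to past hires will prefer men, automating the historical bias rather than removing it.

```python
from collections import Counter

# Invented, skewed historical data standing in for a real hiring record.
past_hires = ["male"] * 90 + ["female"] * 10


def score(candidate_attribute: str, history: list) -> float:
    """Toy 'model': score = fraction of past hires sharing this attribute.

    A real system is far more complex, but anything that learns purely
    from skewed outcomes inherits the skew in the same way.
    """
    counts = Counter(history)
    return counts[candidate_attribute] / len(history)


print(score("male", past_hires))    # 0.9 - preferred, purely from history
print(score("female", past_hires))  # 0.1 - penalised, purely from history
```

Nothing in the code mentions a preference for men; the bias enters entirely through the training data, which is why "the algorithm is neutral" is not a safe assumption.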

Until or unless AI develops a lot from its current state, I don’t see how it could replace lawyers.
Original post by Solent University- TSR Talks
This is a really interesting observation.

Essentially, are you saying that for the AI to reach a level of sophistication we would be comfortable with it adjudging us, it would then have to actually choose this career path? It couldn't simply be assigned to the role, as we would assign a piece of machinery to a job now? That the level of sophistication needed means it could not be forced into doing something without issues arising in that respect?

I don't specifically place a judgement on the level of sophistication needed to satisfy human needs in legal assessment; maybe people could be satisfied with sophisticated algorithms. Rather, I was examining what 'true' AI could be. If a system has true intelligence, wouldn't it be one capable of examining its own existence and developing its own value system? Thus indeed 'forcing' such an intelligence into a specific task would have moral consequences, and possibly unpredictable practical consequences too.
Original post by Solent University- TSR Talks
If COVID and the past 18 months have shown us anything, it is that the work of lawyers can still take place remotely, whether through online courts, the completion of legal transactions at a distance, or the provision of legal advice.

But take this one step further. All the above examples require lawyer interaction and engagement. Is that needed?

Artificial intelligence is designed to replicate human thinking and acting. It can replicate a human's ability to notice patterns in data. It can replicate a human in making judgments about a person, including whether a person's smile is 'real' or 'false'.

Can you foresee a legal system where no human interaction is needed, so that human lawyers are replaced rather than their work replicated? We can pay for goods at a self-service checkout. We can order a taxi via an app, and in the not-too-distant future we are likely to have a driverless car pick us up. Adverts that we see on social media are targeted at us based on our viewing habits and the profile that has been built up about us. All require no other human interaction.

Why can’t this extend to the legal sector? Can you see a future where a court case is determined by AI examining the papers and coming to a decision? Would this remove all the human bias that may exist within a jury? What implications are there for someone who was perhaps charged with theft or a terrorism offence to attend court for their case to be considered by AI? Is this desirable?

Think ahead and put your thoughts below; there are no right or wrong answers. Let's dream about what future law and the role of lawyers and judges may look like.


I think the strongest argument in favour of this would be true fairness in representation. If the government gives both sides the same supercomputer as each other, then the rich can't 'buy' a more favourable verdict by employing a more expensive, more capable legal team.

I would be very hesitant in using a computer to analyse probable juror (or judge!) psychology to get them to side with you. Yes, human lawyers do this quite a lot, but to train computers how to manipulate human decisions is very thin ice that we need to get away from rather than do more with. And replacing the jury or even the judge with a computer is a non-starter. Human bias and psychological tricks are the dark side of our legal system that we all kind of accept because it's the human side, but letting a manipulative, capable computer do this as a matter of routine would be terrible.
Original post by Dee-Emma
I don't specifically place a judgement on the level of sophistication needed to satisfy human needs on legal assessment, maybe people could be satisfied with sophisticated algorithms. More I was examining what 'true' AI could be. If a system has true intelligence, wouldn't that be one capable of examining its own existence and developing its own value system. Thus indeed 'forcing' such an intelligence to a specific task would have moral consequences, and possibly unpredictable practical consequences too.

Do you think that ultimately, given the current policy emphasis on high levels of efficiency at minimal cost in the justice system, this is a direction the legal system will move towards regardless of reservations?
Original post by ThomH97
I think the strongest argument in favour of this would be true fairness in representation. If the government gives both sides the same supercomputer as each other, then the rich can't 'buy' a more favourable verdict by employing a more expensive, more capable legal team.

I would be very hesitant in using a computer to analyse probable juror (or judge!) psychology to get them to side with you. Yes, human lawyers do this quite a lot, but to train computers how to manipulate human decisions is very thin ice that we need to get away from rather than do more with. And replacing the jury or even the judge with a computer is a non-starter. Human bias and psychological tricks are the dark side of our legal system that we all kind of accept because it's the human side, but letting a manipulative, capable computer do this as a matter of routine would be terrible.

Interesting point.

Jury psychology is certainly something you are taught when you begin studying advocacy on the Bar course. Many texts have been written on everything from how to dress to how to use your hands to be suggestive to jurors.

Do you think then that, quite aside from having an AI on each side, if we were to have a super AI computer that decided legal disputes, the loss of this sort of manipulation (if you want to call it that) would be a good thing?
Original post by Desideri
Even relatively simple things like self-checkouts need a human worker nearby for when they start screaming “unexpected item in bagging area!”

AI lacks the understanding of context that humans have. This was shown over the last few days, as Instagram's automatic moderation tool decided that monkey-emoji comments under black footballers' posts weren't racist, presumably because a monkey emoji isn't always racist. Human moderators then removed these comments, because they can understand the context.

Also, rather than removing bias, AI can automate its creators' bias. For example, Amazon had to scrap its recruitment AI because it systematically selected male candidates over female ones, simply because it had learned that the majority of Amazon's current workforce was male. Some US courts already use the COMPAS AI to estimate how likely prisoners are to reoffend, but it has been shown to overestimate black prisoners' risk of reoffending and underestimate white prisoners' risk.

Until or unless AI develops a lot from its current state, I don’t see how it could replace lawyers.


Interesting. At some point in the future, AI will doubtless reach a level of sophistication where it can properly be called AI. Do you think perhaps this will lead to a radical rethink of our legal system, the way we perceive it, and what we think it is there to do?

If there is a judicial AI, no lawyers, no juries, no courtrooms, is that something you see people accepting? What do you think the parameters would be for people to accept it?
Original post by Solent University- TSR Talks
Interesting point.

Jury psychology is certainly something you are taught when you begin studying advocacy on the Bar course. Many texts have been written on everything from how to dress to how to use your hands to be suggestive to jurors.

Do you think then that, quite aside from having an AI on each side, if we were to have a super AI computer that decided legal disputes, the loss of this sort of manipulation (if you want to call it that) would be a good thing?

Ideally, the case would have complete information and be decided on the facts alone. But without complete surveillance this is impossible, so we make do with humans. I don't like that someone's freedom can hinge on their posture in the dock or their lawyer's shoes etc., but having trained lawyers is the best intermediary between the tons of often obscure legal precedents and the accused's peers on the jury.

Even if you could trust that the computers wouldn't be tampered with, I think having a jury of your peers is paramount. If it takes a supercomputer with the entire database of court cases to decide something is illegal, but the average person doesn't think so (or at least believes there's enough mitigation), then I would say the defendant gets to go free. However, for an impartial computer (or one biased towards each side) to compile what it thinks are the relevant facts and prior cases, replacing human lawyers, I think that is more palatable. It would potentially be more competent than any human lawyer (with regards to research), and without the manipulation. The downside is the question of its accuracy and impartiality - whether the people who programmed it knew enough about the law, and whether their own views influenced what they told the computer to consider relevant. The computer would end up involved in all cases, rather than the small fraction handled by each individual lawyer.
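The research task described here - compiling prior cases a machine "thinks" are relevant - could be sketched, under very crude assumptions, as ranking past cases by word overlap with the new one. The case names and texts below are invented for illustration; real legal-research tools use far more sophisticated retrieval, but the same concern applies: what the system considers "relevant" depends entirely on how its programmers defined relevance.

```python
# Toy relevance ranking: score past cases by shared words with a new case.
# Case names and facts are invented; the overlap metric is the assumption
# under scrutiny - whoever chooses it shapes what the court gets shown.

def word_overlap(a: str, b: str) -> int:
    """Count distinct words the two texts share (a deliberately crude metric)."""
    return len(set(a.lower().split()) & set(b.lower().split()))


past_cases = {
    "R v Example 1": "theft of goods from a shop at night",
    "R v Example 2": "dangerous driving causing injury",
}
new_case = "alleged theft of goods from a supermarket"

ranked = sorted(past_cases,
                key=lambda name: word_overlap(past_cases[name], new_case),
                reverse=True)
print(ranked[0])  # R v Example 1 - the theft precedent has the most shared words
```

Even this toy shows the impartiality problem: swap the overlap function for a different one and a different precedent surfaces first, with no change visible to the parties in court.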
Original post by Solent University- TSR Talks
Do you think that ultimately, given the current policy emphasis on high levels of efficiency at minimal cost in the justice system, this is a direction the legal system will move towards regardless of reservations?

I think that IT systems will increasingly augment human effort in many professional roles, in the same way that simpler IT has augmented manufacturing. Whether it could truly be classified as AI is, I still think, debatable. Equally, some human presence may be retained to 'interpret' the judgments.
Original post by Dee-Emma
I think that IT systems will increasingly augment human effort in many professional roles, in the same way that simpler IT has augmented manufacturing. Whether it could truly be classified as AI is, I still think, debatable. Equally, some human presence may be retained to 'interpret' the judgments.

...yes, although the judgment would be made by looking at the data in front of it. There may not be any context to the outcome. Would providing the context be where the judgment needs interpreting, or perhaps translating?
If it liberates people from much burden, why not?
Original post by ThomH97
Ideally, the case would have complete information and be decided on the facts alone. But without complete surveillance this is impossible, so we make do with humans. I don't like that someone's freedom can hinge on their posture in the dock or their lawyer's shoes etc., but having trained lawyers is the best intermediary between the tons of often obscure legal precedents and the accused's peers on the jury.

Even if you could trust that the computers wouldn't be tampered with, I think having a jury of your peers is paramount. If it takes a supercomputer with the entire database of court cases to decide something is illegal, but the average person doesn't think so (or at least believes there's enough mitigation), then I would say the defendant gets to go free. However, for an impartial computer (or one biased towards each side) to compile what it thinks are the relevant facts and prior cases, replacing human lawyers, I think that is more palatable. It would potentially be more competent than any human lawyer (with regards to research), and without the manipulation. The downside is the question of its accuracy and impartiality - whether the people who programmed it knew enough about the law, and whether their own views influenced what they told the computer to consider relevant. The computer would end up involved in all cases, rather than the small fraction handled by each individual lawyer.

There is something about 'trust' here. We trust a judge to oversee a case and ensure a fair trial. We trust advocates to present a case in the best possible manner to ensure due process is followed. We trust a jury to reach a decision based on the facts in front of them. That said, a jury does not need to give a reason for its decision. The court does not need to be told what 'swung' an outcome in a particular direction. As that is the case, would an AI jury make a more factual decision, rather than (possibly) a decision based on the 'advocate of the day'?
Original post by Solent University- TSR Talks
There is something about 'trust' here. We trust a judge to oversee a case and ensure a fair trial. We trust advocates to present a case in the best possible manner to ensure due process is followed. We trust a jury to reach a decision based on the facts in front of them. That said, a jury does not need to give a reason for its decision. The court does not need to be told what 'swung' an outcome in a particular direction. As that is the case, would an AI jury make a more factual decision, rather than (possibly) a decision based on the 'advocate of the day'?

I would say a computer system that will apply to all cases needs to be proven much fairer and more reliable than one that relies on random individuals for separate cases. There is also the massive change of principle from being tried in front of your peers (sure, they don't need to justify their verdict, but at least you know it's a deliberated human decision rather than a blanket computed true/false based on inputs and weightings that are all debatable).

I think there are three options for me. First is the ideal, where there isn't the bias, misrepresentation, ignorance, manipulation etc., which I don't think is possible, at least with humans. Second is what we have now, with juries and lawyers, which is an acceptable compromise. By having different jurors for each case and human lawyers who can only spend a human amount of time on a case, it is more difficult to manipulate the jury with what shouldn't be relevant to their verdict. Having a computer calculate the best approach based on the demographics and subtler cues from the jury at first sight, as a human lawyer tries to do (and I wish they wouldn't, but at least it's only a human attempt from each side), goes against the court's purpose. There's always room for improvement with education, both factual and in critical thinking. Lastly, we have a computer (as juror, lawyer and/or judge) deciding people's fates, which is authoritarian, inflexible and probably impenetrable for most people to scrutinise.
We are ducking out of this conversation now - it is really interesting that it was more or less a split vote in the poll. Thank you to all of you who have contributed to this discussion. We're certain that the provision of legal services will continue to evolve in the future, and AI will play a big role in this.

If you are interested in studying Law, check out our website: https://www.solent.ac.uk/courses/undergraduate/llb-hons. We still have places for September 2021 and are open for Clearing; you can get in touch with us here: https://www.solent.ac.uk/clearing

Finally... if you are looking ahead to September 2022 and thinking about drafting your personal statement, check out this video, which gives you an insight into what law schools look for in a personal statement: https://www.youtube.com/watch?v=LlnekG2-PmQ
Short answer is no. In much the same way that computers did not replace bankers, they changed what bankers did and in many cases made them insanely rich. Smart lawyers will use AI to become more productive, and some lawyers will become insanely rich.

Just noticed this is a one-year-old thread :smile:
