I believe the problem will be the opposite - AI will behave in a non-discriminatory manner, and this will cause widespread anger and unrest. Currently, the sentencing guidelines are almost always applied in the most lenient possible manner. Offences such as assault on an emergency worker almost never receive anything other than the minimum allowable penalty, and very often well below it. In the hands of AI, you would see people receiving the statutory penalties commensurate with their offending history and the circumstances, and you'd see a lot more custodial sentences. If you look at the demographics of offenders, that points only one way - a lot more young BAME males in prison, and the rallying cry of the social justice brigade will be that the judiciary has become racist robots. The alternative, of course, is to program in additional leniency - which then makes a mockery of the sentencing guidelines.
The same goes for certain classes of accused. Currently, women almost never receive the same penalties as men for a similar offence, especially violent offences. This is down to general judicial leniency and consideration of factors like childcare responsibilities. An AI would have to be programmed to take this into account - essentially building systemic bias into the software, rather than leaving it to judicial fiat.
As you go higher up the courts, the idea of innovation and landmark decisions would disappear, or be largely discredited. If the Master of the Rolls, or the President of the Family Division, is basically Akinator with a law degree, either there won't be any novel case law, or there will be a constant fear that any novelty is a software bug. Take Radmacher, for example. Ante-nuptial agreements had never before been given weight - so why did the decision go that way? Are "judges" genuinely making a general move toward this, or is it a software problem? You can't ask an AI for its rationale - it will just regurgitate what was fed in.