
Book chapter on XAI (eXplainable AI) in the legal handbook Artificial Intelligence from C.H. Beck Verlag

The well-known legal publisher C.H. Beck asked us whether we wanted to share our expertise in the field of AI in an AI book for lawyers. No sooner said than done.

The final result (in German) can be found here for those interested in the details.

Artificial intelligence is sometimes elusive for lawyers. To be honest, that puts it too narrowly – the topic is difficult to grasp for almost everyone. In the worst case, this leads to the rejection and non-use of such technologies.


A mistake? Or a calculated risk?

The fact is that the work of the “white collar worker” has long received less attention when it comes to automation. For a long time, it was believed that “blue collar workers”, i.e. people working in manufacturing, would be displaced by automation much sooner.

Today, things look different. Accept the following hypothesis: AI can perform certain tasks of a lawyer better than the lawyer him- or herself – what happens then?

All that is needed to automate these tasks is software – no special hardware, no physical robots. The computing time and power required for the software are readily available today.
So we have to get used to the idea that software can take over parts of our work and that it will scale quickly. Not in years, but in months or weeks.

Of course, in many areas we are not there yet, and AI has various disadvantages compared to us humans that currently cannot be outweighed – even though we are often led to believe otherwise.

This is all the more true in the area of law, where facts have to be assessed and interpreted and decisions are made with various risks taken into account. AI still has its problems with such tasks.

Therefore, the motto should be: do not be afraid of AI. Let’s use the advantages and opportunities that this technology offers us today and, together with AI, create something greater. Imagine the added value of a “symbiosis” between the domain knowledge of experts and the speed of machines.

The only question is where and how exactly this can be achieved. The question is not whether it makes sense to use AI in the legal field, but rather in which areas and for which tasks AI can generate added value – and where it does not (yet) make sense.

In the end, this technological development will primarily be reflected in the expectations of clients, who will be less willing to pay for certain activities. In Canada, there have already been cases[1] in which lawyers’ bills were cut because the client was able to prove that an AI could handle the issue faster and far more cheaply. In short: simpler legal activities will in future be outsourced to machines or carried out in symbiosis with a machine.

“Anyone who, as a farmer, was the first to use machines in the age of industrialization was far superior to the competitor with flails…”

Unknown

Risks: The cost of doing nothing

A clear-cut risk for the legal profession is certainly doing nothing at all. From an outsider’s point of view, it is astonishing how little awareness of the potential (financial) risk some lawyers still (want to) work with today. Reflecting on old virtues and procedures is reasonable when it comes to implementing work steps with AI; a realistic assessment of what might work and what doesn’t is important here. But simply denying that AI is coming will, it seems, lead to painful side effects – and already in the next decade. What consequences are to be expected if one does not approach the topic, or distrusts AI?

Examples

● Others are faster

● Others are less likely to overlook things

● Others are cheaper

● Others are more deterministic (less human bias)

● Others are more careful[1] (see above: the lawyers from Canada whose bills were cut)

If you look at these points together with developments on the market (insurance companies are already examining whether it makes sense to insure the correctness of legal AIs), a parallel development emerges that lawyers should face up to. Of course, algorithmic correctness cannot be guaranteed, not even in legal cases – but the same applies to humans. As with human errors, however, any risks resulting from faulty algorithms can also be covered by insurance. Such business models can be very interesting for the end customer. The first simple implementations in combination with litigation financiers (FlightRight, MyRight et al.) clearly show the potential for entrepreneurial law and the risks for the individual lawyer.


In short, the aftermath of completely ignoring this issue could be devastating. Bans on LegalTech applications by bar associations[3] only delay this process; they will not prevent it[4].

AI is coming: in the best case hand in hand with lawyers, in the worst case taking them by surprise.

In the book chapter on XAI from Beck Verlag, you can read how things will develop from here, how XAI works, which approaches are already available today, which will come in the future, and where the limits lie.


[1] https://abovethelaw.com/2019/01/judge-penalizes-lawyers-for-not-using-artificial-intelligence/


[3] https://www.rak-hamburg.de/haben/memberservice/meldung/id/90

[4] https://www.artificiallawyer.com/2019/10/10/legaltech-on-trial-regional-german-bar-wins-ban-on-contract-platforms/
