Use of explainable artificial intelligence models in courts for dispute resolution processes (Part 1)


Over time, there has been much discussion about the use of artificial intelligence (AI) in dispute resolution matters in courts. There are many ways in which machine learning algorithms will find their way into court proceedings in the coming years, and several countries have already started using some form of AI in their dispute resolution processes. However, many of these AI/machine learning models cannot explain how a particular result was arrived at (i.e., they are not interpretable AI models). Hence, the debate surrounding the need to use explainable AI (XAI) models in dispute resolution processes where black-box models are used has intensified. Certainly, where the stakes involved are high, relying on AI results/predictive justice without knowing on what basis the algorithm arrived at a particular outcome would not be fair to the affected party.

Explainable AI can be helpful in dispute resolution by providing precise and understandable explanations for the decisions and actions of an AI system. This can help build trust in the system and provide valuable information and insights that can be used to resolve disputes. For example, if a dispute arises over a decision made by an AI system, an explainable AI system can provide information on how it reached that decision, including the data and evidence it used. This can help the parties involved understand the decision better and give them a clear basis for further discussion and negotiation. In addition, explainable AI can help identify potential biases or errors in an AI system's decision-making process, which can be critical in dispute resolution. By providing a transparent and understandable explanation for its decisions, an explainable AI system can help identify and address potential issues, which in turn helps ensure that the dispute is resolved fairly and without bias. Some of the methods currently in use for explainable AI are:

  1. Natural language generation: This involves generating human-readable text that explains the decisions and actions of an AI system in a manner that is easy to understand.
  2. Visualization: Visualization techniques, such as graphs, charts, and diagrams, can help explain the workings of an AI system in a way that is intuitive and easy to understand.
  3. Decision tree analysis: This involves using a tree-like diagram to show the decision-making process of an AI system, including the data and evidence it considers and the reasoning behind its decisions. The model is trained on a labelled dataset, where the input values are used to predict the corresponding output values, and the trained model is then used to make predictions on new data (see the illustrative sketch after this list).
  4. Feature importance analysis: This involves identifying the most important factors that led the AI system to arrive at a particular decision and then explaining how these factors influenced the outcome.
  5. Rule extraction: This involves identifying the specific rules or principles that an AI system uses to make a decision and explaining how these rules were applied.
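
To make methods 3 to 5 more concrete, below is a minimal sketch in Python using scikit-learn. The dataset, feature names and outcome labels are entirely hypothetical and invented for illustration only; they are not drawn from any real court data, and the sketch shows the general technique rather than a system actually used in any court.

```python
# Minimal sketch: decision tree analysis, rule extraction and feature
# importance analysis on an invented, simplified "case features" dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical case features (purely invented values):
# [claim_amount (normalised), prior_rulings_in_favour, clause_present]
X = np.array([
    [0.2, 1, 0],
    [0.8, 3, 1],
    [0.5, 0, 1],
    [0.9, 2, 0],
    [0.1, 0, 0],
    [0.7, 4, 1],
])
y = np.array([0, 1, 1, 1, 0, 1])  # 1 = claim upheld (invented labels)

feature_names = ["claim_amount", "prior_rulings_in_favour", "clause_present"]

# Train an interpretable model on the labelled dataset.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Rule extraction: print the learned decision path as human-readable
# if/then rules over the input features.
print(export_text(model, feature_names=feature_names))

# Feature importance analysis: how strongly each input drove the decisions.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

The output is a set of readable if/then conditions on the input features, together with a score for how strongly each feature influenced the outcome, which is the kind of transparent reasoning trail that methods 3 to 5 aim to provide.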

However, not all AI models can be made explainable. Further, some models are not explainable by design but can be given post-hoc explanations[1]. The diagram below represents the different stages of AI explainability.

The three stages of AI explainability: pre-modelling explainability, explainable modelling, and post-modelling explainability.

Source: Bahador Khaleghi, The How of Explainable AI: Post-modelling Explainability, see https://towardsdatascience.com/the-how-of-explainable-ai-post-modelling-explainability-8b4cbc7adf5f
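
As an illustration of post-modelling (post-hoc) explainability, the following sketch applies permutation importance to a model that is not interpretable by design. The data and features are synthetic and hypothetical; the point is only to show how an explanation can be attached to a black-box model after training.

```python
# Minimal sketch: post-hoc explanation of a black-box model via
# permutation importance, on a synthetic, hypothetical dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.random((200, 3))                          # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)   # synthetic outcome

# A model that is not explainable by design.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)
black_box.fit(X, y)

# Post-modelling explainability: measure how much the score drops when each
# feature is shuffled, without opening up the model's internals.
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for i, mean_importance in enumerate(result.importances_mean):
    print(f"feature {i}: {mean_importance:.3f}")
```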

Also, models that are not explainable may give better results in terms of accuracy because of their greater ability to adapt to changes over time. The question then arises: if one model is, say, 95% accurate but not explainable, while another is, say, 70% accurate but explainable, which should be preferred? Should only explainable AI models be used in dispute resolution processes in courts, or should both explainable and non-explainable models be used on the same matters in order to be more confident in the results?
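
A minimal sketch of this trade-off follows, on a purely synthetic dataset (the 95%/70% figures above are illustrative and will not be reproduced on real data): an interpretable shallow decision tree is compared with a less interpretable gradient-boosted ensemble trained on the same data.

```python
# Minimal sketch: accuracy of an interpretable model versus a black-box
# model on the same synthetic, hypothetical data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.random((500, 5))
y = ((X[:, 0] * X[:, 1] + np.sin(3 * X[:, 2])) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

interpretable = DecisionTreeClassifier(max_depth=3, random_state=0)
black_box = GradientBoostingClassifier(random_state=0)

print("interpretable:", interpretable.fit(X_train, y_train).score(X_test, y_test))
print("black box:    ", black_box.fit(X_train, y_train).score(X_test, y_test))
```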

With the increase in discussion around predictive justice and the use of AI in the legal field, it has become essential to analyse to what extent the use of AI is reliable. The recent Dutch child benefits scandal acts as a warning for the use of algorithms in public services[2]. Further, the EU AI Act also places restrictions on high-risk AI systems[3]. Hence, it becomes essential to see in which cases models can be trained to be explainable, and which forms of explainable AI can be used in dispute resolution cases, in order to create more trust and transparency and to provide fair justice.

This series of blog posts will discuss explainable AI models and the scope for their use in dispute resolution processes in courts. It will also set out the limitations of explainable AI models and analyse to what extent reliance can be placed on them when AI is used in dispute resolution processes. In addition, the AI laws/regulations of certain countries will be analysed to see what guidelines or rules exist on the use of explainable AI models. Further, it will consider what guidelines could be developed so that explainable AI models are used more ethically and responsibly, especially where AI is used in dispute resolution processes.

[1] Bahador Khaleghi, The How of Explainable AI: Post-modelling Explainability (July 31, 2019), https://towardsdatascience.com/the-how-of-explainable-ai-post-modelling-explainability-8b4cbc7adf5f

[2] Melissa Heikkilä, Dutch scandal serves as a warning for Europe over risks of using algorithms (March 29, 2022), https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/

[3] European Parliament Legislative Observatory, procedure file 2021/0106(COD) (Artificial Intelligence Act), https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2021/0106(COD)&l=en

The views expressed in all sections are the personal views of the author.
