1. InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?
- Author
Tripathi, Yogesh, Donakanti, Raghav, Girhepuje, Sahil, Kavathekar, Ishan, Vedula, Bhaskara Hanuma, Krishnan, Gokul S, Goyal, Shreya, Goel, Anmol, Ravindran, Balaraman, and Kumaraguru, Ponnurangam
- Abstract
Recent advancements in language technology and Artificial Intelligence have resulted in numerous Language Models being proposed to perform various tasks in the legal domain, ranging from predicting judgments to generating summaries. Despite their immense potential, these models have been shown to learn and exhibit societal biases and make unfair predictions. In this study, we explore the ability of Large Language Models (LLMs) to perform legal tasks in the Indian landscape when social factors are involved. We present a novel metric, the $\beta$-weighted \textit{Legal Safety Score} ($LSS_{\beta}$), which encapsulates both the fairness and accuracy aspects of an LLM. We assess an LLM's safety by considering its performance on the \textit{Binary Statutory Reasoning} task and the fairness it exhibits with respect to various axes of disparity in Indian society. Task performance and fairness scores of the LLaMA and LLaMA-2 models indicate that the proposed $LSS_{\beta}$ metric can effectively determine a model's readiness for safe use in the legal sector. We also propose finetuning pipelines that utilise specialised legal datasets as a potential method to mitigate bias and improve model safety. The finetuning procedures on the LLaMA and LLaMA-2 models increase $LSS_{\beta}$, improving their usability in the Indian legal domain. Our code is publicly released.
- Published
2024
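
The abstract says $LSS_{\beta}$ combines accuracy and fairness under a $\beta$ weighting but does not give the formula. As a hedged illustration only, the sketch below assumes an $F_{\beta}$-style combination of an accuracy score and a fairness score, both in $[0, 1]$; the function name `legal_safety_score` and this exact functional form are assumptions for illustration, not the paper's definition.

```python
def legal_safety_score(accuracy: float, fairness: float, beta: float = 1.0) -> float:
    """Hypothetical F-beta-style combination of accuracy and fairness.

    Both inputs are assumed to lie in [0, 1]. With beta > 1 the score
    weights fairness more heavily; with beta < 1 it favours accuracy.
    NOTE: this form is an illustrative assumption, not the definition
    given in the InSaAF paper.
    """
    if accuracy == 0.0 and fairness == 0.0:
        return 0.0  # avoid 0/0 when both component scores vanish
    b2 = beta ** 2
    return (1 + b2) * accuracy * fairness / (b2 * accuracy + fairness)


# Example: a model with 0.85 task accuracy but only 0.60 fairness,
# with fairness weighted twice as strongly (beta = 2) -> 0.6375.
print(legal_safety_score(0.85, 0.60, beta=2.0))
```

A harmonic-style combination like this is a natural candidate because it stays high only when accuracy and fairness are both high, matching the abstract's framing of safety as requiring both; consult the paper for the actual definition of $LSS_{\beta}$.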