AI and data-driven platforms now have a significant societal impact, ranging from influencing democratic elections and spreading fake news to deciding automated loan applications. This is driven by the widespread adoption of AI-powered systems, including smartphones, search engines, and decision support systems, together with the availability of data and computing power. To gain user trust, it is important to explain to non-expert users how AI systems arrive at a result. Modern IR systems such as search engines often rely on a complex ranking model. For example, how does a lawyer understand why certain documents are retrieved for a particular query? Why is document X ranked much lower although it seems relevant to the search terms? Is it because of the percentage of matching keywords? Or is it due to the topics manifested by the keywords? Were the search terms phrased correctly to capture the information need? To counteract such "black box" behavior, an interdisciplinary field of research has recently emerged that is dedicated to this transparency factor, commonly referred to as Explainable AI (XAI).
With respect to IR systems, we focus on the challenge of explaining how a ranking or relevance list is produced. Relevance in IR is a complex notion in itself: it depends on multiple factors, such as context and application scenario, and is often subjective, shaped by the user's information need. In this context, our research on "Explainable Search" attempts to explain how an item is similar to other items. Such explanations shed light on the concept of relevance, which in turn depends on the degree of similarity between a user query and the underlying items in a corpus (documents, images). Our research provides textual explanations that answer questions like "Why is document X ranked Y for a given query?", as well as visual explanations that compare and contrast images, highlighting the regions of interest the model considers when generating relative rankings. We explore the use of interpretable features and novel re-ranking methods that reward and penalize different facets of evidence.
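The idea of re-ranking with interpretable features can be illustrated with a minimal sketch. This is a hypothetical toy example, not the method from any of the papers below: it scores documents by a weighted sum of human-readable evidence facets (here, keyword overlap as a reward and document length as a penalty, both invented for illustration) and emits a textual explanation of each facet's contribution alongside the rank.

```python
def keyword_overlap(query_terms, doc_terms):
    """Fraction of distinct query terms that appear in the document."""
    if not query_terms:
        return 0.0
    return len(set(query_terms) & set(doc_terms)) / len(set(query_terms))

def rerank_with_explanations(query, docs, weights=None):
    """Re-rank docs by a weighted sum of interpretable facets.

    Returns (doc_id, score, explanation) triples, best first, where the
    explanation lists each facet's value and weight so the ranking can be
    justified to a non-expert user.
    """
    # Hypothetical facet weights: positive values reward, negative penalize.
    weights = weights or {"overlap": 1.0, "length_penalty": -0.2}
    query_terms = query.lower().split()
    results = []
    for doc_id, text in docs.items():
        doc_terms = text.lower().split()
        facets = {
            "overlap": keyword_overlap(query_terms, doc_terms),
            "length_penalty": len(doc_terms) / 1000.0,  # penalize very long documents
        }
        score = sum(weights[f] * v for f, v in facets.items())
        explanation = ", ".join(
            f"{f}={v:.2f} (weight {weights[f]:+.1f})" for f, v in facets.items()
        )
        results.append((doc_id, score, explanation))
    return sorted(results, key=lambda r: r[1], reverse=True)
```

Because every score is a transparent sum of named facets, the system can answer "Why is document X ranked Y?" by reading off the explanation string, rather than pointing at an opaque model score.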
- S. Polley, S. Mondal, V. S. Mannam, K. Kumar, S. Patra, A. Nürnberger: X-Vision: Explainable Image Retrieval by Re-Ranking in Semantic Space. CIKM 2022. pp. 4955-4959. https://doi.org/10.1145/3511808.3557187.
- S. Polley: Towards Explainable Search in Legal Text. ECIR (2) 2022. pp. 528-536. https://doi.org/10.1007/978-3-030-99739-7_65.
- S. Polley, R. R. Koparde, A. B. Gowri, M. Perera, A. Nürnberger: Towards Trustworthiness in the Context of Explainable Search. SIGIR 2021. pp. 2580-2584. https://doi.org/10.1145/3404835.3462799.
- S. Polley, A. Janki, M. Thiel, J. Höbel-Müller, A. Nürnberger: ExDocS: Evidence based Explainable Document Search. 44th ACM SIGIR Workshop on Causality in Search and Recommendation 2021.
- A. Dey, C. Radhakrishna, N. N. Lima, S. Shashidhar, S. Polley, M. Thiel, A. Nürnberger: Evaluating Reliability in Explainable Search. ICHMS 2021. pp. 1-4. https://doi.org/10.1109/ICHMS53169.2021.9582653.