Abstract: Bias in data, along with a lack of transparency and fairness in algorithms, is not a new problem, but with increasing scale, complexity, and adoption, most AI systems now suffer from these issues at an unprecedented level. Information access systems are not spared: these days, almost all large-scale information access systems are mediated by algorithms. These algorithms are optimized not only for relevance, which is subjective to begin with, but also for measures of engagement and impressions. They pick up signals of what may be 'good' from individuals and perpetuate them through learning methods that are opaque and hard to debug. Considering 'fairness' and introducing more transparency can help, but it can also backfire or create other issues. We also need to understand how and why users of these systems engage with content. In this talk, I will share some of our attempts at bringing fairness to ranking systems and then discuss why the solutions are not that simple.
Speaker Bio: Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. His research revolves around intelligent systems. He received his PhD in Information Science from the University of North Carolina (UNC) at Chapel Hill.