Ranking systems that account for the reputation of the users can be biased with respect to different demographic groups, especially when considering multiple sensitive attributes (e.g., gender and age). Providing guarantees of reputation independence can lead to unbiased and effective rankings, which are also robust to attacks. In a study published by the Machine Learning …
Hands on Explainable Recommender Systems with Knowledge Graphs
This tutorial was presented at the RecSys ’22 conference, with Giacomo Balloccu, Gianni Fenu, and Mirko Marras. On the tutorial’s website, you can find the slides, the video recording of our talk, and the notebooks of the hands-on parts. The goal of this tutorial was to present the RecSys community with recent advances in explainable …
Equality of Learning Opportunity via Individual Fairness in Personalized Recommendations
Formalizing the learning opportunities that the recommendation of online courses should offer helps define what fairness means for a platform. A post-processing approach that balances personalization and equality of recommended opportunities can lead to effective and fair recommendations. In a study published by the International Journal of Artificial Intelligence …
Regulating Group Exposure for Item Providers in Recommendation
Platform owners may seek to guarantee certain levels of exposure to providers (e.g., to bring equity or to push the sales of new providers). By granting certain groups of providers the target exposure, beyond-accuracy objectives achieve significant gains with a negligible impact on recommendation utility. In a SIGIR 2022 paper, with Mirko Marras, Guilherme Ramos, and …
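To make the idea of regulating group exposure concrete, here is a minimal sketch of a generic exposure-aware re-ranker, not the procedure from the paper: it greedily builds a top-k list, preferring items whose provider group is still below a target exposure share. The function name, grouping, and targets are hypothetical.

```python
# Generic sketch of exposure-aware re-ranking (illustrative only, not the paper's method).
from collections import defaultdict

def rerank_with_group_targets(ranked_items, provider_group, target_share, k=10):
    """Greedily build a top-k list, preferring items whose provider group
    is still below its target share of exposure in the list built so far."""
    selected, counts = [], defaultdict(int)
    remaining = list(ranked_items)  # assumed sorted by relevance, best first
    while remaining and len(selected) < k:
        # Pick the highest-ranked item whose group is under its target, if any.
        pick = next(
            (it for it in remaining
             if counts[provider_group[it]] / max(len(selected), 1)
             < target_share[provider_group[it]]),
            remaining[0],  # otherwise fall back to pure relevance
        )
        selected.append(pick)
        counts[provider_group[pick]] += 1
        remaining.remove(pick)
    return selected

# Hypothetical usage: two provider groups with a 50/50 exposure target.
items = ["i1", "i2", "i3", "i4", "i5", "i6"]
groups = {"i1": "A", "i2": "A", "i3": "A", "i4": "B", "i5": "B", "i6": "B"}
targets = {"A": 0.5, "B": 0.5}
print(rerank_with_group_targets(items, groups, targets, k=4))
```

With a 50/50 target, this sketch interleaves the two groups as long as the targets are unmet and otherwise falls back to pure relevance ordering.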
Post Processing Recommender Systems with Knowledge Graphs for Recency, Popularity, and Diversity of Explanations
Being able to assess explanation quality in recommender systems, and to shape recommendation lists that account for it, allows us to produce more effective recommendations. These recommendations also increase explanation quality according to the proposed properties, and do so fairly across demographic groups. In a SIGIR 2022 paper, with Giacomo Balloccu, Gianni Fenu, and Mirko Marras, …
Consumer Fairness in Recommender Systems: Contextualizing Definitions and Mitigations
Ensuring non-discrimination for the end users of recommender systems, i.e., consumer fairness, is a key problem. Current research has led to a variety of notions, metrics, and unfairness mitigation procedures. Nevertheless, only around half of the published studies are reproducible. When we compare the existing approaches under the same protocol, we observe unexpected outcomes, such as the …
Enabling cross-continent provider fairness in educational recommender systems
Teachers’ courses are under-recommended by state-of-the-art models, unless the teachers belong to the country that offers the most courses and attracts the most ratings. Regulating how recommendations are distributed with respect to the teachers’ country of provenance enables equitable and effective recommendations (cross-continent provider fairness). In a paper published in the Future Generation Computing …
Provider fairness across continents in collaborative recommender systems
In the presence of data imbalances, where some demographic groups of providers are represented more than others, the items of all demographic groups other than the majority group are under-recommended. A mitigation that accounts for the representation of each demographic group makes it possible to introduce equity into the recommendation process, without having an impact …
Evaluating the Prediction Bias Induced by Label Imbalance in Multi-label Classification
Prediction bias is a well-known problem in classification algorithms, which tend to be skewed towards the more represented classes. This phenomenon is even more pronounced in multi-label scenarios, where the number of underrepresented classes is usually larger. In light of this, we present a novel measure that aims to assess the bias induced by label imbalance …
Reputation Equity in Ranking Systems
Reputation-based ranking systems can be biased with respect to the sensitive attributes of the users, meaning that certain demographic groups receive systematically lower reputation scores. However, even if we debias the reputation scores with respect to one sensitive attribute, bias can still occur with respect to other sensitive attributes. For this reason, reputation scores should be unbiased independently of any sensitive attribute …
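As a minimal illustration of what debiasing reputation with respect to a single sensitive attribute could look like (the kind of single-attribute adjustment that, as noted above, is not sufficient on its own), one could standardize scores within each demographic group. The sketch below is hypothetical and not the method from the study; all names and data are made up.

```python
# Illustrative sketch: per-group standardization of reputation scores
# (a simple way to remove group-level shifts; not the study's method).
import statistics

def debias_reputation(scores, group_of):
    """Standardize each user's reputation score within their demographic group,
    so that no group has systematically higher or lower scores."""
    groups = {}
    for user, score in scores.items():
        groups.setdefault(group_of[user], []).append(score)
    # Fall back to 1.0 when a group's scores are all equal (avoids division by zero).
    stats = {g: (statistics.mean(v), statistics.pstdev(v) or 1.0) for g, v in groups.items()}
    return {u: (s - stats[group_of[u]][0]) / stats[group_of[u]][1] for u, s in scores.items()}

# Hypothetical usage: two groups whose raw scores differ systematically.
raw = {"u1": 0.9, "u2": 0.8, "u3": 0.4, "u4": 0.3}
group = {"u1": "g1", "u2": "g1", "u3": "g2", "u4": "g2"}
print(debias_reputation(raw, group))
```

The argument above is that such an adjustment should hold regardless of which sensitive attribute, or combination of attributes, is considered.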