Algorithmic fairness · Recommender systems

Fair Augmentation for Graph Collaborative Filtering

While fairness in Graph Collaborative Filtering remains under-explored, and existing evaluations are often inconsistent across methodologies, targeted graph augmentation can effectively mitigate demographic biases while maintaining high recommendation utility.

Fairness in recommender systems is not just an ethical challenge but a measurable, achievable goal. In a paper written in collaboration with Francesco Fabbri, Gianni Fenu, Mirko Marras, and Giacomo Medda, we show how graph augmentation can systematically reduce bias while maintaining or even enhancing utility across diverse datasets and models. We also introduce FA4GCF, a reproducibility-focused framework designed to explore and extend fairness methodologies in Graph Collaborative Filtering (GCF).

Unfairness in recommendations

Recommender systems often amplify biases present in user-item interactions, resulting in unequal utility across demographic groups. In GCF, where Graph Neural Networks (GNNs) leverage high-order graph structures, these disparities can become even more pronounced. While fairness-aware algorithms exist, most focus on training-phase modifications (in-processing), leaving pre- and post-processing techniques relatively underexplored.
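To make "unequal utility" concrete: a common way to quantify it is to compute a ranking metric such as NDCG per user, average it within each demographic group, and report the gap between groups. Below is a minimal Python sketch of that measurement, assuming binary relevance and exactly two groups; the function names, data structures, and top-k cutoff are illustrative and not the paper's exact evaluation protocol.

```python
import numpy as np

def ndcg_at_k(ranked_items, relevant_items, k=10):
    """Binary-relevance NDCG@k for a single user."""
    gains = [1.0 if item in relevant_items else 0.0 for item in ranked_items[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant_items), k)))
    return dcg / idcg if idcg > 0 else 0.0

def utility_gap(recommendations, ground_truth, group_of, k=10):
    """Absolute difference in mean NDCG@k between two demographic groups.

    recommendations : {user: ranked list of item ids}
    ground_truth    : {user: set of held-out relevant item ids}
    group_of        : {user: group label}, assumed to take exactly two values
    """
    per_group = {}
    for user, ranked in recommendations.items():
        per_group.setdefault(group_of[user], []).append(
            ndcg_at_k(ranked, ground_truth[user], k))
    means = {g: float(np.mean(scores)) for g, scores in per_group.items()}
    g1, g2 = means  # exactly two groups, e.g. split by gender or age bracket
    return abs(means[g1] - means[g2]), means
```

A gap close to zero means both groups receive recommendations of comparable quality; fairness-aware methods aim to shrink this gap without lowering the overall average.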

Fair augmentation for graph collaborative filtering

This paper presents Fair Augmentation for Graph Collaborative Filtering (FA4GCF), a comprehensive framework for exploring fairness-enhancing techniques in GCF. By addressing key gaps in previous research, the framework emphasizes both reproducibility and the practical application of fairness-aware methods.

Main contributions

  1. Reproducibility-driven methodology. FA4GCF addresses the lack of rigorous evaluations in prior work by formalizing fairness interventions, offering a consistent setup to test and extend methodologies.
  2. Fair graph augmentation. Through iterative graph modifications, FA4GCF balances recommendation utility across demographic groups by adding targeted edges. A fairness-aware loss function ensures that demographic parity improves without significant utility trade-offs (a simplified sketch of this idea follows the list).
  3. Extended sampling policies. The framework introduces advanced policies such as Interaction Recency and PageRank, which enable precise targeting of disadvantaged users and influential items during graph augmentation.
  4. Scalable evaluations across models and data. Experiments span five real-world datasets (e.g., MovieLens, Last.FM, Foursquare) and 16 recommender systems, including 11 GNN-based models, to validate the generalizability of FA4GCF.
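To illustrate how contributions 2 and 3 fit together, here is a heavily simplified, hypothetical PyTorch sketch, not the paper's implementation: a sampling policy proposes candidate user-item edges, and a soft mask over those edges is optimized with a loss that combines a squared group-utility gap and a sparsity penalty. The one-hop embedding update used as a surrogate for retraining the GNN, the function name fair_augment, and the default hyperparameters are all assumptions made for brevity.

```python
import torch

def fair_augment(user_emb, item_emb, candidate_edges, user_group, val_item,
                 epochs=200, lr=0.05, edge_penalty=1e-3):
    """Illustrative sketch (not FA4GCF's code): learn which candidate edges to add.

    user_emb, item_emb : frozen embeddings of a pre-trained recommender
    candidate_edges    : (user, item) pairs proposed by a sampling policy,
                         e.g. recent interactions or high-PageRank items
    user_group         : 0/1 tensor with each user's demographic group
    val_item           : one held-out positive item per user (utility proxy)
    """
    mask = torch.nn.Parameter(torch.zeros(len(candidate_edges)))
    opt = torch.optim.Adam([mask], lr=lr)
    e_users = torch.tensor([u for u, _ in candidate_edges])
    e_items = torch.tensor([i for _, i in candidate_edges])

    for _ in range(epochs):
        w = torch.sigmoid(mask)  # soft "add this edge" weights in (0, 1)

        # One-hop surrogate for retraining on the augmented graph: each weighted
        # candidate edge (u, i) pushes item i's embedding into user u's profile.
        aug_user_emb = user_emb.index_add(0, e_users, w.unsqueeze(1) * item_emb[e_items])

        # Proxy utility: score of each user's held-out positive item.
        util = (aug_user_emb * item_emb[val_item]).sum(dim=1)
        gap = util[user_group == 0].mean() - util[user_group == 1].mean()

        loss = gap.pow(2) + edge_penalty * w.sum()  # fairness term + keep edits sparse
        opt.zero_grad()
        loss.backward()
        opt.step()

    keep = torch.sigmoid(mask) > 0.5  # discretize: edges actually added to the graph
    return [edge for edge, k in zip(candidate_edges, keep.tolist()) if k]
```

The division of labor matters: the sampling policy (point 3) only decides which edges are eligible, while the fairness-aware objective (point 2) decides which of them are actually added, keeping the augmentation both targeted and sparse.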

Evaluation

The reproducibility study and experimental evaluations yielded several insights:

  • Fairness improvements. FA4GCF effectively mitigates disparities between demographic groups, particularly in larger datasets with richer interaction patterns.
  • Utility preservation. In most cases, augmented graphs maintained or enhanced overall recommendation quality while improving fairness.
  • Dataset dependency. The effectiveness of the augmentation process depends on dataset size and structure, with larger datasets demonstrating greater benefits.
  • Transferability challenges. Fairness knowledge embedded in augmented graphs was less effective when transferred to non-GNN models, pointing to opportunities for future innovation.

Future directions

The findings of FA4GCF open several opportunities for future research:

  • Developing universally transferable fairness methods. Exploring augmentations that embed fairness across diverse model architectures.
  • Broadening attribute coverage. Incorporating additional demographic attributes to address fairness comprehensively.
  • Real-world deployments. Applying FA4GCF to production-scale systems to understand its impact on live recommendations.

Conclusions

FA4GCF is a reproducibility-driven framework that advances fairness research in GCF. By offering a robust methodology for evaluating and improving consumer fairness, the framework lays the foundation for inclusive and equitable recommender systems.