Algorithmic fairness · Recommender systems

Enhancing recommender systems with provider fairness through preference distribution awareness

Users in specific geographic areas often have distinct preferences regarding the provenance of the items they consume. However, current recommender systems fail to align these preferences with provider visibility, resulting in demographic inequities. By employing re-ranking, it is possible to achieve preference distribution-aware provider fairness, ensuring equitable recommendations with minimal trade-offs in effectiveness.

Recommender systems have become integral to how we interact with digital platforms, driving content discovery across domains like e-commerce, education, and entertainment. While these systems have traditionally focused on maximizing accuracy and user satisfaction, fairness considerations—especially for content providers—are becoming increasingly critical. Fairness ensures equitable visibility for providers and aligns recommendations with diverse user preferences.

In a study conducted in cooperation with Elizabeth Gómez, David Contreras, and Maria Salamó, and published in the International Journal of Information Management Data Insights (Elsevier), we introduce a novel framework for enhancing provider fairness in recommender systems. By addressing geographic demographic disparities and incorporating user preferences, we propose an approach that achieves preference distribution-aware provider fairness.

The Challenge

Current recommender systems often exhibit biases toward majority groups in training data, such as over-representing items from dominant geographic regions. This bias limits smaller or underrepresented providers from reaching relevant audiences, affecting their visibility and business opportunities.

Traditional approaches to provider fairness primarily focus on ensuring visibility for provider groups but fail to account for who receives these recommendations. This perspective can lead to irrelevant or culturally mismatched suggestions for users.

Contributions

Our study introduces a re-ranking algorithm designed to:

  1. Align recommendations with user preferences for providers from specific demographic groups.
  2. Mitigate geographic demographic disparities while maintaining recommendation effectiveness.

The approach groups users and providers by geographic continent and analyzes the preference patterns between these groups. It then adjusts recommendation lists through a bucket-based re-ranking process, ensuring fairness while preserving the distribution of user preferences.
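The preference patterns mentioned above can be estimated directly from interaction data. A minimal sketch, using hypothetical names rather than the paper's exact API: for each user continent, count how often its users interact with providers from each continent, then normalize the counts into a distribution.

```python
from collections import Counter, defaultdict

def preference_distribution(interactions, user_continent, item_continent):
    """Estimate, for each user continent, the share of interactions
    that go to providers from each continent.

    `interactions` is an iterable of (user_id, item_id) pairs;
    `user_continent` and `item_continent` map ids to continent labels.
    (Illustrative simplification, not the study's exact procedure.)
    """
    counts = defaultdict(Counter)
    for user, item in interactions:
        counts[user_continent[user]][item_continent[item]] += 1
    # Normalize each continent's counts into a preference distribution.
    return {
        uc: {ic: n / sum(c.values()) for ic, n in c.items()}
        for uc, c in counts.items()
    }
```

The resulting per-continent distributions are what a distribution-aware re-ranker would try to preserve in each user's recommendation list.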

Methodology: the DAP-Fair algorithm

The proposed Distribution-Aware Provider Fairness (DAP-Fair) algorithm operates in two main steps:

  1. Bucket creation. User-item pairs are grouped into buckets based on geographic origins. These buckets are ranked based on relevance and the extent of underrepresentation.
  2. Re-ranking. Items are selected for recommendation lists to balance underrepresented groups while minimizing losses in relevance.

Three phases ensure comprehensive mitigation:

  • Strict alignment with user and provider demographics.
  • Relaxation of constraints when strict alignment is infeasible.
  • Optimization to fill recommendation slots with the most relevant items.
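The steps above can be illustrated with a deliberately simplified greedy sketch. It is not the paper's DAP-Fair implementation: the function names, the quota rounding, and the collapse of the relaxation and optimization phases into a single relevance-ordered fill are all assumptions made for brevity.

```python
from collections import defaultdict

def rerank(candidates, item_continent, target_share, k):
    """Re-rank one user's candidates so the top-k roughly matches a
    target continent distribution (hypothetical simplification of a
    bucket-based re-ranking procedure).

    `candidates`: list of (item_id, relevance), sorted by relevance desc.
    `target_share`: continent -> desired fraction of the top-k list.
    """
    # Bucket creation: group candidates by provider continent,
    # keeping each bucket in relevance order.
    buckets = defaultdict(list)
    for item, score in candidates:
        buckets[item_continent[item]].append((item, score))

    # Phase 1 (strict alignment): reserve slots per continent.
    quota = {c: round(share * k) for c, share in target_share.items()}
    result = []
    for cont, q in quota.items():
        result.extend(buckets[cont][:q])
        buckets[cont] = buckets[cont][q:]

    # Phases 2-3 (relax + optimize, merged here): if quotas could not
    # be met, fill remaining slots with the most relevant leftovers.
    leftover = sorted(
        (pair for bucket in buckets.values() for pair in bucket),
        key=lambda p: p[1], reverse=True,
    )
    result.extend(leftover[: k - len(result)])

    # Present the final list in relevance order.
    return sorted(result, key=lambda p: p[1], reverse=True)[:k]
```

The key property this toy version shares with the described algorithm is that fairness constraints decide *which* items enter the list, while relevance decides their final order.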

Results and insights

Using two datasets, Book-Crossing (books) and COCO (courses), we demonstrate that DAP-Fair is effective at:

  • Reducing disparity. The algorithm achieved significant reductions in disparity compared to baseline models. For instance, the disparity in the Book-Crossing dataset decreased from 4.9% (BPRMF) to under 3%.
  • Retaining effectiveness. Despite the fairness adjustments, recommendation quality, measured by Normalized Discounted Cumulative Gain (NDCG), remained largely unaffected.
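Both evaluation axes are straightforward to compute. Below is a sketch of the standard binary-relevance NDCG@k, plus one simple disparity measure, the total absolute deviation of recommended exposure shares from target shares; the latter is an assumption for illustration, as the paper may define disparity differently.

```python
import math

def ndcg_at_k(recommended, relevant, k):
    """Binary-relevance NDCG@k (standard definition)."""
    dcg = sum(1 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

def disparity(exposure_share, target_share):
    """Total absolute deviation between recommended exposure shares and
    target shares, halved so the result lies in [0, 1].
    (An illustrative metric, not necessarily the paper's.)"""
    groups = set(exposure_share) | set(target_share)
    return sum(abs(exposure_share.get(g, 0.0) - target_share.get(g, 0.0))
               for g in groups) / 2
```

Tracking both numbers before and after re-ranking is what makes the fairness/effectiveness trade-off in the results measurable.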

Conclusions

Our research highlights the importance of considering both users’ preferences and providers’ demographics in achieving fairness. The bucket-based re-ranking framework represents a solution applicable to various fairness contexts.

Future work will extend the approach to include dynamic attributes, such as temporal changes in user preferences, and explore fairness beyond geographic demographics.