Algorithmic fairness · User profiling

Toward a Responsible Fairness Analysis: From Binary to Multiclass and Multigroup Assessment in Graph Neural Network-Based User Modeling Tasks

Transitioning from binary to multiclass and multigroup fairness metrics uncovers hidden biases in GNN-based user modeling. Achieving true fairness requires fine-grained evaluation of real-world data distributions to ensure equity across all user groups and attributes.

In an era dominated by artificial intelligence, ensuring fairness in automated decision-making has emerged as a critical priority. In this study, carried out in collaboration with Erasmo Purificato and Ernesto William De Luca and published in the Minds and Machines journal (Springer), we address the limitations of existing fairness metrics.

Contextualizing the problem

User modeling and fairness challenges. User modeling techniques, essential in applications like social networks and recommender systems, rely on machine learning to classify user attributes such as age or gender. Traditionally, fairness metrics have focused on binary classifications, which oversimplify real-world scenarios where sensitive attributes often span multiple groups (e.g., age ranges or racial categories). The prevalent practice of binarizing these attributes compromises the integrity of fairness evaluations and obscures biases against specific groups.

Graph Neural Networks (GNNs). GNNs are state-of-the-art tools for user modeling tasks, leveraging graph structures to model relationships among users and their interactions. However, existing fairness evaluations for GNNs rely primarily on binary metrics, which may not adequately reflect real-world biases.

Our contributions

We propose novel extensions to existing fairness metrics, moving beyond binary evaluations to encompass multiclass and multigroup scenarios. Our work answers two key research questions:

  1. Impact of multigroup metrics. How do multigroup fairness metrics affect the evaluation of model fairness compared to binary metrics?
  2. Improving bias detection. Can multiclass and multigroup metrics uncover biases and guide mitigation strategies more effectively than traditional methods?

Methodology

Extended fairness metrics. We extend four fairness metrics (statistical parity, equal opportunity, overall accuracy equality, and treatment equality) to handle multiclass and multigroup settings; a minimal sketch of how the first two can be computed follows the list. Specifically:

  • Multiclass and multigroup statistical parity. Requires that, for every target class, the probability of being assigned to that class is the same across all sensitive groups.
  • Multiclass and multigroup equal opportunity. Requires that all sensitive groups achieve equal true positive rates for every class.
  • Multiclass and multigroup overall accuracy equality. Requires that prediction accuracy is consistent across all combinations of sensitive groups and classes, maintaining balanced performance.
  • Multiclass and multigroup treatment equality. Requires that the ratio of false negatives to false positives is consistent across all sensitive group and class combinations, keeping misclassification patterns balanced.
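
As a concrete illustration of the first two definitions, here is a minimal sketch of how per-class gaps could be computed from model predictions. The function names and the max-min gap summarization are illustrative choices for this post and are not taken from the paper or its released code.

```python
import numpy as np

def statistical_parity_gaps(y_pred, sensitive):
    """Multiclass/multigroup statistical parity: for every target class,
    compare P(y_pred = c | group) across all sensitive groups and report
    the largest gap (0 means perfect parity for that class)."""
    gaps = {}
    for c in np.unique(y_pred):
        rates = [np.mean(y_pred[sensitive == g] == c) for g in np.unique(sensitive)]
        gaps[c] = max(rates) - min(rates)
    return gaps

def equal_opportunity_gaps(y_true, y_pred, sensitive):
    """Multiclass/multigroup equal opportunity: for every class, compare the
    true positive rate P(y_pred = c | y_true = c, group) across groups."""
    gaps = {}
    for c in np.unique(y_true):
        tprs = []
        for g in np.unique(sensitive):
            mask = (sensitive == g) & (y_true == c)
            if mask.any():
                tprs.append(np.mean(y_pred[mask] == c))
        gaps[c] = max(tprs) - min(tprs) if len(tprs) > 1 else 0.0
    return gaps
```

A gap of zero for every class means the corresponding metric is satisfied exactly; in practice, one compares the gaps obtained under the binary and multigroup encodings of the same sensitive attribute.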

Experimental framework. We tested our metrics on four real-world datasets (Alibaba, JD, Pokec, and NBA) using two state-of-the-art GNN models, CatGCN and RHGN. The experiments spanned binary, multigroup, and multiclass scenarios to compare fairness assessments across different settings.

Main findings

  1. Limitations of binary metrics
    • Binary metrics often fail to identify disadvantaged subgroups within larger categories.
    • Binarization can create misleading perceptions of fairness.
  2. Advantages of multigroup metrics
    • Multigroup analysis revealed hidden biases that binary metrics could not detect.
    • Fine-grained assessments uncovered specific subgroups affected by unfair treatment.
  3. Enhanced bias detection
    • Multiclass and multigroup metrics provided a more nuanced view of fairness.
    • These metrics helped identify cases where binarization masked significant biases, allowing for targeted mitigation (the toy example below illustrates this masking effect).
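
To see why binarization can mask disparities, consider the following toy sketch. The numbers and the age-group encoding are invented for illustration and are not drawn from the paper's datasets: merging two of three age groups makes statistical parity look perfect even though per-group assignment rates differ by 40 percentage points.

```python
import numpy as np

# Illustrative numbers only, not taken from the paper's experiments.
# Three age groups whose rates of assignment to the favourable class differ.
sensitive = np.array([0] * 100 + [1] * 100 + [2] * 100)   # 0: young, 1: middle, 2: older
y_pred = np.concatenate([
    np.repeat([1, 0], [20, 80]),   # young:  20% assigned to class 1
    np.repeat([1, 0], [60, 40]),   # middle: 60% assigned to class 1
    np.repeat([1, 0], [40, 60]),   # older:  40% assigned to class 1
])

# Binary view: merge "young" and "middle" against "older".
binarized = (sensitive == 2).astype(int)
print(y_pred[binarized == 0].mean(), y_pred[binarized == 1].mean())   # 0.4 0.4 -> looks fair

# Multigroup view: per-group rates expose a 0.4 gap hidden by the merge.
print([y_pred[sensitive == g].mean() for g in (0, 1, 2)])             # [0.2, 0.6, 0.4]
```

The multigroup gap (0.6 for the middle group versus 0.2 for the young group) is exactly the kind of disparity that the extended metrics are designed to surface.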

Practical implications

Real-world applications. The proposed metrics offer actionable insights for improving fairness in various applications:

  • Recommender systems. Avoiding biases in recommendations for diverse demographic groups.
  • Social media platforms. Ensuring equitable representation and engagement across user communities.

Responsible AI development. This research aligns with the principles of Responsible AI, emphasizing accountability and transparency. By adopting multiclass and multigroup metrics, developers can design AI systems that better respect societal values.

Conclusions

The shift from binary to multiclass and multigroup fairness metrics represents an advance towards more responsible and ethical AI. By adopting these metrics, we can build AI systems that are not only more accurate but also more equitable and inclusive.

For further details, we have made the source code and datasets available here.