Modern user profiling approaches capture different forms of interaction data, from user-item to user-user relationships. Graph Neural Networks (GNNs) have become a natural way to model these behaviors and build efficient and effective user profiles. However, each GNN-based user profiling approach processes information in its own way, a heterogeneity that hinders benchmarking across techniques. Standardizing the input needed to run three state-of-the-art GNN-based models for user profiling, while also assessing how fair each model is, therefore remains an open challenge.
In our SIGIR ’23 demo, built with Mohamed Abdelrazek, Erasmo Purificato, and Ernesto William De Luca, we address this challenge with FairUP, a novel framework for assessing the algorithmic fairness of GNN-based models for user profiling tasks. The framework builds on our earlier, first-of-its-kind analysis of the fairness of state-of-the-art GNN-based behavioral user profiling models, published in our CIKM 2022 study.
The source code is available at https://link.erasmopurif.com/FairUP-source-code, and the web application is available at https://link.erasmopurif.com/FairUP.
FairUP empowers researchers and practitioners to examine the classification performance and fairness scores of the included models side by side. The framework, whose architecture is shown below, comprises several components that allow end-users to carry out the following steps, each illustrated with a short sketch after the list:
- compute the fairness of the input dataset by means of a pre-processing fairness metric, i.e., disparate impact;
- mitigate the unfairness of the dataset, if needed, by applying different debiasing methods, i.e., sampling, reweighting, and disparate impact remover;
- standardize the input (a graph in Neo4j or NetworkX format) for each of the included GNNs;
- train one or more GNN models, specifying the parameters for each of them;
- evaluate post-hoc fairness by exploiting four metrics, i.e., statistical parity, equal opportunity, overall accuracy equality, and treatment equality.
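For the pre-processing step, disparate impact is conventionally defined as the ratio between the positive-outcome rates of the unprivileged and the privileged group, with values far from 1 (commonly below 0.8) read as a sign of bias. Below is a minimal NumPy sketch of that standard definition; the function and variable names are ours, not FairUP's.

```python
import numpy as np

def disparate_impact(y: np.ndarray, s: np.ndarray) -> float:
    """Disparate impact: P(y = 1 | s = 0) / P(y = 1 | s = 1).

    y -- binary outcome labels (1 = positive outcome)
    s -- binary sensitive attribute (1 = privileged group)
    """
    rate_unprivileged = y[s == 0].mean()  # positive rate in the unprivileged group
    rate_privileged = y[s == 1].mean()    # positive rate in the privileged group
    return rate_unprivileged / rate_privileged

# Toy example: a balanced random dataset yields a value close to 1.
rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=1000)
s = rng.integers(0, 2, size=1000)
print(f"Disparate impact: {disparate_impact(y, s):.3f}")
```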
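Of the listed debiasing methods, reweighting is the easiest to sketch: in the spirit of Kamiran and Calders' reweighing, each instance receives the weight P(s)P(y)/P(s, y), which makes the sensitive attribute and the label statistically independent in the weighted data. A minimal sketch assuming discrete s and y; FairUP's actual implementation may differ.

```python
import numpy as np

def reweighting(y: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Instance weights w(s, y) = P(s) * P(y) / P(s, y).

    After weighting, the sensitive attribute s carries no information
    about the label y, which removes label bias before training.
    """
    weights = np.ones(len(y), dtype=float)
    for s_val in np.unique(s):
        for y_val in np.unique(y):
            mask = (s == s_val) & (y == y_val)
            if mask.any():
                p_joint = mask.mean()                                   # P(s, y)
                p_expected = (s == s_val).mean() * (y == y_val).mean()  # P(s) * P(y)
                weights[mask] = p_expected / p_joint
    return weights
```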
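The standardization step is model-specific in FairUP, but the common denominator for most GNNs is an adjacency matrix plus aligned node-level arrays. Purely as an illustration of what such a conversion involves, here is a hypothetical NetworkX-to-array routine; the node attribute keys `feat`, `label`, and `sensitive` are assumptions, not FairUP's schema.

```python
import networkx as nx
import numpy as np

def standardize(g: nx.Graph, feat_key="feat", label_key="label", sens_key="sensitive"):
    """Turn a NetworkX graph into the dense arrays a GNN pipeline expects.

    Returns (A, X, y, s): adjacency matrix, node-feature matrix, labels,
    and sensitive-attribute vector, all in one fixed node order.
    """
    nodes = sorted(g.nodes())
    A = nx.to_numpy_array(g, nodelist=nodes)               # adjacency matrix
    X = np.array([g.nodes[n][feat_key] for n in nodes])    # node features
    y = np.array([g.nodes[n][label_key] for n in nodes])   # target labels
    s = np.array([g.nodes[n][sens_key] for n in nodes])    # sensitive attribute
    return A, X, y, s
```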
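FairUP ships its own model implementations, so the training step boils down to selecting models and their parameters in the interface. To make the step concrete, here is a generic two-layer GCN for node classification in PyTorch Geometric; the architecture and hyperparameters are ours, not those of any of the three included models.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """Two-layer graph convolutional network for node classification."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

def train(model, data, epochs: int = 200, lr: float = 0.01):
    """Standard full-batch training loop on a torch_geometric Data object."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()
```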
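Finally, the four post-hoc metrics all compare model behavior across the groups induced by the sensitive attribute: statistical parity compares positive-prediction rates, equal opportunity compares true positive rates, overall accuracy equality compares accuracies, and treatment equality compares the ratio of false negatives to false positives. A minimal sketch that reports each as an absolute across-group difference (zero means perfectly fair under that metric):

```python
import numpy as np

def fairness_report(y_true: np.ndarray, y_pred: np.ndarray, s: np.ndarray) -> dict:
    """Absolute differences of the four metrics between groups s == 0 and s == 1."""
    def group_stats(mask):
        yt, yp = y_true[mask], y_pred[mask]
        tp = int(((yp == 1) & (yt == 1)).sum())
        fp = int(((yp == 1) & (yt == 0)).sum())
        fn = int(((yp == 0) & (yt == 1)).sum())
        return {
            "pos_rate": yp.mean(),        # P(y_hat = 1 | s)
            "tpr": tp / max(tp + fn, 1),  # P(y_hat = 1 | y = 1, s)
            "acc": (yp == yt).mean(),     # P(y_hat = y | s)
            "fn_fp": fn / max(fp, 1),     # FN / FP (guard against FP = 0)
        }
    g0, g1 = group_stats(s == 0), group_stats(s == 1)
    return {
        "statistical_parity": abs(g0["pos_rate"] - g1["pos_rate"]),
        "equal_opportunity": abs(g0["tpr"] - g1["tpr"]),
        "overall_accuracy_equality": abs(g0["acc"] - g1["acc"]),
        "treatment_equality": abs(g0["fn_fp"] - g1["fn_fp"]),
    }
```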