Community-Aware Graph Transformers for Reducing Degree Bias in Node Representation Learning

Abstract

Graphs offer a flexible way to represent relational data in many fields, including social networks, communication networks, biological networks, and financial systems. Conventional graph representation learning algorithms, especially Graph Neural Networks (GNNs), are prone to degree bias, in which high-degree nodes dominate the message-passing process, yielding unbalanced embeddings and impaired generalization, particularly in heterogeneous networks. To address this problem, we introduce a Community-Aware Graph Transformer (CGT) that incorporates community structure information into the attention mechanism to improve node-level information aggregation and alleviate degree bias. Three real-world network datasets (MIT Reality Mining, Enron Email, and Facebook Social Network) were preprocessed and merged into a unified dataset with extracted graph features, including node degree, clustering coefficient, and PageRank. Classical machine learning models (Logistic Regression, Random Forest, Gradient Boosting, SVM, KNN, Decision Tree) and deep learning models (DNN, RNN, LSTM, GRU, CNN1D, Bi-LSTM) were used to assess the predictive performance of the node embeddings produced by CGT. The findings indicate that the deep learning models performed better overall, with the RNN-based models achieving the best accuracy (99.97%), precision (99.90%), recall (99.87%), F1-score (99.89%), and Cohen's Kappa (99.95%). Ensemble-based classical ML models such as Random Forest and Gradient Boosting also performed strongly, reaching 100% on all metrics, whereas the simpler models showed slight limitations.
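The graph features named above (node degree, clustering coefficient, and PageRank) can be sketched in plain Python. This is a minimal illustration on a hypothetical toy graph, not the paper's preprocessing pipeline; the edge list and all function names are assumptions for demonstration.

```python
from collections import defaultdict

# Hypothetical toy undirected graph as an adjacency list, standing in
# for the merged real-world network data described in the abstract.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
nodes = sorted(adj)

# Node degree: number of neighbours.
degree = {n: len(adj[n]) for n in nodes}

def clustering(n):
    """Local clustering coefficient: fraction of neighbour pairs that are linked."""
    k = len(adj[n])
    if k < 2:
        return 0.0
    links = sum(1 for u in adj[n] for v in adj[n] if u < v and v in adj[u])
    return 2.0 * links / (k * (k - 1))

clust = {n: clustering(n) for n in nodes}

def pagerank(adj, nodes, d=0.85, iters=100):
    """PageRank by power iteration with damping factor d."""
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        pr = {v: (1 - d) / n + d * sum(pr[u] / len(adj[u]) for u in adj[v])
              for v in nodes}
    return pr

pr = pagerank(adj, nodes)
```

In practice such features are typically computed with a graph library (e.g. NetworkX) over the full merged dataset; the pure-Python version above only makes the three definitions concrete.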

Zaid Mousa Abbood Al-Shaibany¹

¹ Department of Computer Engineering - Software, Faculty of Technical and Engineering, Islamic Azad University, South Tehran Branch, Iran

Country: Iran

IRJIET, Volume 9, Issue 10, October 2025, pp. 96-104

doi.org/10.47001/IRJIET/2025.910013

References

  1. V.T. Hoang, H. J. Jeon, E. S. You, Y. Yoon, S. Jung, and O. J. Lee, “Graph Representation Learning and Its Applications: A Survey,” Sensors, vol. 23, no. 8, pp. 1–104, 2023, doi: 10.3390/s23084168.
  2. A.Subramonian, J. Kang, and Y. Sun, “Theoretical and Empirical Insights into the Origins of Degree Bias in Graph Neural Networks,” Adv. Neural Inf. Process. Syst., vol. 37, no. NeurIPS, pp. 1–47, 2024.
  3. V.L. Dao, C. Bothorel, and P. Lenca, “Community structure: A comparative evaluation of community detection methods,” Netw. Sci., vol. 8, pp. 1–41, Jan. 2020, doi: 10.1017/nws.2019.59.
  4. X. Yang, M. Yan, S. Pan, X. Ye, and D. Fan, “Simple and Efficient Heterogeneous Graph Neural Network,” Proc. AAAI Conf. Artif. Intell., vol. 37, pp. 10816–10824, Jun. 2023, doi: 10.1609/aaai.v37i9.26283.
  5. A.Garg, “Graph Transformers without Positional Encodings,” arXiv:2401.17791v3, 2024, [Online]. Available: http://arxiv.org/abs/2401.17791
  6. A.Mara, J. Lijffijt, S. Günnemann, and T. De Bie, “A Systematic Evaluation of Node Embedding Robustness,” Proc. Mach. Learn. Res., vol. 198, no. LoG, 2022.
  7. Y. Zhao, X. Li, Y. Zhu, J. Li, S. Wang, and B. Jiang, “A Scalable Deep Network for Graph Clustering via Personalized PageRank,” Appl. Sci., vol. 12, no. 11, 2022, doi: 10.3390/app12115502.
  8. A.N.S. Kinasih, A. N. Handayani, J. T. Ardiansah, and N. S. Damanhuri, “Comparative analysis of decision tree and random forest classifiers for structured data classification in machine learning,” Sci. Inf. Technol. Lett., vol. 5, no. 2, pp. 13–24, 2024, doi: 10.31763/sitech.v5i2.1746.
  9. V. T. Hoang, H. J. Jeon, and O. J. Lee, “Mitigating Degree Bias in Graph Representation Learning With Learnable Structural Augmentation and Structural Self-Attention,” IEEE Trans. Netw. Sci. Eng., pp. 1–15, 2025, doi: 10.1109/TNSE.2025.3563697.
  10. Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Deep Graph Contrastive Representation Learning,” arXiv:2006.04131v2, pp. 1–17, 2020, [Online]. Available: http://arxiv.org/abs/2006.04131
  11. A.E. Samy, Z. T. Kefato, and Š. Girdzijauskas, “Data-Driven Self-Supervised Graph Representation Learning,” Front. Artif. Intell. Appl., vol. 372, pp. 629–636, 2023, doi: 10.3233/FAIA230325.
  12. R. Wang, X. Wang, C. Shi, and L. Song, “Uncovering the Structural Fairness in Graph Contrastive Learning,” Adv. Neural Inf. Process. Syst., vol. 35, no. NeurIPS, pp. 1–18, 2022.
  13. K. Xu, S. Jegelka, W. Hu, and J. Leskovec, “How powerful are graph neural networks?,” 7th Int. Conf. Learn. Represent. ICLR 2019, pp. 1–17, 2019.
  14. C. Ying et al., “Do Transformers Really Perform Bad for Graph Representation?,” Adv. Neural Inf. Process. Syst., vol. 34, no. February 2024, pp. 28877–28888, 2021.
  15. D. Chen, L. O’Bray, and K. Borgwardt, “Structure-Aware Transformer for Graph Representation Learning,” Proc. Mach. Learn. Res., vol. 162, pp. 3469–3489, 2022.
  16. Z. Zhang, Q. Liu, Q. Hu, and C. K. Lee, “Hierarchical Graph Transformer with Adaptive Node Sampling,” Adv. Neural Inf. Process. Syst., vol. 35, no. NeurIPS, pp. 1–17, 2022.
  17. J. Zhu, C. Gao, Z. Yin, X. Li, and J. Kurths, “Propagation Structure-Aware Graph Transformer for Robust and Interpretable Fake News Detection,” in Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, in KDD ’24. New York, NY, USA: Association for Computing Machinery, 2024, pp. 4652–4663. doi: 10.1145/3637528.3672024.
  18. V. T. Hoang and O. J. Lee, “Transitivity-Preserving Graph Representation Learning for Bridging Local Connectivity and Role-Based Similarity,” Proc. AAAI Conf. Artif. Intell., vol. 38, no. 11, pp. 12456–12465, 2024, doi: 10.1609/aaai.v38i11.29138.
  19. Q. Wu, W. Zhao, Z. Li, D. Wipf, and J. Yan, “NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification,” Adv. Neural Inf. Process. Syst., vol. 35, no. November, 2022.
  20. M. Li, S. Zhou, Y. Chen, C. Huang, and Y. Jiang, “EduCross: Dual adversarial bipartite hypergraph learning for cross-modal retrieval in multimodal educational slides,” Inf. Fusion, vol. 109, p. 102428, 2024, doi: https://doi.org/10.1016/j.inffus.2024.102428.
  21. L. Bai et al., HAQJSK: Hierarchical-Aligned Quantum Jensen-Shannon Kernels for Graph Classification (Extended Abstract). 2025. doi: 10.1109/ICDE65448.2025.00398.
  22. Z. Liu, T. K. Nguyen, and Y. Fang, “On Generalized Degree Fairness in Graph Neural Networks,” Proc. 37th AAAI Conf. Artif. Intell. AAAI 2023, vol. 37, pp. 4525–4533, 2023, doi: 10.1609/aaai.v37i4.25574.
  23. S. Kojaku, J. Yoon, I. Constantino, and Y. Y. Ahn, “Residual2Vec: Debiasing graph embedding with random graphs,” Adv. Neural Inf. Process. Syst., vol. 29, no. NeurIPS 2021, pp. 24150–24163, 2021.
  24. Z. Liu, T. K. Nguyen, and Y. Fang, Tail-GNN: Tail-Node Graph Neural Networks, vol. 1, no. 1. Association for Computing Machinery, 2021. doi: 10.1145/3447548.3467276.
  25. T. Zhao, Y. Liu, L. Neves, O. Woodford, M. Jiang, and N. Shah, “Data Augmentation for Graph Neural Networks,” 35th AAAI Conf. Artif. Intell. AAAI 2021, vol. 12B, pp. 11015–11023, 2021, doi: 10.1609/aaai.v35i12.17315.
  26. W. Jin, Y. Ma, X. Liu, X. Tang, S. Wang, and J. Tang, “Graph Structure Learning for Robust Graph Neural Networks,” Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., pp. 66–74, 2020, doi: 10.1145/3394486.3403049.
  27. J. Kang, Y. Zhu, Y. Xia, J. Luo, and H. Tong, “RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network,” WWW 2022 - Proc. ACM Web Conf. 2022, pp. 1214–1225, 2022, doi: 10.1145/3485447.3512169.