Conference paper, Year: 2024

Link Inference Attacks in Vertical Federated Graph Learning

Oualid Zari
Ayşe Ünsal
Melek Önen

Abstract

Vertical Federated Graph Learning (VFGL) is a privacy-preserving technology that enables entities to collaboratively train Machine Learning (ML) models without exchanging their raw data. In VFGL, some of the entities hold a graph dataset capturing sensitive user relations, as in the case of social networks. The collaboration aims to leverage the diverse features that each entity holds about shared users to enhance predictive models or recommendation systems, while safeguarding data privacy in the process. Despite these advantages, recent studies have revealed a critical vulnerability: intermediate data representations may inadvertently expose link information in the graph. This work proposes a novel Link Inference Attack (LIA) that exploits gradients as a new source of link information leakage. Assuming a semi-honest adversary, we demonstrate through extensive experiments on seven real-world datasets that our LIA outperforms state-of-the-art attacks, achieving over 10% higher Area Under the Curve (AUC) in some instances, thereby highlighting a significant risk of link information leakage through gradients. The attack's effectiveness primarily stems from label information embedded in the gradients, as evidenced by a comparison with a label-only LIA. We analytically derive the accuracy of our label-based LIA from graph characteristics, which allows the vulnerability of a target graph to be assessed. To address these vulnerabilities, we evaluate two types of defenses: edge perturbation based on differential privacy and a novel label perturbation approach. We show that the proposed label perturbation defense is more effective against all attack types across all datasets examined, offering a more favorable privacy-utility trade-off. Our analysis explains why LIAs are effective, identifies potential defenses, and highlights the need for further research to improve the security of VFGL systems against link information leakage.
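The abstract only describes the gradient leakage at a high level. As a rough illustration of the underlying idea, the sketch below scores candidate links by the similarity of per-node gradients, on the homophily intuition that linked nodes tend to share labels and that nodes sharing a label receive similar gradient signals. This is a minimal, hedged sketch written for this summary, not the attack from the paper: the function names (`score_links`, `cosine`), the assumption that the adversary can associate a gradient vector with each node, and the use of cosine similarity as the scoring rule are all illustrative assumptions.

```python
# Minimal, illustrative sketch (assumption-laden, not the paper's attack):
# the adversary is assumed to observe a per-node gradient vector (e.g. the
# gradient of the training loss w.r.t. that node's intermediate embedding)
# and scores a candidate link by the similarity of the two nodes' gradients.
import numpy as np


def cosine(u, v, eps=1e-12):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))


def score_links(node_grads, candidate_pairs):
    """Score candidate node pairs; a higher score means 'more likely linked'.

    node_grads: dict mapping node id -> 1-D numpy array (per-node gradient).
    candidate_pairs: iterable of (u, v) node-id pairs to evaluate.
    Homophily assumption: linked nodes tend to share labels, and nodes
    sharing a label tend to receive similar gradient signals.
    """
    return {(u, v): cosine(node_grads[u], node_grads[v])
            for u, v in candidate_pairs}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: nodes 0 and 1 carry the same class signal, node 2 differs.
    sig_a, sig_b = rng.normal(size=16), rng.normal(size=16)
    grads = {0: sig_a + 0.1 * rng.normal(size=16),
             1: sig_a + 0.1 * rng.normal(size=16),
             2: sig_b + 0.1 * rng.normal(size=16)}
    print(score_links(grads, [(0, 1), (0, 2)]))  # (0, 1) should score higher
```

A defense in the spirit of the label perturbation approach mentioned in the abstract would randomize the training labels (e.g., via randomized response) before they influence the gradients, weakening exactly the label signal this kind of scoring relies on; the specific mechanism proposed in the paper is not reproduced here.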


Main file
LIA_VFGL_CR.pdf (625.7 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04811920, version 1 (29-11-2024)

Identifiers

  • HAL Id: hal-04811920, version 1

Cite

Oualid Zari, Chuan Xu, Javier Parra-Arnau, Ayşe Ünsal, Melek Önen. Link Inference Attacks in Vertical Federated Graph Learning. ACSAC 2024 - 40th Annual Computer Security Applications Conference, Dec 2024, Honolulu, United States. ⟨hal-04811920⟩