Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses
Abstract
background
- backdoor attacks : inject a backdoor trigger into the training data so that the trained model outputs whatever the attacker desires
- no defense exists for FedGL against backdoor attacks
contribution
Attack Generator : an effective, stealthy, and persistent backdoor attack on FedGL, plus the trigger generator behind it
Ineffectiveness of Empirical Defenses : empirical defenses can hardly detect the generated triggers
Certified Defense Method : graph division plus a majority-vote-based classifier
Robustness and Tightness guarantees for the classifier
Tests on datasets : about 90% accuracy and a complete defense against the self-built generator
Introduction
background
- ML on graphs is developing rapidly
- data privacy is being taken increasingly seriously
- FedGL is used to address data isolation and data privacy
- non-graph FL is known to be vulnerable to backdoor attacks, but the graph setting is underexplored
Backdoor Attack for FedGL
challenges
- graphs come in various sizes
- important pixels in an image are spatially close; important nodes in a graph need not be
- not only nodes but also edges must be considered
- randomly generated subgraph triggers perform poorly
Optimize
Weakness of current methods : their random nature $\rightarrow$ should take individual-graph and per-client information into consideration
Opt-GDBA : adaptively optimizes the size and shape of the subgraph trigger using graph- and client-specific information
Modules of Opt-GDBA :
node importance learning : input node and edge features, output per-node importance scores
trigger location learning : pick the trigger location from the nodes' importance scores, via two schemes, Definable-Trigger and Customized-Trigger
trigger shape learning : given the location, learn the trigger with node & edge attention and a local-trigger differentiation mechanism
Defense for Backdoor Attacks
Empirical defenses : perform poorly against Opt-GDBA, and are routinely broken by adaptive attacks
targets
- predict the correct label for a graph even with a bounded-size trigger injected
- predict a non-target label for any backdoored graph
Challenges
- testing graphs vary in size
- must not rely on a specific model
- the trigger can perturb both edges and nodes
- needs a deterministic guarantee
- certified defenses for non-graph data assume fixed-size inputs, which does not transfer to graphs
- existing certified defenses for graph data are insufficient
Majority-voting based certified defense
critical steps:
- divide the graph into mutually non-overlapping subgraphs
- classify each subgraph and take a majority vote; bounding how many subgraphs a trigger can touch turns the vote-count gap into a guarantee
- prove the robustness guarantee
Evaluation
Opt-GDBA
Dataset : 6 datasets
Result :
- 30%-46% higher backdoor accuracy, with fewer trigger nodes and edges
- Customized-Trigger is stealthier than Definable-Trigger
- backdoored graphs are persistent and hard to detect
Defense
Result:
- in some cases, provably no successful attack for triggers of up to 20 nodes and edges in total
- the certified backdoor accuracy on all datasets is 0
Background and Problem Definition
FedGL
GL definition
$\nu$ : node set
$\varepsilon$ : edge set
$d$ : the number of node features
$X \in R^{|\nu|\times d}$ : node feature matrix
$A \in \{0,1\}^{|\nu|\times|\nu|}$ : adjacency matrix
$G$ : $(\nu,\varepsilon,A)$
$\gamma$ : label set
task : $f : G \rightarrow \gamma$
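A toy instantiation of these objects (numpy-based, names mine) to make the notation concrete:

```python
import numpy as np

# Toy graph: |nu| = 4 nodes, d = 3 node features.
V = [0, 1, 2, 3]                              # node set nu
E = [(0, 1), (1, 2), (2, 3), (3, 0)]          # edge set epsilon
d = 3
X = np.random.rand(len(V), d)                 # node feature matrix, |nu| x d

A = np.zeros((len(V), len(V)), dtype=int)     # adjacency matrix in {0,1}^{|nu| x |nu|}
for u, v in E:
    A[u, v] = A[v, u] = 1                     # undirected, so symmetric

G = (V, E, A)                                 # a graph; f maps G to a label in gamma
```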
FedGL definition
$C$ : client set
$G^i$ : the training graph dataset of client $i$
steps :
- in the $t$-th round, the server randomly chooses a subset of clients $C_t$ and broadcasts the global model $\theta_t$
- each client $i \in C_t$ updates its local model $\theta_t^i$ using its local data $G^i$ and $\theta_t$, then submits the updated $\theta_t^i$ to the server
- the server aggregates the models $\{\theta_t^i : i \in C_t\}$ into the next global model $\theta_{t+1}$
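A minimal sketch of one FedGL round, assuming FedAvg-style aggregation (the notes only say "aggregate") and a hypothetical `client.train_local` helper:

```python
import copy
import random
import torch

def fedgl_round(global_model, clients, frac=0.5, local_epochs=1):
    """One round t: sample C_t, run local updates, average into theta_{t+1}."""
    C_t = random.sample(clients, max(1, int(frac * len(clients))))
    local_states = []
    for client in C_t:
        local = copy.deepcopy(global_model)             # broadcast theta_t
        client.train_local(local, epochs=local_epochs)  # theta_t^i from G^i (placeholder)
        local_states.append(local.state_dict())
    # FedAvg: elementwise mean of the submitted local models.
    avg = {k: torch.stack([s[k].float() for s in local_states]).mean(dim=0)
           for k in local_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```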
Backdoor Attack for FedGL
Method : inject a subgraph trigger into training graphs and relabel them with the target label
CBA : Centralized Backdoor Attack
DBA : Distributed Backdoor Attack
Rand : the randomness of shape and location of triggers
Rand-GCBA
scheme :
1. all attackers use the same trigger $\kappa$
2. randomly sample a subset of nodes from each training graph as the trigger location and replace them with the trigger $\kappa$
representation :
$G_j^i$ : $j$-th graph for client $i$
$R(G_j^i,\kappa)$ : the backdoored graph produced by the replacement in step 2
$y_B$ : backdoor’s target label
$G^i_B = \{(R(G_j^i,\kappa),\,y_B)\}$
$G_C^i$ : the clean graphs, not modified by the trigger $\kappa$
$\theta$ : the global model
$L$ : loss function
$\theta_B^i = \arg\min_{\theta_B^i} L(G^i_B \cup G_C^i;\theta)$
Rand-GDBA
scheme :
1. each attacker $i$ uses its own trigger $\kappa^i$
2. randomly sample a subset of nodes from each training graph as the trigger location and replace them with the local trigger $\kappa^i$
representation :
$G^i_B = \{(R(G_j^i,\kappa^i),\,y_B)\}$
$\theta_B^i = \arg\min_{\theta_B^i} L(G^i_B \cup G_C^i;\theta)$
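A sketch of the replacement operation $R(G,\kappa)$ as I read it (the exact replacement semantics are one of my open questions at the end): sample a random node subset, overwrite its induced subgraph with the trigger adjacency and its features with the trigger features.

```python
import numpy as np

def inject_trigger(A, X, trigger_A, trigger_X, rng=None):
    """R(G, kappa): randomly place the trigger, overwriting edges and features."""
    if rng is None:
        rng = np.random.default_rng()
    n_tri = trigger_A.shape[0]
    A, X = A.copy(), X.copy()
    loc = rng.choice(A.shape[0], size=n_tri, replace=False)  # random trigger location
    A[np.ix_(loc, loc)] = trigger_A   # rewire the induced subgraph on loc
    X[loc] = trigger_X                # overwrite those nodes' features
    return A, X

# Rand-GCBA: all malicious clients share one (trigger_A, trigger_X) = kappa;
# Rand-GDBA: each client i uses its own kappa^i. Backdoored copies are relabeled
# with y_B and mixed with the clean graphs G_C^i for local training.
```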
Threat Model
Attackers
aim : an effective and stealthy DBA on FedGL
knowledge : training graphs and the global model
capability : each malicious client injects a small local trigger into its training graphs in every training iteration
objective : high backdoor accuracy and high main task accuracy
Defenders
aim : certified defense against the worst-case DBA
objective : high main task accuracy and low certified backdoor accuracy
OPT-GDBA
Node importance score learning
goal : measure node importance so that the most important nodes can be replaced by the trigger
input : nodes and edges features
Networks : EdgeView and NodeView
Networks representation : $e^i = \mathrm{EdgeView}(A^i)$ and $n^i=\mathrm{NodeView}(X^i)$, with all entries of $e^i,n^i$ in $(0,1)$
output : $s^i = e^i \odot n^i$
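A minimal sketch of this module, assuming EdgeView scores each node from its adjacency row and NodeView from its features, both squashed to $(0,1)$ by a sigmoid; the real architectures (and how they handle variable graph sizes) are not recorded in these notes:

```python
import torch
import torch.nn as nn

class ImportanceScorer(nn.Module):
    """s^i = EdgeView(A^i) * NodeView(X^i), one score per node in (0,1)."""
    def __init__(self, num_nodes, feat_dim, hidden=16):
        super().__init__()
        # Simplification: EdgeView is tied to a fixed num_nodes here,
        # although training graphs actually vary in size.
        self.edge_view = nn.Sequential(nn.Linear(num_nodes, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1), nn.Sigmoid())
        self.node_view = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, A, X):
        e = self.edge_view(A).squeeze(-1)   # e^i: per-node score from edges
        n = self.node_view(X).squeeze(-1)   # n^i: per-node score from features
        return e * n                        # elementwise product s^i = e^i ⊙ n^i
```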
Trigger location learning
Definable-Trigger
method : predefines trigger node size $n_{tri}$
steps :
sort the nodes in descending order of $s^i$
select the top $n_{tri}$ nodes of $G$
weakness : the same trigger size is used for graphs of different sizes
Customized-Trigger
input : nodes’ scores $s^i$ and the trigger-node threshold $n_{tri}^*$
output : the important node set
steps :
use the Gap statistic to estimate the number of clusters $\hat k$
run $\hat k$-means on the scores $s^i$
choose the cluster with the maximum average node score
if that cluster has more than $n_{tri}^*$ nodes, keep only the top $n_{tri}^*$ by score; otherwise keep the whole cluster; output it as the trigger node set $\nu$
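A sketch of Customized-Trigger with scikit-learn, using a simplified gap statistic (pick the $k$ with the largest gap; the paper may use the full Tibshirani rule), then selecting the highest-scoring cluster capped at $n_{tri}^*$:

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_k_gap(scores, k_max=5, n_ref=10, seed=0):
    """Simplified gap statistic on the 1-D score vector s^i."""
    rng = np.random.default_rng(seed)
    s = scores.reshape(-1, 1)
    gaps = []
    for k in range(1, min(k_max, len(s)) + 1):
        log_w = np.log(KMeans(n_clusters=k, n_init=10).fit(s).inertia_ + 1e-12)
        log_w_ref = [np.log(KMeans(n_clusters=k, n_init=10)
                            .fit(rng.uniform(s.min(), s.max(), s.shape))
                            .inertia_ + 1e-12)
                     for _ in range(n_ref)]
        gaps.append(np.mean(log_w_ref) - log_w)     # gap(k)
    return int(np.argmax(gaps)) + 1                 # k_hat = argmax gap

def customized_trigger_nodes(scores, n_tri_max):
    """Highest-mean-score cluster, capped at n_tri_max nodes (by score)."""
    k_hat = estimate_k_gap(scores)
    labels = KMeans(n_clusters=k_hat, n_init=10).fit_predict(scores.reshape(-1, 1))
    best = max(range(k_hat), key=lambda c: scores[labels == c].mean())
    members = np.flatnonzero(labels == best)
    members = members[np.argsort(-scores[members])]  # descending by score
    return members[:n_tri_max]                       # trigger node set
```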
Trigger shape learning
Networks : EdgeAtt, NodeAtt, EdgeEmb, NodeEmb
EdgeAtt
input : $A_B^i$,$\nu_{def}^i$
output : $E_{tri}^i \in R^{|\nu_{def}^i| \times |\nu_{def}^i|}$
NodeAtt
input : $X^i_B$,$\nu_{def}^i$
output : $N_{tri}^i \in R^{|\nu_{def}^i| \times d}$
EdgeEmb/NodeEmb
input : the client index $i$
output : $I_e \in R^{|\nu_{def}^i| \times |\nu_{def}^i|}$ or $I_n \in R^{|\nu_{def}^i| \times d}$
Multiply (elementwise)
input : $E_{tri}^i,N_{tri}^i,I_e,I_n$
output : $E_{tri}^i = E_{tri}^i \odot I_e$,$N_{tri}^i = N_{tri}^i \odot I_n$
binarize
$E_{tri}^i = \mathbb{1}(E_{tri}^i\ge0.5)$
trigger
$\kappa^i = (\nu_{def}^i,E_{tri}^i,N_{tri}^i)$
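A sketch of the shape-learning step as I understand it: attention outputs multiplied elementwise by per-client embeddings (the "local trigger differentiation"), then edge binarization; the layer shapes and the straight-through trick for differentiable thresholding are my assumptions:

```python
import torch
import torch.nn as nn

class TriggerShape(nn.Module):
    """E_tri = 1(EdgeAtt(A_B) ⊙ I_e >= 0.5),  N_tri = NodeAtt(X_B) ⊙ I_n."""
    def __init__(self, n_tri, feat_dim, num_clients):
        super().__init__()
        self.edge_att = nn.Sequential(nn.Linear(n_tri, n_tri), nn.Sigmoid())
        self.node_att = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        # EdgeEmb/NodeEmb: embeddings indexed by the client id i, so each
        # malicious client ends up with a differentiated local trigger.
        self.edge_emb = nn.Embedding(num_clients, n_tri * n_tri)
        self.node_emb = nn.Embedding(num_clients, n_tri * feat_dim)
        self.n_tri, self.d = n_tri, feat_dim

    def forward(self, A_sub, X_sub, client_id):   # client_id: LongTensor scalar
        E = self.edge_att(A_sub)                  # soft trigger edges
        N = self.node_att(X_sub)                  # soft trigger features
        I_e = torch.sigmoid(self.edge_emb(client_id)).view(self.n_tri, self.n_tri)
        I_n = torch.sigmoid(self.node_emb(client_id)).view(self.n_tri, self.d)
        E, N = E * I_e, N * I_n                   # client-specific modulation
        E = (E >= 0.5).float() + E - E.detach()   # binarize, straight-through grad
        return E, N                               # edge/feature parts of kappa^i
```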
Training with Optimized Backdoor
Local model training
malicious client
$$
\theta_B^i = \arg \min_{\theta_B^i} L(G_B^i\cup G_C^i;\theta)
$$
benign client
$$
\theta^j = \arg \min_{\theta^j} L(G^j;\theta)
$$
Optimizing the adaptive trigger generator
$\omega^i$ : the trigger-generator parameters of malicious client $i$
$$
\omega^i = \arg \min_{\omega^i} L(G_B^i;\theta_B^i)
$$
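Putting the two objectives together, a sketch of one malicious-client iteration; `model(A, X)`, the `backdoor` generator (wrapping the modules above and returning a backdoored `(A_B, X_B)`), and the data batches are all placeholders:

```python
import torch

def malicious_step(model, backdoor, opt_model, opt_gen,
                   clean_batch, poison_batch, y_B, loss_fn, cid):
    """One alternating update: theta_B^i first, then the generator omega^i."""
    # (1) theta_B^i = argmin L(G_B ∪ G_C; theta): train on backdoored + clean data.
    opt_model.zero_grad()
    loss = sum(loss_fn(model(*backdoor(A, X, cid)), y_B) for A, X in poison_batch)
    loss = loss + sum(loss_fn(model(A, X), y) for A, X, y in clean_batch)
    loss.backward()
    opt_model.step()
    # (2) omega^i = argmin L(G_B; theta_B^i): adapt the trigger to the new model.
    opt_gen.zero_grad()
    gen_loss = sum(loss_fn(model(*backdoor(A, X, cid)), y_B) for A, X in poison_batch)
    gen_loss.backward()
    opt_gen.step()
```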
Attack Result
Settings
dataset
6 real-world benchmark graph datasets for graph classification
method
compare Opt-GDBA with Rand-GCBA, Rand-GDBA
Fair Setting
ensure GDBA and GCBA inject the same total number of trigger edges
Parameters Setting
| Parameter | Value |
| --- | --- |
| Client number | 40 |
| Client number (MUTAG) | 20 |
| Dataset distribution | even across clients |
| Iteration number | 200 |
| Iteration number (RDT-M5K) | 400 |
| Training client rate | 50% |
| Attacking client rate | 50% |
| Rate of malicious clients $\rho$ | 20% |
| Trigger node size $n_{tri}$ | 4 |
| Trigger node threshold $n_{tri}^*$ | 5 |
Evaluation
- main task accuracy (MA)
- backdoor accuracy (BA)
- average trigger node size $n_{tri}$
- average trigger edge size $e_{tri}$
Result
observation
- main task accuracy is only marginally degraded under all attacks
- Rand-GDBA outperforms Rand-GCBA
- Opt-GDBA beats Rand-GDBA in both effectiveness and stealthiness
impact of $\rho$
BA increases slightly with a larger $\rho$, while MA stays stable
impact of $n_{tri}$
BA is positively correlated with $n_{tri}$
impact of learning scheme
Customized-Trigger locates more important regions
global trigger vs local trigger
the global trigger achieves higher BA than local triggers
Ablation study
- EdgeView and trigger-location learning account for a gain of about 15%
- trigger-shape learning contributes about 25%
- all modules together give about 36%
Certified Defense for FedGL
Graph division into Subgraphs
Node Division
$v \in V$ : a node in the node set
$str(\cdot)$ : stringify
$h[\cdot]$ : hash function
$T$ : the number of subgraphs
$V^t$ : the node set with group index $t$, i.e. $V^t = \{v\in V \mid h[str(v)]\ \mathrm{mod}\ T + 1 = t\}$
$X^t \in R^{|V|\times d}$ : node features in $t$-th group
Node Division Steps :
- for every $v \in V$, compute $str(v)$
- assign $v$ to group $t = h[str(v)]\ \mathrm{mod}\ T + 1$
- set $X_v^t=X_v$ if $v \in V^t$, else $X_v^t=0$
Edge Division
$(u,v) \in \varepsilon$ : an undirected edge in the edge set
make sure : $h[str(u) + str(v)] = h[str(v)+str(u)]$, i.e. the edge hash must be symmetric in the endpoints
Edge Division Step : $\varepsilon^t = \{(u,v)\in\varepsilon \mid h[str(u)+str(v)]\ \mathrm{mod}\ T + 1 = t\}$
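A sketch of the whole division with MD5 as $h$; I canonicalize the edge endpoints by sorting before concatenation so the hash is symmetric (my guess at one of the open questions below), and I use groups $0..T-1$ via `% T` rather than the notes' `mod T + 1` indexing:

```python
import hashlib
import numpy as np

def h(s: str) -> int:
    """Deterministic string hash (MD5; any stable hash function works)."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def divide_graph(A, X, T):
    """Split G into T subgraphs G^t by hashing node and edge strings."""
    n = A.shape[0]
    node_group = np.array([h(str(v)) % T for v in range(n)])   # node division
    subgraphs = []
    for t in range(T):
        Xt = np.where(node_group[:, None] == t, X, 0.0)        # zero features outside V^t
        At = np.zeros_like(A)
        for u, v in zip(*np.nonzero(np.triu(A))):              # each undirected edge once
            a, b = sorted((int(u), int(v)))                    # canonical order =>
            if h(str(a) + str(b)) % T == t:                    # symmetric edge hash
                At[u, v] = At[v, u] = 1
        subgraphs.append((At, Xt))
    return subgraphs
```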
Majority Vote-based Ensemble Classifier
$$
T_l = \sum_{t=1}^T \mathbb{1}(f_B(G^t)=l), \qquad
g_B(G) = \arg \max_{l\in \gamma} T_l
$$
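A sketch of the voting classifier on top of `divide_graph` above; `f_B` stands for the FedGL-trained base classifier (a placeholder callable returning a label):

```python
from collections import Counter

def majority_vote_classify(A, X, T, f_B):
    """g_B(G) = argmax_l T_l, where T_l counts subgraphs predicted as l."""
    votes = Counter(f_B(At, Xt) for At, Xt in divide_graph(A, X, T))
    label, _ = votes.most_common(1)[0]
    # Intuition for the certificate: a trigger touching m nodes/edges can change
    # at most m of the T subgraphs, so a large enough top-2 vote gap fixes g_B(G).
    return label, votes
```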
Certified Robustness Guarantees
…
Certified Defense Result
Settings
parameters setting
| Parameter | Value |
| --- | --- |
| $\rho$ | 20% |
| $n_{tri}$ | 4 |
| $n_{tri}^*$ | 5 |
| $T$ | 30 |
Evaluation Metrics
Certified MA at perturbation size $m$
Certified BA
Result
Open questions
- given the nodes to replace and the replacement subgraph, how exactly is the replacement operation performed?
- does this grouping method guarantee that a node and its incident edges land in the same subgraph?
- how is $h[str(u) + str(v)] = h[str(v)+str(u)]$ ensured? by constructing a special hash function?
- as $T$ keeps growing, doesn't the division destroy the graph's own structural features?