Adversarial Fortification#

Introduction#

When building machine learning models, there is a risk that malicious actors take part in training and deliberately degrade the model's performance.

There are many ways of doing this, and OctaiPipe has identified a set of mitigation techniques that deal with the most likely and impactful adversarial attacks.

This page details general implementations for adversarial fortification. To read about fortification specific to OctaiPipe’s FL XGBoost, see the page on Adversarial Fortification for XGBoost.

Adversarial attacks#

Below are two examples of adversarial attacks that OctaiPipe has identified.

Data poisoning#

This attack occurs when a malicious actor alters the training data to intentionally throw off the training of a model. For example, in label flipping, images for a computer vision model are mislabeled so that the local model learns to associate an image with the incorrect label. This can happen through intentional mislabelling, but also through human error.
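
As an illustration of the idea (not OctaiPipe code), the sketch below flips a fraction of labels in a NumPy label array, which is the kind of corruption that label flipping or systematic mislabelling introduces into a local dataset:

import numpy as np

def flip_labels(labels, flip_fraction, num_classes, seed=0):
    """Return a copy of `labels` with a random fraction reassigned to a different class."""
    rng = np.random.default_rng(seed)
    flipped = labels.copy()
    n_flip = int(flip_fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    # Shift each selected label by a non-zero offset so it lands on a different class
    flipped[idx] = (flipped[idx] + rng.integers(1, num_classes, size=n_flip)) % num_classes
    return flipped

# Example: poison 20% of a 10-class label vector
clean = np.random.randint(0, 10, size=1000)
poisoned = flip_labels(clean, flip_fraction=0.2, num_classes=10)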

Byzantine attack#

Also known as model poisoning, this attack occurs when one of the clients sends erroneous model updates to the server. The contributed model weights could be random or multiplied by some factor. This throws off the convergence of the global model, lowering its performance.
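
For illustration only, the sketch below shows the two kinds of poisoned update mentioned above applied to a client's weight vector before it is sent to the server; the function names are hypothetical and not part of OctaiPipe:

import numpy as np

def random_update(weights, seed=0):
    """Replace the honest update with random noise of the same shape."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=weights.shape)

def scaled_update(weights, factor=10.0):
    """Multiply the honest update by a large factor to skew aggregation."""
    return factor * weights

honest = np.array([0.1, -0.2, 0.05])
print(random_update(honest))   # arbitrary weights, unrelated to the local data
print(scaled_update(honest))   # inflated weights that dominate a plain average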

Mitigation techniques#

Below are the three techniques OctaiPipe uses to mitigate these adversarial attacks.

Checking the training config file#

To change what data is being used, or to prevent the model from training, malicious clients can change the local configuration file used for OctaiPipe FL.

To protect against this, OctaiPipe checks the local configuration file against the one defined by the developer to make sure they are the same.
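
Conceptually, this can be thought of as comparing a fingerprint of the local file against the developer-defined reference. The snippet below is a hypothetical illustration of that idea using SHA-256 hashes and made-up file names, not OctaiPipe's internal implementation:

import hashlib

def file_sha256(path):
    """Hash the file contents so that any modification changes the fingerprint."""
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical paths: refuse to train if the local config has been altered
if file_sha256('local_config.yml') != file_sha256('developer_config.yml'):
    raise RuntimeError('Local FL config does not match the developer-defined config')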

Checking the training docker image#

Local training can also be tampered with by changing the docker image that is used to run it.

To protect against this, OctaiPipe checks the local docker image against the one in the image registry, to make sure that they are the same.
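
Again as a conceptual illustration rather than OctaiPipe's internal code, one way to verify an image is to compare its content-addressed ID with the expected one, for example via the docker CLI; the image name and expected ID below are placeholders:

import subprocess

def local_image_id(image):
    """Return the content-addressed image ID reported by the local docker daemon."""
    out = subprocess.run(['docker', 'inspect', '--format', '{{.Id}}', image],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

# 'expected_id' would come from the image registry; both values here are placeholders
expected_id = 'sha256:<expected image id>'
if local_image_id('octaipipe-training:latest') != expected_id:
    raise RuntimeError('Local training image does not match the registry image')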

Configuration#

The training config and docker image checks are controlled by the FL strategy and are enabled by default. To disable them, you can run the following code when running FL:

from octaipipe.federated_learning.run_fl import OctaiFL

config_path = 'path to definition file'
octaifl = OctaiFL(config_path)

strategy = {
    'adv_fort': {
      'config_check': False,
      'image_check': False
    }
}

octaifl.strategy.set_config(strategy)

octaifl.run()

Changing the FL aggregation strategy#

When the FL server aggregates model parameters from the local models to create the global model, it uses a specific aggregation strategy. The default strategy is Federated Averaging (McMahan et al., 2016), where the global model weights are a weighted average of all local models. This works best in ideal scenarios with homogeneous data and no adversarial attacks.
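
To make the default aggregation concrete, the sketch below computes a federated average over a list of client weight vectors, weighting each by the number of local training examples. It is an illustrative restatement of the algorithm, not OctaiPipe's server code:

import numpy as np

def fed_avg(client_weights, num_examples):
    """Weighted average of client weight vectors, weighted by local dataset size."""
    total = sum(num_examples)
    return sum((n / total) * w for w, n in zip(client_weights, num_examples))

# Three clients with equal dataset sizes; the third sends an outlying update
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([100.0, 100.0])]
global_weights = fed_avg(clients, num_examples=[50, 50, 50])
print(global_weights)  # the outlier pulls the average far away from the honest clients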

However, there are strategies that are more robust to adversarial attacks because they algorithmically exclude outlying model updates. The ones included in OctaiPipe are:

FedMedian#

Paper by Yin et al., 2018

Takes the coordinate-wise median of the local model weights, which prevents malicious outlier weight values from influencing the global model.
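
A minimal sketch of the idea (not the library implementation): the coordinate-wise median ignores the same outlier that would distort a plain average:

import numpy as np

# Same three client updates as above; the third row is a poisoned update
clients = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [100.0, 100.0]])

global_weights = np.median(clients, axis=0)  # coordinate-wise median across clients
print(global_weights)  # [3. 4.] -- the poisoned row has no influence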

FedTrimmedAvg#

Paper by Yin et al., 2021

For each model weight, only the values contributed by FL clients within the quantile range [beta, 1-beta] are used to compute the weighted average. Averaging within a quantile range is more effective than a plain average at eliminating malicious outlier weight values.
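
A minimal sketch of the trimmed mean, again for illustration: for each coordinate, the lowest and highest beta fraction of client values are discarded before averaging. SciPy's trim_mean implements the same computation:

import numpy as np
from scipy.stats import trim_mean

clients = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [5.0, 6.0],
                    [7.0, 8.0],
                    [1000.0, 1000.0]])  # the last row is a poisoned update

beta = 0.2  # trim the lowest and highest 20% of values per coordinate
global_weights = trim_mean(clients, proportiontocut=beta, axis=0)
print(global_weights)  # [5. 6.] -- close to the honest clients, outlier discarded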

Krum#

Paper by Blanchard et al., 2017

Krum selects a subset of m local model updates for aggregation by comparing the similarity between local updates and keeping those that are most similar to each other. This rigorous similarity comparison gives a better chance of eliminating outliers.
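
The sketch below illustrates the core of Krum under the assumption of at most f Byzantine clients: each update is scored by the sum of squared distances to its n - f - 2 nearest neighbours, and the update with the lowest score is kept. This is a simplified illustration, not the exact implementation used for aggregation:

import numpy as np

def krum_select(updates, num_byzantine):
    """Return the single client update with the lowest Krum score."""
    n = len(updates)
    # Pairwise squared Euclidean distances between client updates
    dists = np.sum((updates[:, None, :] - updates[None, :, :]) ** 2, axis=-1)
    num_neighbours = n - num_byzantine - 2
    scores = []
    for i in range(n):
        neighbour_dists = np.sort(np.delete(dists[i], i))  # exclude distance to itself
        scores.append(neighbour_dists[:num_neighbours].sum())
    return updates[int(np.argmin(scores))]

clients = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [50.0, 50.0]])
print(krum_select(clients, num_byzantine=1))  # picks one of the three similar updates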

FedProx#

Paper by Li et al., 2018

FedProx introduces a proximal term to the local model’s loss function, which helps to tackle heterogeneity in federated networks. This term penalizes the distance between the local model parameters and the global model parameters, thus promoting more stable convergence, especially in heterogeneous settings.

See the suggested implementation in the Flower documentation: https://flower.ai/docs/framework/ref-api/flwr.server.strategy.FedProx.html#flwr.server.strategy.FedProx
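
As a minimal sketch of the proximal term itself, assuming a PyTorch-style local training loop (this is not OctaiPipe's internal code), the local loss is augmented with (proximal_mu / 2) * ||w - w_global||^2:

import torch

def fedprox_loss(base_loss, local_params, global_params, proximal_mu):
    """Add the FedProx proximal term to the local training loss."""
    proximal_term = sum(((w - w_g.detach()) ** 2).sum()
                        for w, w_g in zip(local_params, global_params))
    return base_loss + (proximal_mu / 2.0) * proximal_term

# Hypothetical usage inside a local training step:
# loss = fedprox_loss(criterion(model(x), y), model.parameters(),
#                     global_model.parameters(), proximal_mu=0.1)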

OctaiPipe’s implementation of FedProx modifies the loss function based on the proximal_mu value, which must be included in the strategy like so:

octaifl = OctaiFL(config_path)
strategy = {
    'strategy_name': 'fed_prox',
    'proximal_mu': 0.1  # Adjust this value as needed
}

octaifl.strategy.set_config(strategy)

proximal_mu is the weight of the proximal term used in the optimization. A value of 0.0 makes this strategy equivalent to FedAvg, and the higher the coefficient, the more regularization is applied (that is, the client parameters will need to stay closer to the server parameters during training).

General Configuration#

The FL aggregation strategies above can be set using the same approach as for configuring the config and docker image checks:

from octaipipe.federated_learning.run_fl import OctaiFL

config_path = 'path to definition file'
octaifl = OctaiFL(config_path)

strategy = {
    'strategy_name': 'fed_median'  # one of fed_avg, fed_median, fed_trimmed_avg, fed_prox, krum
}

octaifl.strategy.set_config(strategy)

octaifl.run()