Adversarial Fortification for XGBoost#

Note

Before proceeding, note that OctaiPipe does not currently support running FL XGBoost on arm32 devices.

Introduction#

Alongside OctaiPipe’s implementation of Federated XGBoost comes a group of functionalities that protect the model from adversarial attacks and ensure good model performance even in the face of malicious actors.

Little research has been done on adversarial attacks against FL XGBoost, making this a novel approach to ensuring security and model stability.

Adversarial attacks#

To begin with, let’s define what an adversarial attack is before giving some examples of what these might look like.

An adversarial attack is a situation in which one or more of the clients in a federated training round worsens the performance of the model, whether intentionally or by accident. This could take many forms.

Changing the training config#

One way of disrupting model training is to change the configuration file on one of the devices. To protect against this, OctaiPipe compares each client's config file to the one defined by the user at every training round.
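To illustrate the idea, here is a minimal sketch of this kind of comparison. The function config_matches and the config keys are hypothetical names for illustration only, not part of the OctaiPipe API:

def config_matches(reference_config: dict, client_config: dict) -> bool:
    """Return True if a client's training config is identical to the reference."""
    return client_config == reference_config

# Hypothetical configs: a client that tampered with max_depth fails the check
reference_config = {'num_rounds': 10, 'max_depth': 6, 'eta': 0.3}
tampered_config = {'num_rounds': 10, 'max_depth': 12, 'eta': 0.3}

assert config_matches(reference_config, reference_config)
assert not config_matches(reference_config, tampered_config)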

Changing the learning rate (eta)#

A client could also change the eta (learning rate) of its local model to inflate the updates it contributes. OctaiPipe protects against this by comparing the learning rate of each incoming model to the learning rate set by the user in the model parameters.
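Conceptually, this check reduces to comparing a float. The sketch below assumes the learning rate of each incoming local model can be read from its serialized parameters; eta_matches is a hypothetical helper, not an OctaiPipe function:

def eta_matches(expected_eta: float, model_eta: float, tol: float = 1e-9) -> bool:
    """Flag models whose learning rate deviates from the user-set eta."""
    return abs(model_eta - expected_eta) <= tol

expected_eta = 0.3
assert eta_matches(expected_eta, 0.3)      # honest client passes
assert not eta_matches(expected_eta, 1.5)  # inflated eta is rejected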

Abnormally large gain#

This is a group of issues that all result in the same problem: the gain of one local model differs hugely from that of the other clients. This could happen, for example, if a malicious client provides the FL server with a local model that has not been trained, adding noise to the global model and reducing performance.

To protect against this, local models with gain values that exceed a maximum threshold are not included in the global model.

The maximum threshold is computed using Tukey’s method: \(Q_{75} + IQR \times k\), where \(Q_{75}\) is the 75th percentile of the clients’ gain values and \(IQR\) is the interquartile range \(Q_{75} - Q_{25}\).

The factor k is set by the user and must be a positive number. It is recommended to keep it between 0.5 and 5; higher values allow the gain of an individual client to deviate further before it is excluded from training. See this page on Tukey outliers for more information on how to set the k factor.

It is worth mentioning that a gain value outside the threshold does not necessarily indicate a malicious client; it could simply be a valid outlier. It is up to the user to set the k factor according to their tolerance for outliers in local model gain.
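As a worked example, the sketch below computes the Tukey fence for a set of per-client gain values. The function gain_threshold and the sample values are illustrative, not OctaiPipe internals:

import numpy as np

def gain_threshold(gains: np.ndarray, k: float) -> float:
    """Upper Tukey fence: Q75 + IQR * k."""
    q25, q75 = np.percentile(gains, [25, 75])
    return q75 + (q75 - q25) * k

gains = np.array([0.9, 1.0, 1.1, 1.2, 9.5])  # last client looks anomalous
threshold = gain_threshold(gains, k=0.5)     # Q75=1.2, IQR=0.2 -> fence at 1.3
included = gains[gains <= threshold]         # kept in the global model
excluded = gains[gains > threshold]          # left out of aggregation

With k=0.5 the fence sits at 1.3, so only the client with gain 9.5 is excluded; raising k widens the fence and tolerates larger deviations before a client is dropped.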

Configuration in OctaiPipe#

So, how do we configure adversarial fortification for XGBoost in OctaiPipe?

It is quite simple. If nothing is configured, adversarial fortification is still applied to protect against the most obvious attacks. The checks on the FL configuration file and the model learning rate run by default. Protection against abnormally large gain values is also enabled by default, with a k factor of 5, the upper bound of the recommended range, so that only the most extreme outliers are caught.

However, this can be configured using the FL strategy in the same way other strategies are configured (see Strategy).

See the examples below:

Setting the gain factor to 0.5#

from octaipipe.federated_learning.run_fl import OctaiFL

config_path = 'path to definition file'
octaifl = OctaiFL(config_path)

strategy = {
    'adv_fort': {
        'gain_factor': 0.5
    }
}

print(octaifl.strategy.get_config())

octaifl.strategy.set_config(strategy)

print(octaifl.strategy.get_config())

octaifl.run()

Removing all adversarial fortification#

from octaipipe.federated_learning.run_fl import OctaiFL

config_path = 'path to definition file'
octaifl = OctaiFL(config_path)

strategy = {
    'adv_fort': {
        'gain_factor': False,
        'eta': False,
        'config_check': False,
        'image_check': False
    }
}

print(octaifl.strategy.get_config())

octaifl.strategy.set_config(strategy)

print(octaifl.strategy.get_config())

octaifl.run()