Aggregation Strategies#

The strategy abstraction covers device selection, model aggregation, and evaluation, and can be customised. We currently support federated learning (FL) strategies for PyTorch, Scikit-learn and XGBoost.

Customise Strategy Parameters#

Here’s an example of how you would customise your strategy config parameters:

  1. Instantiate the OctaiFL class.

  2. Check the default strategy config using strategy.default.

  3. Use set_config to update the config parameters.

  4. Run FL using OctaiFL.run().

from octaipipe.federated_learning.run_fl import OctaiFL

# Instantiate OctaiFL with the path to the FL definition file
fl_yml = 'path to definition file'
octai_fl = OctaiFL(fl_yml)

# Inspect the default strategy configuration
print(octai_fl.strategy.default)

# Parameters to update in the strategy config
strategy = {
    'min_fit_clients': 5,
    'min_evaluate_clients': 5,
    'min_available_clients': 5,
    'num_rounds': 3,
    'round_timeout': 600,
}

# Print the config before and after the update to verify the change
print(octai_fl.strategy.get_config())
octai_fl.strategy.set_config(strategy)
print(octai_fl.strategy.get_config())

octai_fl.run()
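In the sketch above, set_config is assumed to merge the supplied keys into the existing strategy configuration rather than replace it wholesale; printing get_config() before and after the call lets you verify exactly which parameters changed.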

The strategy interface also provides options to load the configuration of a previously completed FL model training run. This makes it possible to replicate the conditions of any previous FL experiment. The following code loads the configuration of an example previous experiment.

octai_fl.strategy.load_experiment(experiment_id="previous_experiment_id", inplace=True)
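Here inplace=True applies the loaded configuration directly to the current strategy object; as an assumption, passing inplace=False would instead return the configuration so you can inspect it before applying it with set_config.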

Other Aggregation Algorithms#

You can also select a different aggregation algorithm, either by specifying a strategy in the FL YAML config or in Python code (a minimal Python sketch follows the list below). For NN and Scikit-learn, set the strategy_name field to one of the following:

  • fed_avg - Federated Averaging (the default)

  • fed_median - Federated Median

  • fed_trimmed_avg - Federated Trimmed Average

  • krum - Krum strategy

  • fed_prox - Federated Proximal
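As a minimal sketch, assuming strategy_name is accepted by set_config like any other strategy config key (the YAML route sets the same field in the definition file), the selection can be made in Python:

# Hypothetical sketch: switch the aggregation algorithm to Federated Median.
# Assumes strategy_name is a regular key in the strategy config.
octai_fl.strategy.set_config({'strategy_name': 'fed_median'})
print(octai_fl.strategy.get_config())
octai_fl.run()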

For more detailed information on these strategies, please refer to the section on Changing the FL aggregation strategy.

The XGBoost and k-FED algorithms use custom strategies, described in Federated XGBoost and k-FED Clustering.