The strategy abstraction covers device selection, model aggregation, and evaluation, and can be customised. We currently support two federated learning (FL) strategies tailored to different types of machine learning models: one for PyTorch and scikit-learn models, and another for XGBoost models.

Both are extensions of Flower's built-in flwr.server.strategy.FedAvg strategy.
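To make the aggregation step concrete, the sketch below shows the core computation that the FedAvg family of strategies performs: averaging client parameter updates weighted by each client's number of training examples. This is illustrative plain Python, not OctaiPipe or Flower code.

```python
# Illustrative sketch of FedAvg-style weighted averaging; not the actual
# OctaiPipe/Flower implementation.
from typing import List, Tuple


def fed_avg(results: List[Tuple[List[float], int]]) -> List[float]:
    """Aggregate client parameter vectors, weighted by example counts."""
    total_examples = sum(num_examples for _, num_examples in results)
    num_params = len(results[0][0])
    aggregated = [0.0] * num_params
    for params, num_examples in results:
        weight = num_examples / total_examples
        for i, p in enumerate(params):
            aggregated[i] += weight * p
    return aggregated


# Two clients: one trained on 100 examples, one on 300.
print(fed_avg([([1.0, 2.0], 100), ([3.0, 4.0], 300)]))  # [2.5, 3.5]
```

The client with more data contributes proportionally more to the aggregated model, which is why the strategy parameters below control how many clients must participate in each round.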

Customise Strategy Parameters

Here’s an example of how you would customise your strategy config parameters:

  1. Instantiate the OctaiFL class.

  2. Check the default strategy config using strategy.default.

  3. Use set_config to update the config parameters.

  4. Run FL with the run method.

from octaipipe.federated_learning.run_fl import OctaiFL

FlYml = 'path to definition file'
OctaiFl = OctaiFL(FlYml)

# Inspect the default strategy configuration
print(OctaiFl.strategy.default)

strategy = {
    'min_fit_clients': 5,
    'min_evaluate_clients': 5,
    'min_available_clients': 5,
    'num_rounds': 3,
}

OctaiFl.strategy.set_config(strategy)
OctaiFl.run()

The strategy interface can also load the configuration of a previously completed FL model training run, which lets you replicate the conditions of any previous FL experiment. The following code loads a previous experiment's configuration:

OctaiFl.strategy.load_experiment(experiment_id="previous_experiment_id", inplace=True)

Other aggregation algorithms

You can also select other aggregation algorithms, either by specifying a strategy in the FL YAML config or in Python code. Set the strategy_name field to one of the following:

  • fed_avg - Federated Averaging (the default)

  • fed_median - Federated Median

  • fed_trimmed_avg - Federated Trimmed Average

  • krum - Krum strategy

  • fed_prox - Federated Proximal

For more detailed information on these strategies, please refer to the section on Changing the FL aggregation strategy.