Strategy#

The strategy abstraction governs device selection, model aggregation, and evaluation, and can be customised. We currently support two federated learning (FL) strategies, tailored to different types of machine learning model: one for PyTorch and scikit-learn models, and another for XGBoost models.

Both of these are extensions of Flower's built-in flwr.server.strategy.FedAvg strategy.
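To illustrate how a Flower strategy is extended, the sketch below subclasses FedAvg and overrides one of its aggregation hooks (method names follow Flower 1.x). This is a generic, hypothetical example, not OctaiPipe's actual implementation:

from flwr.server.strategy import FedAvg

class LoggingFedAvg(FedAvg):
    """Hypothetical FedAvg extension that logs aggregation progress."""

    def aggregate_fit(self, server_round, results, failures):
        # Report how many client updates arrived, then defer to FedAvg's
        # built-in weighted-average aggregation
        print(f'Round {server_round}: aggregating {len(results)} client updates')
        return super().aggregate_fit(server_round, results, failures)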

Customise Strategy Parameters#

Here’s an example of how you would customise your strategy config parameters:

  1. Instantiate the OctaiFL class

  2. Check the default strategy config using strategy.default.

  3. Use set_config to update the config parameters.

  4. Run FL using OctaiFL.run().

from octaipipe.federated_learning.run_fl import OctaiFL

fl_yml = 'path to definition file'
octai_fl = OctaiFL(fl_yml)

# Inspect the default strategy configuration
print(octai_fl.strategy.default)

# Config parameters to override
strategy = {
    'min_fit_clients': 5,
    'min_evaluate_clients': 5,
    'min_available_clients': 5,
    'num_rounds': 3,
}

print(octai_fl.strategy.get_config())
octai_fl.strategy.set_config(strategy)
print(octai_fl.strategy.get_config())

octai_fl.run()
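Because both strategies extend Flower's FedAvg, the first three keys map to the corresponding FedAvg parameters: min_fit_clients and min_evaluate_clients set the minimum number of clients sampled for training and evaluation respectively, while min_available_clients sets how many clients must be connected before a round can start. num_rounds sets the total number of federated rounds.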

The strategy interface also provides an option to load the configuration of a previously completed FL training run, which lets you replicate the conditions of any previous FL experiment. The following code loads the configuration of a previous experiment:

octai_fl.strategy.load_experiment(experiment_id="previous_experiment_id", inplace=True)
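Assuming inplace=True applies the loaded configuration directly to the current strategy rather than returning it, a subsequent call to octai_fl.run() re-runs FL under the replicated experiment conditions.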