Release Notes#


2.3.3#

Release Date: July 16, 2024

  • Documentation improvements

2.3.2#

Release Date: July 16, 2024

  • Fixed an issue where cloud resources were not torn down properly due to deprecated Kubernetes resources

2.3.1#

Release Date: July 15, 2024

  • Fixed an issue with setting up MLOps policies

2.3.0#

Release Date: July 12, 2024

The OctaiPipe 2.3 release introduces a number of new features that bring OctaiPipe to new types of devices and make FL more secure and robust to adversarial attacks. This release brings you the following:

New features#

  • OctaiPipe Inference on Android phones. You can now run model inference on Android phones using the new OctaiOxide Library

  • Adversarial Fortification. A number of configurable features adding fortification against Byzantine and data poisoning attacks for all FL models

  • Save best round. You can now save the model from the FL round that performed the best on the evaluation dataset

  • Federated EDA. An exploratory data analysis suite that allows you to get metadata and column statistics from a set of devices, without moving raw data

  • Unsupervised federated learning. An implementation of the k-FED algorithm both for FL training and inference

  • Running multiple OctaiPipe devices on one machine for FL experimentation and simulation

Additional updates#

  • Added an octaipipe user docker image to make containers more secure

  • Added more efficient paging, sorting and filtering to the front end

  • Added the ability to set resource reservations and limits for OctaiPipe containers running at the edge

  • Added additional messages when an FL round fails, to identify where the error is coming from

  • Created a more cost-efficient and configurable Azure marketplace installation of OctaiPipe

  • Fixed a bug where FL did not re-authenticate after running for a long time

  • General updates to documentation and Tutorials

  • Smaller bug fixes and security improvements
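
One of the updates above adds resource reservations and limits for OctaiPipe containers running at the edge. These map onto standard Docker Compose `deploy.resources` syntax; the sketch below is generic Compose, not OctaiPipe's own config schema, and the service name, image and values are illustrative:

```yaml
# Generic Docker Compose resource reservations and limits.
# The service name, image and values are illustrative, not OctaiPipe defaults.
services:
  octaipipe-edge:
    image: octaipipe/edge-client:latest
    deploy:
      resources:
        limits:          # hard ceiling for the container
          cpus: "1.0"
          memory: 512M
        reservations:    # guaranteed minimum on the host
          cpus: "0.25"
          memory: 128M
```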

Known issues#

The update that brings a more secure octaipipe user image works on all Linux-based machines, except for Windows machines running Linux in WSL.

2.2.10#

Release Date: April 30, 2024

  • Updated documentation on Azure marketplace VM sizing

2.2.9#

Release Date: April 29, 2024

  • Fixed an issue where device_groups could not be assigned on their own in the FL config

  • Fixed mismatching Python versions across test, Jupyter and Docker images; all are now Python 3.9.19

2.2.8#

Release Date: April 23, 2024

  • VM sizes in the Azure Marketplace install are now configurable

2.2.7#

Release Date: April 20, 2024

  • Added benchmarking capabilities to OctaiPipe

  • Each action gets its own deployment children record

  • Improved status updates for deployment children across policies

  • Documentation updates

2.2.6#

Release Date: April 17, 2024

  • Fixed the docker compose file from device registration using http instead of https

  • Fixed docker compose network syntax to match the latest version

  • Fixed missing tag for OctaiPipe Edge Client in docker hub

2.2.5#

Release Date: April 16, 2024

  • Device deployments are now tracked during FL experiments, giving more info on active and dropped devices in debug logs.

  • FL minimum device threshold is enforced

  • Fixed a bug in logging OctaiSpec Pydantic blueprints

  • Docs updates

2.2.2#

Release Date: March 27, 2024

Bug fixes#

  • Fixed Portal path for tutorial

  • General documentation updates

2.2.1#

Release Date: March 25, 2024

Marketplace documentation#

Added how-to page for installing OctaiPipe from Azure Marketplace

2.2.0#

Release Date: March 21, 2024

Version 2.2 of OctaiPipe continues our journey enabling popular model types to be trained with a federated approach and reducing the footprint so they can run on increasingly constrained edge devices. This release contains the following major changes:

Federated XGBoost#

XGBoost (Extreme Gradient Boosting) is a powerful and popular machine learning algorithm combining multiple weak learners to create a strong predictive model. It often out-performs other algorithms in both speed and accuracy while using fewer resources and producing more explainable results than deep learning approaches.

OctaiPipe V2.2 introduces a novel approach to training XGBoost models in a federated way that minimises the number of interactions between edge devices and the Federated Learning server ensuring robustness to intermittent network connections.

A new tutorial explains how to define and run an XGBoost Federated Learning workload.

Initial support for Web Assembly#

Currently, OctaiPipe uses Docker containers to encapsulate data processing, model training and model inference workloads for execution on edge devices. This requires the edge devices to support Linux and Docker and results in edge workload containers of up to 1GB for common model types.

OctaiPipe V2.2 introduces the OctaiOxide inference image which uses Web Assembly (WASM) as a new encapsulation method for smaller and faster workloads. A new tutorial demonstrates this with SGD regressor and neural network models that are significantly smaller and faster than previously.

Improvements to Model monitoring and continuous learning#

This release builds on the MLOps capabilities added in V1.6 that enabled definition of monitoring policies for models deployed to edge devices. A new demonstration video will be released shortly showing how a policy can detect data drift and trigger a new federated learning workload before updating the model to reflect its new environment.

Support for self-service installation of OctaiPipe from the Azure marketplace#

Since V2.0, the OctaiPipe portal can be installed in customer Azure or AWS cloud environments. The Azure installation process has now been simplified so new customers can self-install OctaiPipe into their Azure subscription without the need for hands-on support. This allows organisations to rapidly evaluate how OctaiPipe can reduce costs and improve the efficiency of training, deploying and managing machine learning solutions on edge devices.

Continuous improvements to platform reliability, security and costs#

During every release cycle, OctaiPipe is updated to fix bugs, reduce complexity, increase security and reduce running costs. A comprehensive set of security and vulnerability tests is completed to ensure new capabilities do not degrade performance in this area. In V2.2, we have:

  • Removed the requirement to use a separate VPN connection to access the Jupyter notebooks, as all OctaiPipe access is secured with a single sign-on authority.

  • Moved the Portal to run in Kubernetes and optimised node sizes improving cloud agnosticism and further reducing running costs.

  • Implemented TLS certificates for FL communication rounds.

  • Recommended cost-reduction techniques in the documentation.

  • Future-proofed the Azure deployment by moving Kubernetes to use Managed Identity.

2.1.7#

Release Date: February 1, 2024

  • Fixed documentation not being properly added to Docker image in Portal

2.1.6#

Release Date: January 31, 2024

  • Fixed bug where updating a model failed if the Portal endpoint did not have working SSL cert

2.1.5#

Release Date: January 31, 2024

  • Fixed a bug where model load specs were overwritten if provided by the user in FL

2.1.4#

Release Date: January 29, 2024

  • Fixed a bug in the NN zoo client where the client did not allow the user to configure whether the Portal endpoint has a working SSL cert

2.1.3#

Release Date: January 25, 2024

  • Fixed an intermittent bug in the monitoring database when the service takes a long time to start

2.1.2#

Release Date: January 23, 2024

  • Model versioning is now also properly implemented for FL models

2.1.1#

Release Date: January 17, 2024

Fixed a bug where the FL workload setup connected directly to the device to check column names for the training data.

2.1.0#

Release Date: December 21, 2023

Introduction of the OctaiPipe Edge Client#

A lightweight client that runs on the edge device and queries the database for incoming commands, such as deployments or FL training instructions.

  • Improves scalability by running all commands on the device, meaning the resources of the development workspace do not slow down device deployments

  • Enhanced security by removing the need to store and use device credentials to connect to the device

  • Removed the need to ensure devices are directly accessible from the workspace through, for example, a VPN

API endpoint security#

The backend API endpoints have been locked down such that users, devices, and the FL training processes can only access data about themselves. For example, a device can only access data about that device, and not other devices.

Introduction of OctaiPipe core and lite#

OctaiPipe Core and Lite allow users to run scaled-down, lightweight versions of OctaiPipe on lower-powered devices or in settings where minimal memory and CPU usage are of interest.

For example, using OctaiPipe Lite, a user can train a federated Sklearn model and deploy it to a device using an image 3 times smaller than previously possible.

MQTT data loading and writing#

The ability to read and write data using MQTT has been added to OctaiPipe. It is also possible to use OctaiPipe to spin up an MQTT broker on your edge devices.
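
As a rough illustration of how an MQTT source could be declared, the hypothetical spec below follows the input_data_specs shape used elsewhere in these notes; the `mqtt` datastore_type value and the host/port/topic keys are assumptions, not documented OctaiPipe syntax:

```yaml
# Hypothetical MQTT input data spec. The datastore_type value and the
# host/port/topic keys are illustrative, not confirmed OctaiPipe options.
input_data_specs:
  devices:
    - device: default
      datastore_type: mqtt
      query_values:
        host: localhost   # broker running on the edge device
        port: 1883        # default unencrypted MQTT port
        topic: sensors/raw
```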

Bug fixes#

  • Fixed problem when trying to log into OctaiPipe as a user from a local machine

  • Fixed issue where documentation would not build if versions of OctaiPipe and the Portal mismatched

2.0.3#

Release Date: October 4, 2023

Minor updates and bug fixes.

2.0.0#

Release Date: September 20, 2023

Web UI#

In v2.0, we have added a web front end that makes it faster and easier to manage devices, experiments, models and deployments. A new notifications page allows you to monitor ongoing deployments and experiments in one place so you can easily see when anything needs your attention.

The combination of the familiar and productive Jupyter notebook development environment and the new web management UI, further simplifies and automates the use of Federated Learning on Edge devices for OctaiPipe users.

Independent Security review#

As part of release v2, we coordinated with an external penetration testing team to ensure our platform is secure. We updated all deployed components to the latest security patches and enforced TLS certificates on our REST API. Automated penetration tests are now part of our continuous build process.

Azure and AWS Availability#

OctaiPipe is now available in both Azure and AWS as a Platform as a Service (PaaS) solution. By delivering the federated learning server and database of devices, experiments, models, and deployments into your cloud infrastructure, it ensures you remain in control of your data and intellectual property. This further improves the data security and privacy of your system, driving data compliance.

Updated Examples and demo videos#

We have created videos showing how to use OctaiPipe for the following activities:

  • Data preprocessing

  • Database connection

  • Model inference

  • Experiment tracking

  • Federated Learning classification example

  • Federated Learning regression example

Known Issues#

The current version of Kubeflow has a UI bug resulting in notebooks displaying an error message after creation. Waiting for a few seconds and then refreshing the page should resolve the issue and display the notebook.

1.6.0#

Release Date: August 16, 2023

Introducing MLOps functionality and adding flexibility to your FL data sources.

MLOps#

See the tutorial: usage/Tutorials/02_MLOps-global-and-local-policies/MLOps-global-and-local-policies

Breaking Changes#

  • FL Config Data Specs:

Previously, datastore_type was the same for all devices:

input_data_specs:
  datastore_type: influxdb
  query_type: dataframe
  query_values:
    devices:
      - device: default
        query_template_path: ./configs/data/influx_query_def.txt
        start: "2022-11-10T00:00:00.000Z"
        stop: "2022-11-11T00:00:00.000Z"
      - device: FL-01
        query_template_path: ./configs/data/influx_query_1.txt
        start: "2022-11-10T00:00:00.000Z"
        stop: "2022-11-11T00:00:00.000Z"

Now, each device must have its own datastore_type:

input_data_specs:
  devices:
    - device: default
      datastore_type: influxdb
      query_type: dataframe
      query_template_path: ./configs/data/influx_query_def.txt
      query_values:
        start: "2022-11-10T00:00:00.000Z"
        stop: "2022-11-11T00:00:00.000Z"
        bucket: cmapss-bucket
        measurement: sensors-raw
        tags: {}
    - device: FL-01
      datastore_type: influxdb
      query_type: dataframe
      query_template_path: ./configs/data/influx_query_1.txt
      query_values:
        start: "2022-11-10T00:00:00.000Z"
        stop: "2022-11-11T00:00:00.000Z"
        bucket: cmapss-bucket
        measurement: sensors-raw
        tags: {}

This allows more flexibility in case your devices' data stores are heterogeneous.

Known Issues#

Entering a blank custom notebook image in Kubeflow and leaving it running can lead to a permanent 500 error for the user. This can be resolved by deleting the notebook using kubectl. If you are using OctaiPipe from our infrastructure, contact your support specialist.

1.5.1#

Release Date: May 24, 2023

Introducing new User Interface module, explicit device environment variables definition and other improvements.

User Interface#

  • All functionality is now assembled under one module which becomes your primary interface

  • The octaideploy CLI interface is deprecated; use the octaipipe.deployments module instead

Env Variables Definition#

  • You can now supply a dictionary of environment variables to your devices via env key in the deployment config

  • If you don’t specify any datasource-related variables, octaipipe will, as previously, assume the variables you specify under datasources are set on the Devices

  • Limitation: for now it is impossible to give different devices different variables/variable values
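
A minimal sketch of the env key in a deployment config; the variable names below are illustrative (the actual variables depend on your datasources), and per the limitation above the same values apply to every device:

```yaml
# Sketch of the env key in a deployment config.
# Variable names and values are illustrative, not OctaiPipe defaults.
env:
  INFLUX_URL: http://influxdb:8086
  INFLUX_ORG: my-org
  LOG_LEVEL: INFO
```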

Known Issues#

Entering a blank custom notebook image in Kubeflow and leaving it running can lead to a permanent 500 error for the user. This can be resolved by deleting the notebook using kubectl. If you are using OctaiPipe from our infrastructure, contact your support specialist.

1.4.0#

Release Date: April 02, 2023

This release brings the major release of Cloud Deployment as well as a number of bug fixes and improvements.

Cloud Deployment#

  • Ability to deploy all pipeline steps to the cloud

  • Model deployments to production get an API endpoint, plus an option to run them periodically

  • Monitoring database to collect metrics from all of your cloud deployments

Known Issues#

Entering a blank custom notebook image in Kubeflow and leaving it running can lead to a permanent 500 error for the user. This can be resolved by deleting the notebook using kubectl. If you are using OctaiPipe from our infrastructure, contact your support specialist.


1.3.0#

Release Date: March 22, 2023

Release 1.3.0 is a minor release that includes bug fixes and improvements.

Minor Changes and Bug Fixes#

  • Logging for Grafana deployment now shows URL pointing to dashboard rather than Grafana main page

  • Grafana theme now defaults to white

  • Fixed a bug where an error is thrown if OCTAIPIPE_DEBUG logging is set to True

  • Added database credentials to OctaiPipe docker images

  • Moved torch installation to [local] to cut down default image size

  • Switched to using local data store for model training and evaluation tests to save time on pytests

  • Added n_jobs=-1 as the default for the sklearn random forest regressor and classifier

  • Added a “no scaling” option to base pytorch model for FL to allow for custom preprocessing step

  • Added pull request template to guide pull request creation

  • Solved an issue where DeviceClient could not add a device to the database if it had been removed in the same session

  • ModelClient checks the Experiment table for the experiment before posting a record with the provided experiment ID. This prevents a silent error where a record could not be created in the database due to a missing experiment record

  • Added OctaiPipe login, where credentials saved in ~/.octaipipe/oct_creds.env are loaded into environment when OctaiPipe is imported

  • Credentials in ~/.octaipipe/oct_creds.env and ~/.octaipipe/octaipipe.json are moved to FL server on deployment

Known Issues#

The current version of Kubeflow Pipelines results in an Internal Server Error when trying to retrieve outputs of a pipeline run


1.2.10#

Release Date: March 14, 2023

A small release comprising several minor fixes in code, tests and CI.

Minor Changes and Bug Fixes#

  • Stabilized Ubuntu version for the testing environment

  • Fixed a bug where the train-val-test split wrote out only the test data

  • Updated model types in the documentation

  • Removed hardcoding secrets in tests

  • Fixed a bug where the experiment record was not created if FL failed

  • Changed default namespace for the deployments to colab

  • Secured OctaiClient calls in setup.py

  • pem files for device connection are now saved to their own folder in ~/.octaipipe

Known issues#

Latest Kubeflow pipelines version results in Internal Server Error when trying to pull pipeline output directly to Python/Jupyter Notebook using octaikube.utils.get_outputs()


1.2.2#

Release Date: February 1, 2023

Minor Changes and Bug Fixes#

  • Fixed a bug that assumed a ‘start’ field in query_values, which is incorrect for local and other non-InfluxDB sources


1.2.1#

Release Date: December 23, 2022

This is a minor release that includes bug fixes and improvements.

Minor Changes and Bug Fixes#

  • Pem files used for device connection are now stored

  • Experiment record is now created even if the experiment fails

  • Fixed train-val-test splitting for local files

  • Torch added to setup

Known Issues#

Cannot return pipeline output directly in Python/Jupyter Notebook using ock.utils.get_outputs() due to internal Kubeflow error


1.2.0#

Release Date: December 22, 2022

In this release we are thrilled to introduce Federated Learning.

Federated Learning#

  • OctaiFL and associated classes implemented.

  • OctaiFL.run() sends client images to selected devices using OctaiDeploy and spins up an aggregation server in Kubernetes.

  • In the current version, the run() method waits until training is done and then tears down resources.

  • The experimentation concept was added to OctaiClient; each FL run creates an experiment record with linked models.

  • An Evaluation table was created to store model evaluation metrics and status after each training cycle.

Minor Changes and Bug Fixes#

  • Feature lagger target label hotfix

  • Model client find id hotfix

  • Now uses a python3.9 virtual environment

  • Removal of prophet

  • Added sleep time to pytest

  • Requests retry hotfix

  • Removed mlprodict and pyquicksetup

  • Updated readme for xgboost

  • Added workflow and related dockerfile

  • Version for docs taken from env variable

  • Odbc build hotfix

  • Xgboost feature names hotfix

  • Pip install local for xgboost

  • Log file size limit fix


1.1.0#

Release Date: November 28, 2022

In this release we are excited to introduce AutoML pipelines. We also focus on improved security and deployment features.

New features and improvements#

  • Security enhancements to OctaiClient, with API endpoints secured so that only users registered in an organization and t-dab admins can access them.

  • Introduction of AutoML pipelines to OctaiPipe core.

  • Improved Custom step integration with OctaiClient.

Minor Changes and Bug Fixes#

  • Added more standard models into OctaiPipe Library

  • Fixed bugs with model filenames and logging in ModelBase

  • Other minor bugfixes


1.0.4#

Release Date: October 31, 2022

New features and improvements#

  • Introduction of pyproject.toml, which takes care of build dependency installation

  • Hotfix in model_client.find_previous_version()

  • Hotfixes in test cases


1.0.0#

Release Date: October 20, 2022

The 1.0 release is the first step towards producing a strong IoT ML market offering. The MVP consists of the development platform, allowing us to develop, deploy and monitor ML solutions, with deployment available to edge or cloud environments.

The first users of the platform are T-DAB data scientists, using it on internal projects. This will provide valuable feedback to the development team for bug fixes, performance improvements and new features.

Core Pipeline Steps:#

  • Preprocessing

  • Feature Engineering

  • Model Training

  • Model Inference

  • Data Drift

Edge#

  • Edge Device registration & infrastructure installation

  • Running Pipelines on the edge devices

  • Deploying Models to Edge Devices

  • Grafana monitoring dashboard at the edge

Cloud#

  • Running Pipelines in Kubeflow (using OctaiKube)

  • Grafana monitoring dashboard on the cloud

OctaiClient Metadata#

  • Recording Model parameters to the OctaiClient DB

  • Recording deployment parameters to the OctaiClient DB