Model Inference Step#

To run this step locally, use the step name: model_inference

The ML lifecycle can be broken into two distinct phases. The first is training, in which the model is created, or “trained”, by running a specified subset of the dataset through it. The second is inference, in which the trained model is deployed to make predictions on live data, producing actionable outputs.

The inference step follows this workflow; each stage corresponds to a section of the inference config file:

  1. Load an existing model:

    • model_specs

  2. Load live test data:

    • input_data_specs

  3. Perform model inference at intervals:

    • run_specs

  4. Save live predictions:

    • output_data_specs
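The four stages above can be sketched as a minimal Python loop (the helper names here are hypothetical; the real step is driven entirely by the config file):

```python
import time

def run_inference(model, load_live_data, save_predictions,
                  prediction_period_s=10, max_iterations=None):
    """Sketch of the inference workflow: load data, predict, save, sleep."""
    i = 0
    while max_iterations is None or i < max_iterations:
        data = load_live_data()           # 2. load live test data (input_data_specs)
        preds = model.predict(data)       # 3. perform inference (model_specs)
        save_predictions(preds)           # 4. save predictions (output_data_specs)
        time.sleep(prediction_period_s)   # wait until the next interval (run_specs)
        i += 1
```

In the real step, `max_iterations` has no equivalent; inference runs continuously at the configured interval.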

The following is an example of a config file together with descriptions of its parts.

Step config example#

  name: model_inference

  input_data_specs:
    datastore_type: influxdb
    query_type: dataframe
    query_template_path: ./configs/data/inference_query.txt
    query_values:
      start: 5d
      bucket: live-metrics
      measurement: def-metrics
      tags:
        MY_TAG: value_0
    data_converter: {}

  output_data_specs:
    - datastore_type: influxdb
      settings:
        bucket: "test-write"
        measurement: "model-outputs"
    - datastore_type: local
      settings:
        path: "./logs/results"

  model_specs:
    name: Machine_RUL
    type: ridge_reg
    version: "2.0"

  run_specs:
    prediction_period: 10s
    save_results: True
    onnx_pred: false
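Once the YAML above is parsed into a dictionary, a quick sanity check can confirm that all four sections are present. This validator is a hypothetical sketch, not part of OctaiPipe:

```python
# Hypothetical check: the inference workflow reads these four config sections.
REQUIRED_SECTIONS = ("input_data_specs", "output_data_specs",
                     "model_specs", "run_specs")

def missing_sections(config: dict) -> list:
    """Return the required sections absent from a parsed config dict."""
    return [section for section in REQUIRED_SECTIONS if section not in config]
```

For the example config above, `missing_sections` would return an empty list.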

Input and Output Data Specs#

input_data_specs and output_data_specs follow a standard format for all the pipeline steps; see Octaipipe Steps.
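Because output_data_specs is a list, a single run can write to several destinations, as in the example config above (InfluxDB plus a local path). A hypothetical fan-out sketch, with writer functions standing in for real datastore clients:

```python
def save_to_all(predictions, output_data_specs, writers):
    """Write predictions to every destination listed in output_data_specs."""
    for spec in output_data_specs:
        writer = writers[spec["datastore_type"]]   # e.g. "influxdb" or "local"
        writer(predictions, **spec.get("settings", {}))
```

Each entry's `settings` block is passed through to the matching writer unchanged.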

Model Specs#

  model_specs:
    name: Machine_RUL
    type: ridge_reg
    version: '2.0'

Specifications of the model to be loaded for inference. See Model Training Step.
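For illustration only, one way a loader might map these specs to an artifact file. The actual registry layout and file naming are defined by OctaiPipe, so treat every name in this sketch as an assumption:

```python
def model_file_name(model_specs: dict, onnx_pred: bool = False) -> str:
    """Build a hypothetical artifact name from model_specs (assumed scheme)."""
    ext = "onnx" if onnx_pred else "joblib"
    return (f"{model_specs['name']}_{model_specs['type']}"
            f"_v{model_specs['version']}.{ext}")
```

The `onnx_pred` flag mirrors the run_specs option of the same name: it selects the ONNX export rather than the joblib file.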

Run Specs#

  run_specs:
    prediction_period: 10s
    save_results: True
    onnx_pred: false

  • prediction_period: the time interval to sleep between consecutive predictions.

  • save_results: boolean; whether to save the results both locally and to the output destinations in InfluxDB or the SQL database, as defined in output_data_specs.

  • onnx_pred: boolean; whether to use the ONNX file of the trained model for inference. If false, the joblib model file is used.
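The prediction_period value above uses a duration suffix (10s). A hedged sketch of parsing such a value into seconds, assuming s/m/h suffixes; the exact syntax OctaiPipe accepts may differ:

```python
import re

def parse_period(period: str) -> float:
    """Parse a duration like '10s', '5m', or '1h' into seconds (assumed syntax)."""
    match = re.fullmatch(r"(\d+(?:\.\d+)?)([smh])", period)
    if match is None:
        raise ValueError(f"unrecognised period: {period!r}")
    value, unit = match.groups()
    return float(value) * {"s": 1, "m": 60, "h": 3600}[unit]
```

Under these assumptions, the example config's `10s` becomes a 10-second sleep between predictions.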