feat(examples): add AI Wire Component Sense-Hat demo (#3824)
* Added AI Wire Component Sense-Hat demo folder

Added titles to the steps

Added forgotten copyright notice

Signed-off-by: Mattia Dal Ben <[email protected]>

Removed unused evaluation script

Added forgotten copyright notice

Signed-off-by: Mattia Dal Ben <[email protected]>

Add empty folder to tracking to ensure Triton is happy

Signed-off-by: Mattia Dal Ben <[email protected]>

* Removed copy-paste error

Signed-off-by: Mattia Dal Ben <[email protected]>

* Added copyright notice to model configuration files

* Added copyright notice to snapshot files

Signed-off-by: Mattia Dal Ben <[email protected]>

* Added forgotten ensemble pipeline folder

Signed-off-by: Mattia Dal Ben <[email protected]>

* Added note about used backends

Signed-off-by: Mattia Dal Ben <[email protected]>

* Fixed typo

Signed-off-by: Mattia Dal Ben <[email protected]>
mattdibi authored Jun 8, 2022
1 parent 7b6d6c1 commit 38f3f6b
Showing 15 changed files with 2,411 additions and 0 deletions.
159 changes: 159 additions & 0 deletions kura/examples/scenarios/org.eclipse.kura.example.ai/README.md
@@ -0,0 +1,159 @@
# Kura AI Wire Component demo

Kura AI Wire Component SenseHat-based demo.

The goal of this repository is to demo the AI inference capabilities of Kura Wires through the use of the [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server). This project trains an [Autoencoder](https://en.wikipedia.org/wiki/Autoencoder) in [TensorFlow](https://www.tensorflow.org/) on data collected from a Raspberry Pi equipped with a [Sense HAT](https://www.raspberrypi.com/products/sense-hat/), and uses it to detect anomalies in the Sense HAT readings.

## Project structure

The repository is organized into three main directories:
- **models**: is the Triton server [model repository](https://github.com/triton-inference-server/server/blob/main/docs/model_repository.md) and contains the trained models generated by the training environment.
- **training**: contains the training environment, which comprises the dataset and the sources for generating the autoencoder model for anomaly detection.
- **snapshots**: contains the Kura snapshots for the Wire Graph used for training, the one used for the inference, and the pre-configured SenseHat driver and assets.

## Prerequisites

To run this demo on a Raspberry Pi board, the following requirements must be met:
- Configured SenseHat: see [SenseHat documentation](https://www.raspberrypi.com/documentation/accessories/sense-hat.html)
- The I2C interface must be enabled using `sudo raspi-config`
- The following Deployment Packages must be installed: `org.eclipse.kura.ai.triton.server_*.dp`, `org.eclipse.kura.wire.ai.component.provider_*.dp`, `org.eclipse.kura.example.driver.sensehat_*.dp`, `org.eclipse.kura.wire.script.filter`
- Apply the [driver snapshot](snapshots/sensehat-driver.xml) and verify it works by selecting the "asset-sensehat" and clicking on "Data": a reading of the values should be successfully triggered
- Apply the [H2DB configuration snapshot](snapshots/h2-config.xml): this will create an H2 web server running on http://192.168.2.8:9123 (replace 192.168.2.8 with your Raspberry Pi's IP address)
- Open port 9123 in Kura Firewall

## Steps to reproduce the demo

- **Step 1: Data Collection** is optional: some data is already available in the [training folder](training/). However, since your environmental conditions may differ from those under which that data was collected, recollecting the data is recommended for a reliable anomaly detector.

- **Step 2: Training** performs the training of the models that are later loaded in the inference server.

- **Step 3: Inference** describes how to run the inference server and how to set up the Anomaly Detector in Kura.

## 1. Data collection

The [data collection wire graph snapshot](snapshots/graph-data-collection.xml) allows collecting data in the "sensehat" table of the default H2DB instance.

To later extract the collected data in a CSV file, access the H2 web console using username `SA` and a blank password, and execute the following statement from the H2 web console:
```sql
CALL CSVWRITE ('/home/pi/data.csv', 'SELECT * FROM "sensehat"')
```
The data will be saved in CSV format under `/home/pi/data.csv`. Collecting around 30,000 training examples is recommended.
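Before training, it can be useful to sanity-check the exported CSV. A small sketch using only the standard library (the `load_samples` helper and the column names are hypothetical; match them to your actual H2 export):

```python
import csv
import io

def load_samples(csv_file):
    """Parse a CSV export into a list of float feature dicts,
    skipping any non-numeric timestamp column."""
    reader = csv.DictReader(csv_file)
    return [
        {k: float(v) for k, v in row.items() if k != "TIMESTAMP"}
        for row in reader
    ]

# Quick check on a synthetic two-row export (illustrative column names):
sample = io.StringIO(
    "ACC_X,ACC_Y,ACC_Z\n"
    "0.01,0.02,0.98\n"
    "0.00,0.03,0.99\n"
)
rows = load_samples(sample)
print(len(rows))  # 2
```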

## 2. Training

### Training environment setup

The creation of a Python virtual environment is highly recommended. Create a new environment with the following:

```bash
python3 -m venv .venv
```

Activate it with:

```bash
source .venv/bin/activate
```

Then update `pip` and install the training environment requirements:

```bash
pip3 install --upgrade pip
```

```bash
pip3 install -r training/requirements.txt
```

### Model training

Decompress the datasets:

```bash
cd training && unzip *.zip
```

Train the model on the data provided in this repository with:

```bash
./train.py
```

Train script options:

```bash
usage: train.py [-h] [-t TRAIN_DATA_PATH] [-s SAVED_MODEL_NAME]

Training script for Kura AI Wire Component anomaly detection

optional arguments:
  -h, --help            show this help message and exit
  -t TRAIN_DATA_PATH, --train_data_path TRAIN_DATA_PATH
                        Path to .csv training set (default: new-train-raw.csv)
  -s SAVED_MODEL_NAME, --saved_model_name SAVED_MODEL_NAME
                        Folder where the trained model will be saved to
                        (default: saved_model/autoencoder)
```
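Autoencoder-based detectors are typically trained on normalized features, so that no single sensor dominates the reconstruction error. As an illustration only (the actual preprocessing is defined by `training/train.py` and the `preprocessor` model, not by this sketch), a minimal per-column min-max scaling:

```python
def min_max_scale(rows):
    """Scale each feature column to [0, 1] using per-column min/max."""
    cols = list(zip(*rows))
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = hi - lo or 1.0  # avoid division by zero on constant columns
        scaled_cols.append([(v - lo) / span for v in col])
    # transpose back to row-major order
    return [list(r) for r in zip(*scaled_cols)]

data = [[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]]
print(min_max_scale(data))  # [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
```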

Move the trained model into the Triton model repository and rename it to `model.savedmodel`:

```bash
mkdir models/tf_autoencoder_fp32/1/
```

```bash
cp -r training/saved_model/autoencoder models/tf_autoencoder_fp32/1/model.savedmodel
```

## 3. Inference

### Run the inference server

To run these models inside Triton, navigate to this repository's root and run:

```bash
docker run --rm \
  -p4000:8000 \
  -p4001:8001 \
  -p4002:8002 \
  --shm-size=150m \
  -v $(pwd)/models:/models \
  nvcr.io/nvidia/tritonserver:22.01-py3 \
  tritonserver --model-repository=/models --model-control-mode=explicit
```

> **Note**: This demo leverages the `ensemble`, `python` and `tensorflow` backends of the Triton Inference Server; only containers built with these backends will work.

Expected models folder structure:

```bash
models
├── ensemble_pipeline
│   ├── 1
│   └── config.pbtxt
├── postprocessor
│   ├── 1
│   │   └── model.py
│   └── config.pbtxt
├── preprocessor
│   ├── 1
│   │   └── model.py
│   └── config.pbtxt
└── tf_autoencoder_fp32
    ├── 1
    │   └── model.savedmodel
    │       ├── assets
    │       ├── keras_metadata.pb
    │       ├── saved_model.pb
    │       └── variables
    │           ├── variables.data-00000-of-00001
    │           └── variables.index
    └── config.pbtxt
```

### Kura setup

First, create an `org.eclipse.kura.ai.triton.server.TritonServerService` instance under "Services". Configure it by setting the **Nvidia Triton Server address** to the IP of the machine where the inference server is running. Add `preprocessor,postprocessor,tf_autoencoder_fp32,ensemble_pipeline` to the **Inference Models** list. If you used the command above to run the docker container, you shouldn't need to modify **Nvidia Triton Server ports**.

After setting up the inference service, apply the [anomaly detection snapshot](snapshots/graph-anomaly-detector.xml) to create the Kura Wire Graph that performs anomaly detection. In summary, the graph reads the inputs from the "asset-sensehat" asset every second, performs inference using an AI Wire Component, and processes the outputs with a Script Filter that colors the Sense HAT LED matrix red if an anomaly is detected and green otherwise. Since the accelerometer features were included in the training data, simply moving the Raspberry Pi is enough to trigger an anomaly detection.

A possible expansion of this demo could use the other output of the inference process, namely `ANOMALY_SCORE0`, to modulate the brightness of the LED coloring according to the extent of the measured anomaly.
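As a sketch of that idea (the function name, threshold, and color ramp below are illustrative assumptions, not part of the demo code):

```python
def score_to_rgb(score, threshold=1.0, saturation=3.0):
    """Map an anomaly score to an (R, G, B) triple for the LED matrix.

    At or below the threshold the matrix stays green; above it, the red
    channel brightens with the score, saturating at `saturation`.
    """
    if score <= threshold:
        return (0, 255, 0)
    frac = min((score - threshold) / (saturation - threshold), 1.0)
    return (int(64 + 191 * frac), 0, 0)

print(score_to_rgb(0.5))  # (0, 255, 0)
print(score_to_rgb(3.0))  # (255, 0, 0)
```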
@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore
@@ -0,0 +1,182 @@
# Copyright (c) 2022 Eurotech and/or its affiliates and others
#
# This program and the accompanying materials are made
# available under the terms of the Eclipse Public License 2.0
# which is available at https://www.eclipse.org/legal/epl-2.0/
#
# SPDX-License-Identifier: EPL-2.0
#
# Contributors:
# Eurotech

name: "ensemble_pipeline"
platform: "ensemble"
max_batch_size: 0
input [
  {
    name: "ACC_X"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
input [
  {
    name: "ACC_Y"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
input [
  {
    name: "ACC_Z"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
input [
  {
    name: "GYRO_X"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
input [
  {
    name: "GYRO_Y"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
input [
  {
    name: "GYRO_Z"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
input [
  {
    name: "HUMIDITY"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
input [
  {
    name: "PRESSURE"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
input [
  {
    name: "TEMP_HUM"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
input [
  {
    name: "TEMP_PRESS"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
output [
  {
    name: "ANOMALY_SCORE0"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
output [
  {
    name: "ANOMALY0"
    data_type: TYPE_BOOL
    dims: [ 1 ]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "preprocessor"
      model_version: -1
      input_map {
        key: "ACC_X"
        value: "ACC_X"
      }
      input_map {
        key: "ACC_Y"
        value: "ACC_Y"
      }
      input_map {
        key: "ACC_Z"
        value: "ACC_Z"
      }
      input_map {
        key: "GYRO_X"
        value: "GYRO_X"
      }
      input_map {
        key: "GYRO_Y"
        value: "GYRO_Y"
      }
      input_map {
        key: "GYRO_Z"
        value: "GYRO_Z"
      }
      input_map {
        key: "HUMIDITY"
        value: "HUMIDITY"
      }
      input_map {
        key: "PRESSURE"
        value: "PRESSURE"
      }
      input_map {
        key: "TEMP_HUM"
        value: "TEMP_HUM"
      }
      input_map {
        key: "TEMP_PRESS"
        value: "TEMP_PRESS"
      }
      output_map {
        key: "INPUT0"
        value: "preprocess_out"
      }
    },
    {
      model_name: "tf_autoencoder_fp32"
      model_version: -1
      input_map {
        key: "INPUT0"
        value: "preprocess_out"
      }
      output_map {
        key: "OUTPUT0"
        value: "autoencoder_output"
      }
    },
    {
      model_name: "postprocessor"
      model_version: -1
      input_map {
        key: "RECONSTR0"
        value: "autoencoder_output"
      }
      input_map {
        key: "ORIG0"
        value: "preprocess_out"
      }
      output_map {
        key: "ANOMALY_SCORE0"
        value: "ANOMALY_SCORE0"
      }
      output_map {
        key: "ANOMALY0"
        value: "ANOMALY0"
      }
    }
  ]
}
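The `postprocessor` step above pairs the preprocessed input (`ORIG0`) with the autoencoder reconstruction (`RECONSTR0`) to produce `ANOMALY_SCORE0` and `ANOMALY0`. A minimal Python sketch of this kind of reconstruction-error scoring (the threshold value and error metric here are illustrative; the actual logic lives in `models/postprocessor/1/model.py`):

```python
def postprocess(orig, reconstr, threshold=0.05):
    """Reconstruction-error anomaly check: mean squared error between the
    preprocessed input and its autoencoder reconstruction.

    Returns (anomaly_score, is_anomaly)."""
    mse = sum((o - r) ** 2 for o, r in zip(orig, reconstr)) / len(orig)
    return mse, mse > threshold

# A perfect reconstruction yields a zero score and no anomaly:
score, is_anomaly = postprocess([0.5, 0.5], [0.5, 0.5])
print(score, is_anomaly)  # 0.0 False

# A poor reconstruction exceeds the threshold:
score, is_anomaly = postprocess([0.5, 0.5], [0.9, 0.1])
print(is_anomaly)  # True
```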
