From 80bb98cfdd856f6f05f38586de728a63416285f5 Mon Sep 17 00:00:00 2001
From: xanier
Date: Wed, 9 Dec 2020 02:14:19 -0700
Subject: [PATCH 01/12] updated the documentation

---
 README.md                                  | 20 ++---
 cvat/apps/documentation/installation.md    | 27 +-----
 .../installation_automatic_annotation.md   | 84 +++++++++++++++++++
 3 files changed, 95 insertions(+), 36 deletions(-)
 create mode 100644 cvat/apps/documentation/installation_automatic_annotation.md

diff --git a/README.md b/README.md
index de2baca2c9ea..9721f67c311f 100644
--- a/README.md
+++ b/README.md
@@ -61,16 +61,16 @@ via its command line tool and Python library.
 ## Deep learning models for automatic labeling
-| Name | Type | Framework |
-| ------------------------------------------------------------------------------------------------------- | ---------- | ---------- |
-| [Deep Extreme Cut](/serverless/openvino/dextr/nuclio) | interactor | OpenVINO |
-| [Faster RCNN](/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | detector | TensorFlow |
-| [Mask RCNN](/serverless/openvino/omz/public/mask_rcnn_inception_resnet_v2_atrous_coco/nuclio) | detector | OpenVINO |
-| [YOLO v3](/serverless/openvino/omz/public/yolo-v3-tf/nuclio) | detector | OpenVINO |
-| [Text detection v4](/serverless/openvino/omz/intel/text-detection-0004/nuclio) | detector | OpenVINO |
-| [Semantic segmentation for ADAS](/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | detector | OpenVINO |
-| [Mask RCNN](/serverless/tensorflow/matterport/mask_rcnn/nuclio) | detector | TensorFlow |
-| [Object reidentification](/serverless/openvino/omz/intel/person-reidentification-retail-300/nuclio) | reid | OpenVINO |
+| Name | Type | Framework | CPU | GPU |
+| ------------------------------------------------------------------------------------------------------- | ---------- | ---------- | --- | --- |
+| [Deep Extreme Cut](/serverless/openvino/dextr/nuclio) | interactor | OpenVINO | X |
+| [Faster RCNN](/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | detector | TensorFlow | X | X |
+| [Mask RCNN](/serverless/openvino/omz/public/mask_rcnn_inception_resnet_v2_atrous_coco/nuclio) | detector | OpenVINO | X |
+| [YOLO v3](/serverless/openvino/omz/public/yolo-v3-tf/nuclio) | detector | OpenVINO | X |
+| [Text detection v4](/serverless/openvino/omz/intel/text-detection-0004/nuclio) | detector | OpenVINO | X |
+| [Semantic segmentation for ADAS](/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | detector | OpenVINO | X |
+| [Mask RCNN](/serverless/tensorflow/matterport/mask_rcnn/nuclio) | detector | TensorFlow | X |
+| [Object reidentification](/serverless/openvino/omz/intel/person-reidentification-retail-300/nuclio) | reid | OpenVINO | X |
 ## Online demo: [cvat.org](https://cvat.org)

diff --git a/cvat/apps/documentation/installation.md b/cvat/apps/documentation/installation.md
index 8abd316daa1e..9b23485cd392 100644
--- a/cvat/apps/documentation/installation.md
+++ b/cvat/apps/documentation/installation.md
@@ -290,32 +290,7 @@ docker-compose -f docker-compose.yml -f components/analytics/docker-compose.anal
 ### Semi-automatic and automatic annotation
-- You have to install `nuctl` command line tool to build and deploy serverless
-  functions. Download [the latest release](https://github.com/nuclio/nuclio/releases).
-- Create `cvat` project inside nuclio dashboard where you will deploy new
-  serverless functions and deploy a couple of DL models.
Commands below should - be run only after CVAT has been installed using docker-compose because it - runs nuclio dashboard which manages all serverless functions. - -```bash -nuctl create project cvat -``` - -```bash -nuctl deploy --project-name cvat \ - --path serverless/openvino/dextr/nuclio \ - --volume `pwd`/serverless/openvino/common:/opt/nuclio/common \ - --platform local -``` - -```bash -nuctl deploy --project-name cvat \ - --path serverless/openvino/omz/public/yolo-v3-tf/nuclio \ - --volume `pwd`/serverless/openvino/common:/opt/nuclio/common \ - --platform local -``` - -Note: see [deploy.sh](/serverless/deploy.sh) script for more examples. +Please follow [instructions](/cvat/apps/documentation/installation_automatic_annotation.md) ### Stop all containers diff --git a/cvat/apps/documentation/installation_automatic_annotation.md b/cvat/apps/documentation/installation_automatic_annotation.md new file mode 100644 index 000000000000..938ae00130cb --- /dev/null +++ b/cvat/apps/documentation/installation_automatic_annotation.md @@ -0,0 +1,84 @@ + +### Semi-automatic and automatic annotation + +- To bring up cvat with auto annotation tool, **do not use** `docker-compose up`.If you did first make sure all containers are stopped `docker-compose down` + + + From cvat root directory, you need to run: + ```bash + docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml up -d + ``` + If you did any changes to the docker-compose files, make sure to add `--build` at the end. + + To stop the containers, simply run: + + ```bash + docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml down + ``` + + +- You have to install `nuctl` command line tool to build and deploy serverless + functions. Download [version 1.5.8](https://github.com/nuclio/nuclio/releases). + It is important that the version you download matches the version in + [docker-compose.serverless.yml](/components/serverless/docker-compose.serverless.yml) + when you downloaded the nuclio give them proper permission and do a softlin + ``` + sudo chmod +x nuctl--linux-amd64 + sudo ln -sf $(pwd)/nuctl--linux-amd64 /usr/local/bin/nuctl + ``` + +- Create `cvat` project inside nuclio dashboard where you will deploy new + serverless functions and deploy a couple of DL models. Commands below should + be run only after CVAT has been installed using docker-compose because it + runs nuclio dashboard which manages all serverless functions. + + ```bash + nuctl create project cvat + ``` + + ```bash + nuctl deploy --project-name cvat \ + --path serverless/openvino/dextr/nuclio \ + --volume `pwd`/serverless/openvino/common:/opt/nuclio/common \ + --platform local + ``` + + ```bash + nuctl deploy --project-name cvat \ + --path serverless/openvino/omz/public/yolo-v3-tf/nuclio \ + --volume `pwd`/serverless/openvino/common:/opt/nuclio/common \ + --platform local + ``` + + + If your function is running on GPU, you should add `--resource-limit nvidia.com/gpu=1` to the above command or, alternatively, add gpu resources dircetly into the function.yaml see [tensorflow-fast-rcnn-gpu](../../../serverless/tensorflow/ + faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml) + + - Note: see [deploy.sh](/serverless/deploy.sh) script for more examples. + +####Debugging: + +- You can open nuclio dashboard at [localhost:8070](http://localhost:8070). Make sure status of your functions are up and running without any error. 
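Besides the dashboard, the same check can be done from the command line; a minimal sketch, assuming the local platform used throughout this guide:

```bash
# List the deployed serverless functions and their state/ports as nuclio sees them.
nuctl get function
# The corresponding containers should also show up as running.
docker ps --filter name=nuclio
```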
+ + +- To check for internal server errors, run `docker ps -a` to see the list of containers. Find the container that you are interested, e.g. `nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu`. Then check its logs by + + ```bash + docker logs + ``` + e.g., + + ```bash + docker logs nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu + ``` + + +- If you would like to debug a code inside a container, you can use vscode to directly attach to a container [instructions](https://code.visualstudio.com/docs/remote/attach-container). To apply changes, makse sure to restart the container. + ```bash + docker stop + ``` + and then + ```bash + docker start + ``` + Do not use nuclio dashboard to stop the container since with any change, it rebuilds the container and you'll lose your changes. \ No newline at end of file From 1a8095ec1abdf21889eaad96d54ff5ea325ea5d1 Mon Sep 17 00:00:00 2001 From: xanier Date: Wed, 9 Dec 2020 02:14:58 -0700 Subject: [PATCH 02/12] boosting nuclio version to 1.5.8 --- components/serverless/docker-compose.serverless.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/components/serverless/docker-compose.serverless.yml b/components/serverless/docker-compose.serverless.yml index 3309ab495138..3be469b12fad 100644 --- a/components/serverless/docker-compose.serverless.yml +++ b/components/serverless/docker-compose.serverless.yml @@ -2,7 +2,7 @@ version: '3.3' services: serverless: container_name: nuclio - image: quay.io/nuclio/dashboard:1.4.8-amd64 + image: quay.io/nuclio/dashboard:1.5.7-amd64 restart: always networks: default: From 288dcabd718209ca306792ab32db1a88abb83862 Mon Sep 17 00:00:00 2001 From: xanier Date: Wed, 9 Dec 2020 02:15:31 -0700 Subject: [PATCH 03/12] fixed bug for png alpha channel --- .../faster_rcnn_inception_v2_coco/nuclio/model_loader.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio/model_loader.py b/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio/model_loader.py index 8158eee31288..74aa85bcac1b 100644 --- a/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio/model_loader.py +++ b/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio/model_loader.py @@ -35,7 +35,8 @@ def infer(self, image): width, height = image.size if width > 1920 or height > 1080: image = image.resize((width // 2, height // 2), Image.ANTIALIAS) - image_np = np.array(image.getdata()).reshape((image.height, image.width, 3)).astype(np.uint8) + image_np = np.array(image.getdata())[:, :3].reshape( + (image.height, image.width, -1)).astype(np.uint8) image_np = np.expand_dims(image_np, axis=0) return self.session.run( From 10d48d80a8ec5aefb176233296b705ce5217847a Mon Sep 17 00:00:00 2001 From: xanier Date: Wed, 9 Dec 2020 02:16:37 -0700 Subject: [PATCH 04/12] added support for tensorflow gpu --- serverless/deploy.sh | 4 + .../nuclio/function.yaml | 134 ++++++++++++++++++ .../nuclio/main.py | 48 +++++++ .../nuclio/model_loader.py | 44 ++++++ 4 files changed, 230 insertions(+) create mode 100644 serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml create mode 100644 serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/main.py create mode 100644 serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/model_loader.py diff --git a/serverless/deploy.sh b/serverless/deploy.sh index 20face66bf55..860e0bdfaa62 100755 --- a/serverless/deploy.sh +++ b/serverless/deploy.sh @@ -46,6 +46,10 @@ nuctl deploy --project-name cvat \ --path 
"$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco/nuclio" \ --platform local +nuctl deploy --project-name cvat \ + --path "$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio" \ + --platform local + nuctl deploy --project-name cvat \ --path "$SCRIPT_DIR/pytorch/foolwood/siammask/nuclio" \ --platform local diff --git a/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml b/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml new file mode 100644 index 000000000000..d64bace3e86e --- /dev/null +++ b/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml @@ -0,0 +1,134 @@ +metadata: + name: tf-faster-rcnn-inception-v2-coco-gpu + namespace: cvat + annotations: + name: Faster RCNN via Tensorflow GPU + type: detector + framework: tensorflow + spec: | + [ + { "id": 1, "name": "person" }, + { "id": 2, "name": "bicycle" }, + { "id": 3, "name": "car" }, + { "id": 4, "name": "motorcycle" }, + { "id": 5, "name": "airplane" }, + { "id": 6, "name": "bus" }, + { "id": 7, "name": "train" }, + { "id": 8, "name": "truck" }, + { "id": 9, "name": "boat" }, + { "id":10, "name": "traffic_light" }, + { "id":11, "name": "fire_hydrant" }, + { "id":13, "name": "stop_sign" }, + { "id":14, "name": "parking_meter" }, + { "id":15, "name": "bench" }, + { "id":16, "name": "bird" }, + { "id":17, "name": "cat" }, + { "id":18, "name": "dog" }, + { "id":19, "name": "horse" }, + { "id":20, "name": "sheep" }, + { "id":21, "name": "cow" }, + { "id":22, "name": "elephant" }, + { "id":23, "name": "bear" }, + { "id":24, "name": "zebra" }, + { "id":25, "name": "giraffe" }, + { "id":27, "name": "backpack" }, + { "id":28, "name": "umbrella" }, + { "id":31, "name": "handbag" }, + { "id":32, "name": "tie" }, + { "id":33, "name": "suitcase" }, + { "id":34, "name": "frisbee" }, + { "id":35, "name": "skis" }, + { "id":36, "name": "snowboard" }, + { "id":37, "name": "sports_ball" }, + { "id":38, "name": "kite" }, + { "id":39, "name": "baseball_bat" }, + { "id":40, "name": "baseball_glove" }, + { "id":41, "name": "skateboard" }, + { "id":42, "name": "surfboard" }, + { "id":43, "name": "tennis_racket" }, + { "id":44, "name": "bottle" }, + { "id":46, "name": "wine_glass" }, + { "id":47, "name": "cup" }, + { "id":48, "name": "fork" }, + { "id":49, "name": "knife" }, + { "id":50, "name": "spoon" }, + { "id":51, "name": "bowl" }, + { "id":52, "name": "banana" }, + { "id":53, "name": "apple" }, + { "id":54, "name": "sandwich" }, + { "id":55, "name": "orange" }, + { "id":56, "name": "broccoli" }, + { "id":57, "name": "carrot" }, + { "id":58, "name": "hot_dog" }, + { "id":59, "name": "pizza" }, + { "id":60, "name": "donut" }, + { "id":61, "name": "cake" }, + { "id":62, "name": "chair" }, + { "id":63, "name": "couch" }, + { "id":64, "name": "potted_plant" }, + { "id":65, "name": "bed" }, + { "id":67, "name": "dining_table" }, + { "id":70, "name": "toilet" }, + { "id":72, "name": "tv" }, + { "id":73, "name": "laptop" }, + { "id":74, "name": "mouse" }, + { "id":75, "name": "remote" }, + { "id":76, "name": "keyboard" }, + { "id":77, "name": "cell_phone" }, + { "id":78, "name": "microwave" }, + { "id":79, "name": "oven" }, + { "id":80, "name": "toaster" }, + { "id":81, "name": "sink" }, + { "id":83, "name": "refrigerator" }, + { "id":84, "name": "book" }, + { "id":85, "name": "clock" }, + { "id":86, "name": "vase" }, + { "id":87, "name": "scissors" }, + { "id":88, "name": "teddy_bear" }, + { "id":89, "name": "hair_drier" }, + { "id":90, "name": "toothbrush" } + ] + 
+spec: + description: Faster RCNN from Tensorflow Object Detection GPU API + runtime: 'python:3.6' + handler: main:handler + eventTimeout: 30s + + build: + image: cvat/tf.faster_rcnn_inception_v2_coco_gpu + baseImage: tensorflow/tensorflow:2.1.1-gpu + + directives: + preCopy: + - kind: RUN + value: apt install curl + - kind: WORKDIR + value: /opt/nuclio + + postCopy: + - kind: RUN + value: curl -O http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz + - kind: RUN + value: tar -xzf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz && rm faster_rcnn_inception_v2_coco_2018_01_28.tar.gz + - kind: RUN + value: ln -s faster_rcnn_inception_v2_coco_2018_01_28 faster_rcnn + - kind: RUN + value: pip install pillow pyyaml + resources: + limits: + nvidia.com/gpu: "1" + + triggers: + myHttpTrigger: + maxWorkers: 2 + kind: 'http' + workerAvailabilityTimeoutMilliseconds: 10000 + attributes: + maxRequestBodySize: 33554432 # 32MB + + platform: + attributes: + restartPolicy: + name: always + maximumRetryCount: 3 diff --git a/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/main.py b/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/main.py new file mode 100644 index 000000000000..8bcad27cf1f0 --- /dev/null +++ b/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/main.py @@ -0,0 +1,48 @@ +import json +import base64 +import io +from PIL import Image +import yaml +from model_loader import ModelLoader + + +def init_context(context): + context.logger.info("Init context... 0%") + model_path = "/opt/nuclio/faster_rcnn/frozen_inference_graph.pb" + model_handler = ModelLoader(model_path) + setattr(context.user_data, 'model_handler', model_handler) + functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml")) + labels_spec = functionconfig['metadata']['annotations']['spec'] + labels = {item['id']: item['name'] for item in json.loads(labels_spec)} + setattr(context.user_data, "labels", labels) + context.logger.info("Init context...100%") + +def handler(context, event): + context.logger.info("Run faster_rcnn_inception_v2_coco model") + data = event.body + buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8'))) + threshold = float(data.get("threshold", 0.5)) + image = Image.open(buf) + + (boxes, scores, classes, num_detections) = context.user_data.model_handler.infer(image) + + results = [] + for i in range(int(num_detections[0])): + obj_class = int(classes[0][i]) + obj_score = scores[0][i] + obj_label = context.user_data.labels.get(obj_class, "unknown") + if obj_score >= threshold: + xtl = boxes[0][i][1] * image.width + ytl = boxes[0][i][0] * image.height + xbr = boxes[0][i][3] * image.width + ybr = boxes[0][i][2] * image.height + + results.append({ + "confidence": str(obj_score), + "label": obj_label, + "points": [xtl, ytl, xbr, ybr], + "type": "rectangle", + }) + + return context.Response(body=json.dumps(results), headers={}, + content_type='application/json', status_code=200) diff --git a/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/model_loader.py b/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/model_loader.py new file mode 100644 index 000000000000..74aa85bcac1b --- /dev/null +++ b/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/model_loader.py @@ -0,0 +1,44 @@ + +import numpy as np +from PIL import Image +import tensorflow.compat.v1 as tf +tf.disable_v2_behavior() + +class ModelLoader: + def __init__(self, model_path): + self.session = None + + detection_graph = 
tf.Graph() + with detection_graph.as_default(): + od_graph_def = tf.GraphDef() + with tf.gfile.GFile(model_path, 'rb') as fid: + serialized_graph = fid.read() + od_graph_def.ParseFromString(serialized_graph) + tf.import_graph_def(od_graph_def, name='') + + config = tf.ConfigProto() + config.gpu_options.allow_growth = True + self.session = tf.Session(graph=detection_graph, config=config) + + self.image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') + self.boxes = detection_graph.get_tensor_by_name('detection_boxes:0') + self.scores = detection_graph.get_tensor_by_name('detection_scores:0') + self.classes = detection_graph.get_tensor_by_name('detection_classes:0') + self.num_detections = detection_graph.get_tensor_by_name('num_detections:0') + + def __del__(self): + if self.session: + self.session.close() + del self.session + + def infer(self, image): + width, height = image.size + if width > 1920 or height > 1080: + image = image.resize((width // 2, height // 2), Image.ANTIALIAS) + image_np = np.array(image.getdata())[:, :3].reshape( + (image.height, image.width, -1)).astype(np.uint8) + image_np = np.expand_dims(image_np, axis=0) + + return self.session.run( + [self.boxes, self.scores, self.classes, self.num_detections], + feed_dict={self.image_tensor: image_np}) From cbeb4f9b01745af86bc0738ae72cc1077f382342 Mon Sep 17 00:00:00 2001 From: xanier Date: Wed, 9 Dec 2020 02:46:45 -0700 Subject: [PATCH 05/12] fixed typos --- cvat/apps/documentation/installation_automatic_annotation.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cvat/apps/documentation/installation_automatic_annotation.md b/cvat/apps/documentation/installation_automatic_annotation.md index 938ae00130cb..1c976e127180 100644 --- a/cvat/apps/documentation/installation_automatic_annotation.md +++ b/cvat/apps/documentation/installation_automatic_annotation.md @@ -1,7 +1,7 @@ ### Semi-automatic and automatic annotation -- To bring up cvat with auto annotation tool, **do not use** `docker-compose up`.If you did first make sure all containers are stopped `docker-compose down` +- To bring up cvat with auto annotation tool, **do not use** `docker-compose up`.If you did, first make sure all containers are stopped `docker-compose down` From cvat root directory, you need to run: @@ -21,7 +21,7 @@ functions. Download [version 1.5.8](https://github.com/nuclio/nuclio/releases). It is important that the version you download matches the version in [docker-compose.serverless.yml](/components/serverless/docker-compose.serverless.yml) - when you downloaded the nuclio give them proper permission and do a softlin + After downloading the nuclio, give it a proper permission and do a softlink ``` sudo chmod +x nuctl--linux-amd64 sudo ln -sf $(pwd)/nuctl--linux-amd64 /usr/local/bin/nuctl From 7352d7f5c89b8a435edf8ae00cd6f9ed2a3c5160 Mon Sep 17 00:00:00 2001 From: Ali Jahani Date: Thu, 10 Dec 2020 01:57:07 -0700 Subject: [PATCH 06/12] Update cvat/apps/documentation/installation_automatic_annotation.md Co-authored-by: Andrey Zhavoronkov --- cvat/apps/documentation/installation_automatic_annotation.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cvat/apps/documentation/installation_automatic_annotation.md b/cvat/apps/documentation/installation_automatic_annotation.md index 938ae00130cb..78306301b69f 100644 --- a/cvat/apps/documentation/installation_automatic_annotation.md +++ b/cvat/apps/documentation/installation_automatic_annotation.md @@ -21,7 +21,7 @@ functions. 
Download [version 1.5.8](https://github.com/nuclio/nuclio/releases). It is important that the version you download matches the version in [docker-compose.serverless.yml](/components/serverless/docker-compose.serverless.yml) - when you downloaded the nuclio give them proper permission and do a softlin + when you downloaded the nuclio give them proper permission and do a softlink ``` sudo chmod +x nuctl--linux-amd64 sudo ln -sf $(pwd)/nuctl--linux-amd64 /usr/local/bin/nuctl @@ -81,4 +81,4 @@ ```bash docker start ``` - Do not use nuclio dashboard to stop the container since with any change, it rebuilds the container and you'll lose your changes. \ No newline at end of file + Do not use nuclio dashboard to stop the container since with any change, it rebuilds the container and you'll lose your changes. From 9a6d71e0dab8f82401288126f24432b08f8721ee Mon Sep 17 00:00:00 2001 From: xanier Date: Tue, 15 Dec 2020 02:10:34 -0700 Subject: [PATCH 07/12] Addressing pr comments - improved documentation and removed code duplication --- .../installation_automatic_annotation.md | 51 ++++--- serverless/{deploy.sh => deploy_CPU.sh} | 7 +- serverless/deploy_GPU.sh | 15 ++ .../nuclio/function.yaml | 134 ------------------ .../nuclio/main.py | 48 ------- .../nuclio/model_loader.py | 44 ------ 6 files changed, 51 insertions(+), 248 deletions(-) rename serverless/{deploy.sh => deploy_CPU.sh} (97%) create mode 100755 serverless/deploy_GPU.sh delete mode 100644 serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml delete mode 100644 serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/main.py delete mode 100644 serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/model_loader.py diff --git a/cvat/apps/documentation/installation_automatic_annotation.md b/cvat/apps/documentation/installation_automatic_annotation.md index 400ee207d5d2..39caaa3f5d28 100644 --- a/cvat/apps/documentation/installation_automatic_annotation.md +++ b/cvat/apps/documentation/installation_automatic_annotation.md @@ -1,10 +1,10 @@ -### Semi-automatic and automatic annotation +### Semi-automatic and Automatic Annotation -- To bring up cvat with auto annotation tool, **do not use** `docker-compose up`.If you did, first make sure all containers are stopped `docker-compose down` - - From cvat root directory, you need to run: +> **⚠ WARNING: Do not use `docker-compose up`** +> If you did, make sure all containers are stopped by `docker-compose down`. +- To bring up cvat with auto annotation tool, from cvat root directory, you need to run: ```bash docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml up -d ``` @@ -27,10 +27,7 @@ sudo ln -sf $(pwd)/nuctl--linux-amd64 /usr/local/bin/nuctl ``` -- Create `cvat` project inside nuclio dashboard where you will deploy new - serverless functions and deploy a couple of DL models. Commands below should - be run only after CVAT has been installed using docker-compose because it - runs nuclio dashboard which manages all serverless functions. +- Create `cvat` project inside nuclio dashboard where you will deploy new serverless functions and deploy a couple of DL models. Commands below should be run only after CVAT has been installed using `docker-compose` because it runs nuclio dashboard which manages all serverless functions. 
```bash nuctl create project cvat @@ -49,14 +46,30 @@ --volume `pwd`/serverless/openvino/common:/opt/nuclio/common \ --platform local ``` + **Note:** + - See [deploy_cpu.sh](/serverless/deploy_cpu.sh) for more examples. + #### GPU Support + You will need to install Nvidia Container Toolkit and make sure your docker supports GPU. Follow [Nvidia docker instructions](https://www.tensorflow.org/install/docker#gpu_support). + Also you will need to add `--resource-limit nvidia.com/gpu=1` to the nuclio deployment command. + As an example, below will run on the GPU: + + ```bash + nuctl deploy tf-faster-rcnn-inception-v2-coco-gpu \ + --project-name cvat --path "serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio" --platform local \ + --base-image tensorflow/tensorflow:2.1.1-gpu \ + --desc "Faster RCNN from Tensorflow Object Detection GPU API" \ + --image cvat/tf.faster_rcnn_inception_v2_coco_gpu \ + --resource-limit nvidia.com/gpu=1 + ``` + - If your function is running on GPU, you should add `--resource-limit nvidia.com/gpu=1` to the above command or, alternatively, add gpu resources dircetly into the function.yaml see [tensorflow-fast-rcnn-gpu](../../../serverless/tensorflow/ - faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml) + **Note:** + - Since the model is loaded during deployment, the number of GPU functions you can deploy will be limited to your GPU memory. - - Note: see [deploy.sh](/serverless/deploy.sh) script for more examples. + - See [deploy_gpu.sh](/serverless/deploy_gpu.sh) script for more examples. -####Debugging: +####Debugging Nuclio Functions: - You can open nuclio dashboard at [localhost:8070](http://localhost:8070). Make sure status of your functions are up and running without any error. @@ -73,12 +86,12 @@ ``` -- If you would like to debug a code inside a container, you can use vscode to directly attach to a container [instructions](https://code.visualstudio.com/docs/remote/attach-container). To apply changes, makse sure to restart the container. +- If you would like to debug a code inside a container, you can use vscode to directly attach to a container [instructions](https://code.visualstudio.com/docs/remote/attach-container). To apply your changes, make sure to restart the container. ```bash - docker stop + docker restart ``` - and then - ```bash - docker start - ``` - Do not use nuclio dashboard to stop the container since with any change, it rebuilds the container and you'll lose your changes. + + + + > **⚠ WARNING:** + > Do not use nuclio dashboard to stop the container because with any modifications, it rebuilds the container and you will lose your changes. 
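If a GPU function deploys but inference still fails or silently falls back to CPU, it helps to confirm that the GPU is visible both on the host and inside the deployed function container. A minimal sketch, assuming the faster-rcnn GPU container name from the example above (substitute the name of whatever you actually deployed):

```bash
# 1. The host driver and the NVIDIA Container Toolkit must expose the GPU.
nvidia-smi
# 2. The deployed function container should see the same GPU.
docker exec nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu nvidia-smi
```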
\ No newline at end of file diff --git a/serverless/deploy.sh b/serverless/deploy_CPU.sh similarity index 97% rename from serverless/deploy.sh rename to serverless/deploy_CPU.sh index 860e0bdfaa62..8bea665424ab 100755 --- a/serverless/deploy.sh +++ b/serverless/deploy_CPU.sh @@ -1,4 +1,5 @@ #!/bin/bash +# Sample commands to deploy nuclio functions on GPU SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" @@ -47,15 +48,15 @@ nuctl deploy --project-name cvat \ --platform local nuctl deploy --project-name cvat \ - --path "$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio" \ + --path "$SCRIPT_DIR/pytorch/foolwood/siammask/nuclio" \ --platform local nuctl deploy --project-name cvat \ - --path "$SCRIPT_DIR/pytorch/foolwood/siammask/nuclio" \ + --path "$SCRIPT_DIR/pytorch/saic-vul/fbrs/nuclio" \ --platform local nuctl deploy --project-name cvat \ - --path "$SCRIPT_DIR/pytorch/saic-vul/fbrs/nuclio" \ + --path "$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco/nuclio" \ --platform local nuctl get function diff --git a/serverless/deploy_GPU.sh b/serverless/deploy_GPU.sh new file mode 100755 index 000000000000..f0b89649fc88 --- /dev/null +++ b/serverless/deploy_GPU.sh @@ -0,0 +1,15 @@ +#!/bin/bash +# Sample commands to deploy nuclio functions on GPU + +SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" + +nuctl create project cvat + +nuctl deploy --project-name cvat \ + --path "$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco/nuclio" \ + --platform local --base-image tensorflow/tensorflow:2.1.1-gpu \ + --desc "Faster RCNN from Tensorflow Object Detection GPU API" \ + --image cvat/tf.faster_rcnn_inception_v2_coco_gpu \ + --resource-limit nvidia.com/gpu=1 + +nuctl get function diff --git a/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml b/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml deleted file mode 100644 index d64bace3e86e..000000000000 --- a/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml +++ /dev/null @@ -1,134 +0,0 @@ -metadata: - name: tf-faster-rcnn-inception-v2-coco-gpu - namespace: cvat - annotations: - name: Faster RCNN via Tensorflow GPU - type: detector - framework: tensorflow - spec: | - [ - { "id": 1, "name": "person" }, - { "id": 2, "name": "bicycle" }, - { "id": 3, "name": "car" }, - { "id": 4, "name": "motorcycle" }, - { "id": 5, "name": "airplane" }, - { "id": 6, "name": "bus" }, - { "id": 7, "name": "train" }, - { "id": 8, "name": "truck" }, - { "id": 9, "name": "boat" }, - { "id":10, "name": "traffic_light" }, - { "id":11, "name": "fire_hydrant" }, - { "id":13, "name": "stop_sign" }, - { "id":14, "name": "parking_meter" }, - { "id":15, "name": "bench" }, - { "id":16, "name": "bird" }, - { "id":17, "name": "cat" }, - { "id":18, "name": "dog" }, - { "id":19, "name": "horse" }, - { "id":20, "name": "sheep" }, - { "id":21, "name": "cow" }, - { "id":22, "name": "elephant" }, - { "id":23, "name": "bear" }, - { "id":24, "name": "zebra" }, - { "id":25, "name": "giraffe" }, - { "id":27, "name": "backpack" }, - { "id":28, "name": "umbrella" }, - { "id":31, "name": "handbag" }, - { "id":32, "name": "tie" }, - { "id":33, "name": "suitcase" }, - { "id":34, "name": "frisbee" }, - { "id":35, "name": "skis" }, - { "id":36, "name": "snowboard" }, - { "id":37, "name": "sports_ball" }, - { "id":38, "name": "kite" }, - { "id":39, "name": "baseball_bat" }, - { "id":40, "name": "baseball_glove" }, - { "id":41, "name": "skateboard" }, - { 
"id":42, "name": "surfboard" }, - { "id":43, "name": "tennis_racket" }, - { "id":44, "name": "bottle" }, - { "id":46, "name": "wine_glass" }, - { "id":47, "name": "cup" }, - { "id":48, "name": "fork" }, - { "id":49, "name": "knife" }, - { "id":50, "name": "spoon" }, - { "id":51, "name": "bowl" }, - { "id":52, "name": "banana" }, - { "id":53, "name": "apple" }, - { "id":54, "name": "sandwich" }, - { "id":55, "name": "orange" }, - { "id":56, "name": "broccoli" }, - { "id":57, "name": "carrot" }, - { "id":58, "name": "hot_dog" }, - { "id":59, "name": "pizza" }, - { "id":60, "name": "donut" }, - { "id":61, "name": "cake" }, - { "id":62, "name": "chair" }, - { "id":63, "name": "couch" }, - { "id":64, "name": "potted_plant" }, - { "id":65, "name": "bed" }, - { "id":67, "name": "dining_table" }, - { "id":70, "name": "toilet" }, - { "id":72, "name": "tv" }, - { "id":73, "name": "laptop" }, - { "id":74, "name": "mouse" }, - { "id":75, "name": "remote" }, - { "id":76, "name": "keyboard" }, - { "id":77, "name": "cell_phone" }, - { "id":78, "name": "microwave" }, - { "id":79, "name": "oven" }, - { "id":80, "name": "toaster" }, - { "id":81, "name": "sink" }, - { "id":83, "name": "refrigerator" }, - { "id":84, "name": "book" }, - { "id":85, "name": "clock" }, - { "id":86, "name": "vase" }, - { "id":87, "name": "scissors" }, - { "id":88, "name": "teddy_bear" }, - { "id":89, "name": "hair_drier" }, - { "id":90, "name": "toothbrush" } - ] - -spec: - description: Faster RCNN from Tensorflow Object Detection GPU API - runtime: 'python:3.6' - handler: main:handler - eventTimeout: 30s - - build: - image: cvat/tf.faster_rcnn_inception_v2_coco_gpu - baseImage: tensorflow/tensorflow:2.1.1-gpu - - directives: - preCopy: - - kind: RUN - value: apt install curl - - kind: WORKDIR - value: /opt/nuclio - - postCopy: - - kind: RUN - value: curl -O http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz - - kind: RUN - value: tar -xzf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz && rm faster_rcnn_inception_v2_coco_2018_01_28.tar.gz - - kind: RUN - value: ln -s faster_rcnn_inception_v2_coco_2018_01_28 faster_rcnn - - kind: RUN - value: pip install pillow pyyaml - resources: - limits: - nvidia.com/gpu: "1" - - triggers: - myHttpTrigger: - maxWorkers: 2 - kind: 'http' - workerAvailabilityTimeoutMilliseconds: 10000 - attributes: - maxRequestBodySize: 33554432 # 32MB - - platform: - attributes: - restartPolicy: - name: always - maximumRetryCount: 3 diff --git a/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/main.py b/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/main.py deleted file mode 100644 index 8bcad27cf1f0..000000000000 --- a/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/main.py +++ /dev/null @@ -1,48 +0,0 @@ -import json -import base64 -import io -from PIL import Image -import yaml -from model_loader import ModelLoader - - -def init_context(context): - context.logger.info("Init context... 
0%") - model_path = "/opt/nuclio/faster_rcnn/frozen_inference_graph.pb" - model_handler = ModelLoader(model_path) - setattr(context.user_data, 'model_handler', model_handler) - functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml")) - labels_spec = functionconfig['metadata']['annotations']['spec'] - labels = {item['id']: item['name'] for item in json.loads(labels_spec)} - setattr(context.user_data, "labels", labels) - context.logger.info("Init context...100%") - -def handler(context, event): - context.logger.info("Run faster_rcnn_inception_v2_coco model") - data = event.body - buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8'))) - threshold = float(data.get("threshold", 0.5)) - image = Image.open(buf) - - (boxes, scores, classes, num_detections) = context.user_data.model_handler.infer(image) - - results = [] - for i in range(int(num_detections[0])): - obj_class = int(classes[0][i]) - obj_score = scores[0][i] - obj_label = context.user_data.labels.get(obj_class, "unknown") - if obj_score >= threshold: - xtl = boxes[0][i][1] * image.width - ytl = boxes[0][i][0] * image.height - xbr = boxes[0][i][3] * image.width - ybr = boxes[0][i][2] * image.height - - results.append({ - "confidence": str(obj_score), - "label": obj_label, - "points": [xtl, ytl, xbr, ybr], - "type": "rectangle", - }) - - return context.Response(body=json.dumps(results), headers={}, - content_type='application/json', status_code=200) diff --git a/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/model_loader.py b/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/model_loader.py deleted file mode 100644 index 74aa85bcac1b..000000000000 --- a/serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/model_loader.py +++ /dev/null @@ -1,44 +0,0 @@ - -import numpy as np -from PIL import Image -import tensorflow.compat.v1 as tf -tf.disable_v2_behavior() - -class ModelLoader: - def __init__(self, model_path): - self.session = None - - detection_graph = tf.Graph() - with detection_graph.as_default(): - od_graph_def = tf.GraphDef() - with tf.gfile.GFile(model_path, 'rb') as fid: - serialized_graph = fid.read() - od_graph_def.ParseFromString(serialized_graph) - tf.import_graph_def(od_graph_def, name='') - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - self.session = tf.Session(graph=detection_graph, config=config) - - self.image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') - self.boxes = detection_graph.get_tensor_by_name('detection_boxes:0') - self.scores = detection_graph.get_tensor_by_name('detection_scores:0') - self.classes = detection_graph.get_tensor_by_name('detection_classes:0') - self.num_detections = detection_graph.get_tensor_by_name('num_detections:0') - - def __del__(self): - if self.session: - self.session.close() - del self.session - - def infer(self, image): - width, height = image.size - if width > 1920 or height > 1080: - image = image.resize((width // 2, height // 2), Image.ANTIALIAS) - image_np = np.array(image.getdata())[:, :3].reshape( - (image.height, image.width, -1)).astype(np.uint8) - image_np = np.expand_dims(image_np, axis=0) - - return self.session.run( - [self.boxes, self.scores, self.classes, self.num_detections], - feed_dict={self.image_tensor: image_np}) From e0b577e9993efbdbe4919aa83c0403ce0e7f6682 Mon Sep 17 00:00:00 2001 From: xanier Date: Tue, 15 Dec 2020 02:16:36 -0700 Subject: [PATCH 08/12] removed extra spaces --- .../documentation/installation_automatic_annotation.md | 8 +------- 1 file changed, 1 
insertion(+), 7 deletions(-) diff --git a/cvat/apps/documentation/installation_automatic_annotation.md b/cvat/apps/documentation/installation_automatic_annotation.md index 39caaa3f5d28..e3343211ffd5 100644 --- a/cvat/apps/documentation/installation_automatic_annotation.md +++ b/cvat/apps/documentation/installation_automatic_annotation.md @@ -16,7 +16,6 @@ docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml down ``` - - You have to install `nuctl` command line tool to build and deploy serverless functions. Download [version 1.5.8](https://github.com/nuclio/nuclio/releases). It is important that the version you download matches the version in @@ -48,6 +47,7 @@ ``` **Note:** - See [deploy_cpu.sh](/serverless/deploy_cpu.sh) for more examples. + #### GPU Support You will need to install Nvidia Container Toolkit and make sure your docker supports GPU. Follow [Nvidia docker instructions](https://www.tensorflow.org/install/docker#gpu_support). Also you will need to add `--resource-limit nvidia.com/gpu=1` to the nuclio deployment command. @@ -62,8 +62,6 @@ --resource-limit nvidia.com/gpu=1 ``` - - **Note:** - Since the model is loaded during deployment, the number of GPU functions you can deploy will be limited to your GPU memory. @@ -73,7 +71,6 @@ - You can open nuclio dashboard at [localhost:8070](http://localhost:8070). Make sure status of your functions are up and running without any error. - - To check for internal server errors, run `docker ps -a` to see the list of containers. Find the container that you are interested, e.g. `nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu`. Then check its logs by ```bash @@ -85,13 +82,10 @@ docker logs nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu ``` - - If you would like to debug a code inside a container, you can use vscode to directly attach to a container [instructions](https://code.visualstudio.com/docs/remote/attach-container). To apply your changes, make sure to restart the container. ```bash docker restart ``` - - > **⚠ WARNING:** > Do not use nuclio dashboard to stop the container because with any modifications, it rebuilds the container and you will lose your changes. 
\ No newline at end of file From c018dd0044598463d6c61d9997441c6a12fea588 Mon Sep 17 00:00:00 2001 From: xanier Date: Tue, 15 Dec 2020 02:19:36 -0700 Subject: [PATCH 09/12] Update nuclio to 1.5.8 --- components/serverless/docker-compose.serverless.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/components/serverless/docker-compose.serverless.yml b/components/serverless/docker-compose.serverless.yml index 3be469b12fad..de94f6166b83 100644 --- a/components/serverless/docker-compose.serverless.yml +++ b/components/serverless/docker-compose.serverless.yml @@ -2,7 +2,7 @@ version: '3.3' services: serverless: container_name: nuclio - image: quay.io/nuclio/dashboard:1.5.7-amd64 + image: quay.io/nuclio/dashboard:1.5.8-amd64 restart: always networks: default: From 7425adb4c2bd362d098ffb1f0addf7faf673932b Mon Sep 17 00:00:00 2001 From: xanier Date: Tue, 15 Dec 2020 02:21:57 -0700 Subject: [PATCH 10/12] fixed typo --- serverless/deploy_CPU.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/serverless/deploy_CPU.sh b/serverless/deploy_CPU.sh index 8bea665424ab..b970f825e2f2 100755 --- a/serverless/deploy_CPU.sh +++ b/serverless/deploy_CPU.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Sample commands to deploy nuclio functions on GPU +# Sample commands to deploy nuclio functions on CPU SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" From cbe7f9c1fd2e713f87ea1d16593b0ff02a2f7452 Mon Sep 17 00:00:00 2001 From: xanier Date: Tue, 15 Dec 2020 02:24:23 -0700 Subject: [PATCH 11/12] removed extra cpu deployment --- serverless/deploy_CPU.sh | 4 ---- 1 file changed, 4 deletions(-) diff --git a/serverless/deploy_CPU.sh b/serverless/deploy_CPU.sh index b970f825e2f2..a86149fef7ab 100755 --- a/serverless/deploy_CPU.sh +++ b/serverless/deploy_CPU.sh @@ -55,8 +55,4 @@ nuctl deploy --project-name cvat \ --path "$SCRIPT_DIR/pytorch/saic-vul/fbrs/nuclio" \ --platform local -nuctl deploy --project-name cvat \ - --path "$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco/nuclio" \ - --platform local - nuctl get function From 18c18dd037726f12146dc679987828b365cfea77 Mon Sep 17 00:00:00 2001 From: xanier Date: Tue, 15 Dec 2020 02:25:16 -0700 Subject: [PATCH 12/12] renamed files --- serverless/{deploy_CPU.sh => deploy_cpu.sh} | 0 serverless/{deploy_GPU.sh => deploy_gpu.sh} | 0 2 files changed, 0 insertions(+), 0 deletions(-) rename serverless/{deploy_CPU.sh => deploy_cpu.sh} (100%) rename serverless/{deploy_GPU.sh => deploy_gpu.sh} (100%) diff --git a/serverless/deploy_CPU.sh b/serverless/deploy_cpu.sh similarity index 100% rename from serverless/deploy_CPU.sh rename to serverless/deploy_cpu.sh diff --git a/serverless/deploy_GPU.sh b/serverless/deploy_gpu.sh similarity index 100% rename from serverless/deploy_GPU.sh rename to serverless/deploy_gpu.sh
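Once a detector is deployed, it can be smoke-tested independently of the CVAT UI by posting a request shaped the way the handler in `main.py` expects: a base64-encoded image plus an optional `threshold`. A minimal sketch; the port `55000` is only a placeholder (take the real one from `nuctl get function` or the nuclio dashboard), and `image.jpg` is any local test image:

```bash
# Encode a test image and send it to the function's HTTP trigger.
IMAGE_B64=$(base64 -w0 image.jpg)
curl -s -X POST http://localhost:55000 \
     -H 'Content-Type: application/json' \
     -d '{"image": "'"${IMAGE_B64}"'", "threshold": 0.5}'
# The response is a JSON list of detections: label, confidence, and box points.
```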