          Intelligent Edge

          Operation Guide

          A real "muck dumping" scenario and its solution are described in Overview. Because the muck recognition model is a private model and cannot be made public, this chapter uses ssd_mobilenet_v1_coco_2017_11_17, an open-source model available on GitHub, as an alternative model to demonstrate the complete edge video AI workflow.

          Prerequisite Preparation

          • You should have a Baidu AI Cloud account. If not, click Register.
          • A Raspberry Pi should be available. A Raspberry Pi 4 is used in this experiment; other Raspberry Pi models (such as the Raspberry Pi 3 Model B+) or an Ubuntu 16.04 virtual machine can also be used.
          • A camera should be available. A Raspberry Pi Camera Module v2 is used in this experiment; other [USB cameras](https://item.jd.com/31856488610.html) or IP cameras can also be used.
          • An object recognition model should be available. ssd_mobilenet_v1_coco_2017_11_17 is used in this experiment; it can detect 90 kinds of objects. For the full list, see mscoco_label_map.
          • The Raspberry Pi has been connected to the cloud BIE. For the configuration process, please see Getting Started.

          Simulation Scenarios

          1. The camera is connected to the Raspberry Pi to detect objects in its field of view in real time.
          2. When a target object is detected, the extracted frame image is saved and a message is sent to the edge hub module synchronously. If no target object is detected, the extracted frame image is discarded.
          3. Multiple target objects can be detected. The objects detected in this experiment include scissors, laptops, books, keyboards and people.


          Through this tutorial, you can learn

          1. How to build an edge AI hardware environment.
          2. How to configure an edge AI module.
          3. How to verify an edge AI detection result.


          Operating Procedures

          How to build an edge AI hardware environment

          A Raspberry Pi camera module, a USB camera or an IP camera can be used, and each type of camera is configured differently:

          • Raspberry Pi camera module: For building an environment for the Raspberry Pi camera module, please see Installation and test of Raspberry Pi 3B cameras.
          • USB camera: For building an environment for the USB camera, please see the official documentation Use a standard USB camera.

            By default, after the Raspberry Pi camera module or the USB camera is connected to the Raspberry Pi, the device /dev/video0 is added to the system. The parameter 0 is used in the subsequent [video-infer module configuration](#module-4-video-infer).

          • IP camera: The Raspberry Pi accesses the IP camera through the RTSP protocol. You can first test whether the IP camera can be accessed over RTSP with a tool such as VLC media player, as shown below:

            vlc1.png

            vlc2.png

            The general format of an RTSP address is rtsp://<username>:<password>@<ip>:<port>/<Streaming/channels/stream_number>, and the parameters are explained as follows:

            • <username>: Login name of the camera; generally it can be found on the camera base.
            • <password>: Login password of the camera; generally it can be found on the camera base.
            • <ip>: The IP address assigned to the camera by the router/switch.
            • <port>: The port number of the RTSP service; it generally defaults to 554.
            • <Streaming/channels/stream_number>: Camera channel.
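
            Besides VLC, you can also verify the video source programmatically. The snippet below is a minimal sketch using Python and OpenCV (intended for local testing only and not part of the demo package); replace the sample RTSP address or device index with your own values:

            import cv2

            # Use 0 for a Raspberry Pi camera module / USB camera mapped to /dev/video0,
            # or an rtsp:// address for an IP camera.
            source = "rtsp://admin:admin@192.168.1.2:554/Streaming/channels/1/"
            # source = 0

            cap = cv2.VideoCapture(source)
            ok, frame = cap.read()
            print("stream opened:", cap.isOpened(), "frame read:", ok)
            cap.release()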

          How to configure an edge AI module

          In addition to the agent module created when the Raspberry Pi was initially connected to BIE, four more modules have to be added to the edge core (the Raspberry Pi):

          | S/N | Module name | Module type | Description |
          | --- | --- | --- | --- |
          | 1 | agent | baetyl-agent | Communication module between the edge core and the cloud BIE |
          | 2 | localhub | baetyl-hub | Message module of the local MQTT broker in the edge core |
          | 3 | function-manager | baetyl-function-manager | Edge function management module |
          | 4 | function-python | baetyl-function-python36-opencv41 | Post-processing function that processes AI inference results |
          | 5 | video-infer | baetyl-video-infer | AI inference module that loads AI models and performs AI inference |

          Each module has to be configured accordingly. You can click here to download the detailed configurations: baetyl_video-infer-demo.

          Module Relation Diagram

          The calling relationship between the modules is shown in the figure below:

          module-arch.png

          Detailed Description of Module Functions:

          1. video-infer

            • Capture video data.
            • Extract frames from the video.
            • Conduct AI inference on the extracted frame images based on the AI model.
            • Call the post-processing function manager (function-manager) through the GRPC interface.
          2. function-manager

            • Receive calls from the video-infer module.
            • Dispatch the call to the specific post-processing function (function-python) for video-infer.
          3. function-python

            • The actual post-processing function, with OpenCV 4.1 pre-installed; it post-processes the AI inference results of the video-infer module.
            • Return the post-processing result to the video-infer module.
          4. video-infer:

            • Decide whether to save the extracted frame image and whether to send an MQTT message to the hub module, based on the result returned by the post-processing function.

          Module 1: Localhub

          The following storage volumes have to be bound to the localhub module:

          | Storage volume | Storage volume template | Version | Host directory | Container directory | Read only | Configuration file |
          | --- | --- | --- | --- | --- | --- | --- |
          | mt-localhub-conf | baetyl module configuration | V1 | var/db/baetyl/mt-localhub-conf/V1 | etc/baetyl | true | service.yml |
          | mt-localhub-log | Storage volume with empty directory | V1 | var/db/baetyl/mt-localhub-log | var/log/baetyl | false | None |
          | mt-localhub-data | Storage volume with empty directory | V1 | var/db/baetyl/mt-localhub-data | var/db/baetyl/data | false | None |
          • The mt-localhub-conf storage volume has a corresponding configuration file service.yml. For configuration details, please see baetyl_video-infer-demo/var/db/baetyl/mt-localhub-conf/V1/service.yml.
          • Port mapping has to be configured for the localhub module to expose the container's port 1883 to the host. The configuration is shown in the figure below: image.png

          Module 2: Function-manager

          The following storage volumes have to be bound to the function-manager module:

          | Storage volume | Storage volume template | Version | Host directory | Container directory | Read only | Storage volume configuration file |
          | --- | --- | --- | --- | --- | --- | --- |
          | mt-function-manager-conf | baetyl module configuration | V3 | var/db/baetyl/mt-function-manager-conf/V3 | etc/baetyl | true | service.yml |
          | demo-log | Storage volume with empty directory | V1 | var/db/baetyl/demo-log | var/log/baetyl | false | None |
          • The mt-function-manager-conf storage volume has a corresponding configuration file service.yml. For configuration details, please see baetyl_video-infer-demo/var/db/baetyl/mt-function-manager-conf/V3/service.yml.

          Module 3: Function-python

          The image used by function-python is: hub.baidubce.com/baetyl/baetyl-function-python36:0.1.6-opencv41, as shown in the figure below:

          image.png

          The following storage volumes have to be bound to the function-python module:

          | Storage volume | Storage volume template | Version | Host directory | Container directory | Read only | Storage volume configuration file |
          | --- | --- | --- | --- | --- | --- | --- |
          | mt-function-python-conf | baetyl module configuration | V1 | var/db/baetyl/mt-function-python-conf/V1 | etc/baetyl | true | service.yml |
          | mt-function-python-code | Custom storage volume | V2 | var/db/baetyl/mt-function-python-code/V2 | var/db/baetyl/code | true | analyse.py |
          | mt-image-data | Storage volume with empty directory | V1 | var/db/baetyl/mt-image-data | var/db/baetyl/image | false | None |
          | demo-log | Storage volume with empty directory | V1 | var/db/baetyl/demo-log | var/log/baetyl | false | None |
          • The mt-function-python-conf storage volume has a corresponding configuration file service.yml. For configuration details, please see baetyl_video-infer-demo/var/db/baetyl/mt-function-python-conf/V1/service.yml.
          • The mt-function-python-code storage volume has a corresponding code file analyse.py. For details, please see baetyl_video-infer-demo/var/db/baetyl/mt-function-python-code/V2/analyse.py. In the BIE console, find the custom storage volume mt-function-python-code, enter the storage volume, click Create an editable file, copy the Python code in analyse.py into the text box on the right of the interface, and click Save to complete the configuration, as shown in the figure below:

            mt-function-python-code2.png

          The analyse.py post-processing function is explained below:

          #!/usr/bin/env python 
          # -*- coding:utf-8 -*- 
          """
          function to analyse video infer result in python 
          """
          
          import numpy as np 
          
          location = "var/db/baetyl/image/{}.jpg" 
          #Define the objects to be recognized. The list of recognizable objects can be found in the mscoco_label_map mentioned in this article. 
          classes = { 
                  1: 'person',73: 'laptop',76: 'keyboard',77: 'cell phone',84: 'book',87: 'scissors' 
          } 
          
          def handler(event, context): 
              """
              function handler 
              """
              data = np.fromstring(event, np.float32) 
              mat = np.reshape(data, (-1, 7)) 
              objects = [] 
              scores = {} 
              
              for obj in mat: 
                  clazz = int(obj[1]) 
                  if clazz in classes: 
                      score = float(obj[2]) 
                      if classes[clazz] not in scores or scores[classes[clazz]] < score: 
                          scores[classes[clazz]] = score 
                      #Post-processing logic, if the score of the AI inference result is less than 0.6, it is directly ignored, and only objects with score>=0.6 are added to the object array 
                      if score < 0.6: 
                          continue 
                      objects.append({ 
                          'class': classes[clazz], 
                          'score': score, 
                          'left': float(obj[3]), 
                          'top': float(obj[4]), 
                          'right': float(obj[5]), 
                          'bottom': float(obj[6]) 
                      }) 
              #Define the return value        
              res = {} 
              #If len(objects) == 0 , it means no object is recognized, and the frame extracting image is discarded, otherwise, the video-infer module saves the frame extracting image. 
              res["imageDiscard"] = len(objects) == 0 
              res["imageObjects"] = objects 
              res["imageScores"] = scores 
              res["messageTimestamp"] = int(context["messageTimestamp"]/1000000) 
              #Only when an object is recognized, a message is sent to the localhub 
              if len(objects) != 0: 
                  res["imageLocation"] = location.format(context["messageTimestamp"]) 
                  res["publishTopic"] = "video/infer/result" 
          
              return res 
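
          To see how the handler interprets the inference output, the sketch below (a hypothetical local test, not part of the demo package) feeds it one synthetic detection row. Each detection occupies 7 float32 values in the order [_, class_id, score, left, top, right, bottom]; the first value is ignored by the handler.

          import numpy as np
          from analyse import handler  # assumes analyse.py is in the current directory

          # One synthetic detection row; class 87 is 'scissors' in mscoco_label_map.
          detection = np.array([[0, 87, 0.98, 0.42, 0.29, 0.70, 0.79]], dtype=np.float32)
          result = handler(detection.tobytes(), {"messageTimestamp": 1575527521095503816})
          print(result)
          # Expected: imageDiscard is False, imageObjects contains one 'scissors' entry,
          # and publishTopic is 'video/infer/result'.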


          Module 4: Video-infer

          The following storage volumes have to be bound to the video-infer module:

          | Storage volume | Storage volume template | Version | Host directory | Container directory | Read only | Storage volume configuration file |
          | --- | --- | --- | --- | --- | --- | --- |
          | video-infer-conf | baetyl module configuration | V4 | var/db/baetyl/video-infer-conf/V4 | etc/baetyl | true | service.yml |
          | infer-person-model | Custom storage volume | V1 | var/db/baetyl/infer-person-model/V1 | var/db/baetyl/model | true | frozen_inference_graph.pb, ssd_mobilenet_v1_coco_2017_11_17.pbtxt |
          | mt-image-data | Storage volume with empty directory | V1 | var/db/baetyl/mt-image-data | var/db/baetyl/image | false | None |
          | demo-log | Storage volume with empty directory | V1 | var/db/baetyl/demo-log | var/log/baetyl | false | None |
          • The video-infer-conf storage volume has a corresponding configuration file service.yml. For configuration details, please see baetyl_video-infer-demo/var/db/baetyl/video-infer-conf/V4/service.yml.
          • The model files frozen_inference_graph.pb and ssd_mobilenet_v1_coco_2017_11_17.pbtxt have to be packaged into a zip archive and uploaded to the custom storage volume infer-person-model (see the packaging sketch after this list). After the upload, the system automatically decompresses the archive, as shown in the figure below:

            model-file-zip.png

          • If a Raspberry Pi camera module or a USB camera is used, the video-infer module also has to be mapped to the external device /dev/video0, and the configuration is shown in the figure below:

            image.png
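
            The zip archive mentioned above can be created with a few lines of Python. This is only a minimal sketch; the archive name is arbitrary and the two model files are assumed to be in the current directory:

            import zipfile

            # Bundle the model graph and its descriptor for upload to the
            # infer-person-model custom storage volume.
            with zipfile.ZipFile("ssd_mobilenet_v1_coco_2017_11_17.zip", "w", zipfile.ZIP_DEFLATED) as zf:
                zf.write("frozen_inference_graph.pb")
                zf.write("ssd_mobilenet_v1_coco_2017_11_17.pbtxt")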

          video-infer-conf configuration explained

          The configuration of video-infer-conf in this case is as follows. For detailed configuration logic of parameters, please see baetyl-video-infer Configuration.

          hub: 
            address: 'tcp://localhub:1883' 
            username: test 
            password: hahaha 
            cleansession: true 
          video: 
            uri: '0' 
            # uri: 'rtsp://admin:admin@192.168.1.2:554/Streaming/channels/1/' 
            limit: 
              fps: 1 
          infer: 
            model: var/db/baetyl/model/frozen_inference_graph.pb 
            config: var/db/baetyl/model/ssd_mobilenet_v1_coco_2017_11_17.pbtxt 
          process: 
            before: 
              swaprb: true 
              width: 300 
              hight: 300 
            after: 
              function: 
                name: analyse 
          functions: 
            - name: analyse 
              address: 'function-manager:50051' 
          logger: 
            path: var/log/baetyl/infer-service.log 
            level: debug 
          • If a Raspberry Pi camera module or a USB camera is used, uri is configured as the device number. If /dev/video0 corresponds to the camera, uri is configured as 0, and the device address /dev/video0 has to be mapped into the container.
          • If an IP camera is used, uri is configured as the RTSP access address of the IP camera. If that address is rtsp://admin:admin@192.168.1.2:554/Streaming/channels/1/, uri is configured as this address.
          • To run frame-extraction analysis on an existing video file, uri is configured as the path of the video file, and the video file is mounted into the video-infer module as a custom storage volume.
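
          To make the infer and process.before settings concrete, the sketch below loads the same model files with OpenCV's DNN module, resizes one frame to 300x300 with the R and B channels swapped, and prints detections in the 7-values-per-row layout that analyse.py consumes. This is only a hypothetical Python approximation for illustration; the actual video-infer module is not implemented this way.

          import cv2

          # Load the frozen TensorFlow graph and its graph descriptor.
          net = cv2.dnn.readNetFromTensorflow(
              "frozen_inference_graph.pb",
              "ssd_mobilenet_v1_coco_2017_11_17.pbtxt")

          cap = cv2.VideoCapture(0)  # or the rtsp:// address of an IP camera
          ok, frame = cap.read()
          if ok:
              # process.before: swaprb=true, width=300, height=300
              blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
              net.setInput(blob)
              # Output shape is (1, 1, N, 7); each row is
              # [image_id, class_id, score, left, top, right, bottom].
              detections = net.forward().reshape(-1, 7)
              print(detections[:5])
          cap.release()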

          How to verify an edge AI detection result

          Step 1: Distribute a Configuration

          After all configurations are completed in the cloud, publish the configuration as an official version, distribute it to the edge core device, and check that the modules start normally.

          In the BIE console, check whether the distributed version has taken effect and view the current status of the edge core service modules, as shown in the figure below:

          core-version.png

          Log in to the Raspberry Pi and check the operating status of the service module through the docker ps command, as shown in the figure below:

          dockerps.png

          Step 2: Use MQTTBox to Subscribe to Localhub

          It is mentioned in [Simulation Scenarios](#simulation-scenarios) that "when a target object is detected, the extracted frame image is saved, and a message is sent to the edge hub module synchronously". To monitor the messages sent to the hub module, use the MQTTBox tool to subscribe to the topic video/infer/result in advance, as shown in the figure below:

          localhub.png
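
          If you prefer a scripted check to MQTTBox, the sketch below subscribes to the same topic with the paho-mqtt Python client. It is only an assumption for testing: it presumes the localhub port 1883 is reachable from the machine running it and reuses the test/hahaha credentials from the demo configuration.

          import paho.mqtt.client as mqtt

          def on_message(client, userdata, msg):
              print(msg.topic, msg.payload.decode())

          client = mqtt.Client()
          client.username_pw_set("test", "hahaha")
          client.on_message = on_message
          client.connect("127.0.0.1", 1883)  # replace with the Raspberry Pi's IP address
          client.subscribe("video/infer/result")
          client.loop_forever()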

          Step 3: Use a Camera to Detect Objects

          1. Hold the camera and slowly pan it around, so that it scans the scissors, laptops, books and keyboards on the desk and the people sitting at their workstations.
          2. Watch the message interface of MQTTBox, which subscribes to the hub module in real time. Every time a target object is detected, MQTTBox receives a message.

          scissors.png

          MQTT message explained

          Formatting the message received by MQTTBox as JSON gives the following result:

          {
              "imageCaptureTime": "2019-12-05T06:32:01.095503816Z",
              "imageDiscard": false,
              "imageHight": 480,
              "imageInferenceTime": 0.680533552,
              "imageLocation": "var/db/baetyl/image/1575527521095503816.jpg",
              "imageObjects": [{
                  "bottom": 0.7979179620742798,
                  "class": "scissors",
                  "left": 0.4228750169277191,
                  "right": 0.7018369436264038,
                  "score": 0.9887169599533081,
                  "top": 0.29671576619148254
              }],
              "imageProcessTime": 0.691838184,
              "imageScores": {
                  "book": 0.0685628280043602,
                  "cell phone": 0.040239665657281876,
                  "keyboard": 0.03728525713086128,
                  "person": 0.037795186042785645,
                  "scissors": 0.9887169599533081
              },
              "imageWidth": 640,
              "messageTimestamp": 1575527521095,
              "publishTopic": "video/infer/result"
          }

          From the above information, the following conclusions can be drawn:

          • The detected object is a pair of scissors: "class": "scissors"
          • AI infers that the score for scissors is 0.988: "scissors": 0.9887169599533081
          • Picture has been saved: "imageLocation": "var/db/baetyl/image/1575527521095503816.jpg"

          Step 4: Verify the Object in the Saved Picture

          1. Log in to the Raspberry Pi and you can see that many extracted frame images have been saved, as shown in the figure below:

          pic.png

          2. Download 1575527521095503816.jpg to the local computer and confirm that the object in the picture is a pair of scissors, which is consistent with the message received by MQTTBox, as shown in the figure below:

          1575527521095503816.jpg

          baetyl-video-infer Supported Model List

          The models supported by the video-infer module are listed below:

          | Framework | Type | Model |
          | --- | --- | --- |
          | TensorFlow | Object Detection | [faster_rcnn_inception_v2_coco_2018_01_28](https://baetyl.cdn.bcebos.com/ai/models/Tensorflow/Object-Detection/faster_rcnn_inception_v2_coco_2018_01_28/faster_rcnn_inception_v2_coco_2018_01_28.zip) |
          | TensorFlow | Object Detection | [faster_rcnn_resnet50_coco_2018_01_28](https://baetyl.cdn.bcebos.com/ai/models/Tensorflow/Object-Detection/faster_rcnn_resnet50_coco_2018_01_28/faster_rcnn_resnet50_coco_2018_01_28.zip) |
          | TensorFlow | Object Detection | [mask_rcnn_inception_v2_coco_2018_01_28](https://baetyl.cdn.bcebos.com/ai/models/Tensorflow/Object-Detection/mask_rcnn_inception_v2_coco_2018_01_28/mask_rcnn_inception_v2_coco_2018_01_28.zip) |
          | TensorFlow | Object Detection | [ssd_inception_v2_coco_2017_11_17](https://baetyl.cdn.bcebos.com/ai/models/Tensorflow/Object-Detection/ssd_inception_v2_coco_2017_11_17/ssd_inception_v2_coco_2017_11_17.zip) |
          | TensorFlow | Object Detection | [ssd_mobilenet_v1_coco_2017_11_17](https://baetyl.cdn.bcebos.com/ai/models/Tensorflow/Object-Detection/ssd_mobilenet_v1_coco_2017_11_17/ssd_mobilenet_v1_coco_2017_11_17.zip) |
          | TensorFlow | Object Detection | [ssd_mobilenet_v2_coco_2018_03_29](https://baetyl.cdn.bcebos.com/ai/models/Tensorflow/Object-Detection/ssd_mobilenet_v2_coco_2018_03_29/ssd_mobilenet_v2_coco_2018_03_29.zip) |
          | TensorFlow | Object Detection | [ssd_mobilenet_v1_ppn_shared_box_predictor_coco14_sync_2018_07_03](https://baetyl.cdn.bcebos.com/ai/models/Tensorflow/Object-Detection/ssd_mobilenet_v1_ppn_shared_box_predictor_coco14_sync_2018_07_03/ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03.zip) |
          | DarkNet | Object Detection | [yolov3](https://baetyl.cdn.bcebos.com/ai/models/Darknet/YoLo-Object-detection/yolov3.weights) |
          | DarkNet | Object Detection | [yolov3-spp](https://baetyl.cdn.bcebos.com/ai/models/Darknet/YoLo-Object-detection/yolov3-spp.weights) |
          | DarkNet | Object Detection | [yolov3-tiny](https://baetyl.cdn.bcebos.com/ai/models/Darknet/YoLo-Object-detection/yolov3-tiny.weights) |
          | DarkNet | Object Detection | [yolov2](https://baetyl.cdn.bcebos.com/ai/models/Darknet/YoLo-Object-detection/yolov2.weights) |
          | DarkNet | Object Detection | [yolov2-tiny](https://baetyl.cdn.bcebos.com/ai/models/Darknet/YoLo-Object-detection/yolov2-tiny.weights) |
          | DarkNet | ImageNet Classification | [densenet201](https://baetyl.cdn.bcebos.com/ai/models/Darknet/ImageNet-Classification/densenet201.weights) |
          | DarkNet | ImageNet Classification | [darknet](https://baetyl.cdn.bcebos.com/ai/models/Darknet/ImageNet-Classification/darknet.weights) |
          | DarkNet | ImageNet Classification | [extraction](https://baetyl.cdn.bcebos.com/ai/models/Darknet/ImageNet-Classification/extraction.weights) |
          | DarkNet | ImageNet Classification | [darknet19](https://baetyl.cdn.bcebos.com/ai/models/Darknet/ImageNet-Classification/darknet19.weights) |
          | DarkNet | ImageNet Classification | [darknet19_448](https://baetyl.cdn.bcebos.com/ai/models/Darknet/ImageNet-Classification/darknet19_448.weights) |
          | DarkNet | ImageNet Classification | [darknet53](https://baetyl.cdn.bcebos.com/ai/models/Darknet/ImageNet-Classification/darknet53.weights) |
          | DarkNet | ImageNet Classification | [darknet53_448](https://baetyl.cdn.bcebos.com/ai/models/Darknet/ImageNet-Classification/darknet53_448.weights) |

          Currently, video-infer mainly supports models from two frameworks: TensorFlow and DarkNet. To run models from other frameworks on the edge, such as PaddlePaddle and Caffe, please see BIE and EasyEdge Integration. Through EasyEdge, models from any framework can be converted into model services that run on the edge, realizing Model as a Service.
