
          Intelligent Edge

          Operation Guide

          This tutorial explains how BIE integrates with EasyEdge.

          Get the Example Model MobileNet-SSD

          The original model can be a model trained by the user or a ready-made model. This tutorial uses MobileNet-SSD, an open-source Caffe-based deep learning model for object detection hosted on GitHub. The model label file label_list.txt is missing from that project, and its download link is provided later in this tutorial. The test images used to verify the model also come from the GitHub project.

          Upload a Model in EasyEdge

          • Open the EasyEdge official website and click Use now to log into the EasyEdge console. The login account used here is a Baidu AI Cloud account, which is the same account used for BIE.

          image.png

          • Enter EasyEdge and click Upload a model.

          raw-model-list.png

          • On the Upload an original model interface, enter the original model information:

            • Model name: Custom, MobileNet-SSD-Caffe is entered here
            • Model type: Object detection
            • Model framework: Caffe
            • Model network: MobileNetV1-SSD
            • Network structure: Select the file with the suffix .prototxt in the model directory, such as deploy.prototxt. Click here to download.
            • Network parameter: Select the file with the suffix .caffemodel in the model directory, such as mobilenet_iter_73000.caffemodel. Click here to download.
            • Model label: Select the file with the suffix .txt in the model directory, such as label_list.txt. Click here to download.
            • Other configurations: A default file is preset
            • Custom grouping: Custom, Caffe is entered here
            • Feature description: Custom, MobileNet SSD Demo is entered here

          upload-raw-model.png

          Generate an End Model

          • Click Verify and submit the model. After the verification passes, enter the Generate an end model menu and select the end environments the model should be adapted to, as shown in the figure below:

          create-edge-model.png

          • Here, models adapted to the two application platforms Linux-arm64 and Linux-amd64 are generated. The relationship between the application platforms and the hardware/software is shown in the following table (a quick way to check which platform a target device needs is sketched at the end of this section):
          Application platform | Chip/hardware    | Operating system
          Linux-arm64          | General ARM chip | Linux
          Linux-amd64          | General x86 chip | Linux
          • Click Generate a model and enter the model generation stage. This process takes a few minutes, depending on the model size.

          create-edge-model-2.png

          • After the model is generated, enter the SDK list. You can see two versions of the MobileNet-SSD-Caffe model, adapted to different application platforms:

            • V1 version: Adapted to the Linux-amd64 platform
            • V2 version: Adapted to the Linux-arm64 platform

          edge-model-list.png
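          Before importing the model into BIE in the next section, it can help to confirm which application platform the target edge device actually needs. Below is a minimal Python sketch; the mapping from machine strings to platforms is the common convention, not something stated by EasyEdge:

          import platform

          # arm64 devices (e.g., a Raspberry Pi with a 64-bit OS) usually report "aarch64" or "arm64";
          # amd64 devices usually report "x86_64" or "amd64".
          machine = platform.machine().lower()
          if machine in ("aarch64", "arm64"):
              print("Use the Linux-arm64 SDK")
          elif machine in ("x86_64", "amd64"):
              print("Use the Linux-amd64 SDK")
          else:
              print("Unrecognized architecture:", machine)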

          Create an EasyEdge Model Storage Volume in BIE

          bie_offical.png

          • Enter the storage volume list page and click Create a storage volume:

            • Storage volume name: Custom, easyedge-model is entered here
            • Storage volume template: Choose EasyEdge template
            • Tags: Custom and optional
            • Description: Custom and optional

          create_volume.png

          • Enter the easyedge-model storage volume, and click Import a model:

            • Application platform: Linux-arm64. This experiment is conducted on a Raspberry Pi running GNU/Linux aarch64.
            • Model name: MobileNet-SSD-Caffe; select the end model generated by EasyEdge earlier.

          import_model.png

          • Click OK to complete the model import. A RES directory appears in the file list, which means that the EasyEdge model has been imported successfully.

          model_file.png

          • Click Release a new version to generate the V1 version of the storage volume; this version will be bound to the model service later.

          release_new_version.png

          Create a Service in the Core

          create_core.png

          • Add a service in the core:

            • Name: Custom, MobileNet-SSD-Caffe is entered here.
            • Source: Official
            • Module: Select easyedge-inference
            • Image address: Drop down to select the latest version, and hub.baidubce.com/baetyl/easyedge-inference:0.4.3 is selected here.
            • Description: Custom, Edge object detection service is entered here.
            • Port mapping: 8088:8088

          create_service_3.png

          • Click OK, and the service is created.
          • Click View to edit the MobileNet-SSD-Caffe service just created and mount the storage volume. You can also mount storage volumes directly when creating a service.

          conf_service.png

          • Enter the Storage volume mounting tab from the pop-up, click Add a storage volume mounting, and set the mounting information:

            • Storage volume: Select easyedge-model.
            • Version: Select V1 version.
            • Container directory: Use the default var/db/baetyl. When the container service starts on the edge core, the storage volume is mounted under the /var/db/baetyl directory inside the container by default. This directory is referenced when the Container startup parameters are set.

          band_volume.png

          • Click OK, and the storage volume is mounted.
          • Switch to the Service tab from the pop-up, open Advanced settings, and configure the Startup parameters of the easyedge-inference service. A total of 5 parameters need to be added in sequence (taken together they form the command sketched just below this list):

            • bash
            • run.sh
            • /var/db/baetyl/RES: The path here is an absolute path (RES is the directory imported into the storage volume earlier). Prepend '/' to the container directory under which the storage volume is mounted; if that container directory is already given as an absolute path, the '/' can be omitted.
            • 0.0.0.0
            • 8088: Here, 8088 is the port on which the object detection service listens inside the container; it has already been mapped outside the container by the 8088:8088 port mapping.
          • Click OK, and the service is configured.
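          Taken together, the five parameters amount to the container executing the following command on startup (a sketch assembled from the values above, assuming run.sh sits in the image's working directory):

          bash run.sh /var/db/baetyl/RES 0.0.0.0 8088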

          Distribute an Application

          • After the service is configured, release the configuration as a new version, here the V2 version, as shown in the figure below.

          new_core_conf_version.png

          • Click Distribute to distribute this version to the edge core.
          • View the operating state of the service. running means the service has started normally; if the service stays in the restarting state, enter the container to view the container log (see the example commands after the screenshot below).

          service_status.png
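          If the edge core runs its services in Docker container mode (an assumption here; it depends on how the core was installed), the container log can usually be inspected with standard Docker commands, for example:

          docker ps | grep easyedge
          docker logs -f <container ID from the previous command>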

          Model Verification

          After the MobileNet-SSD-Caffe service has started normally, verify whether the object detection model is running properly on the edge node.

          Browser Verification

          Open the console of the object detection service through a browser; the access address is http://[IP address of the edge core device]:8088/. If you can see the following interface, the service is running normally.

          service_console.png
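          The same check can be scripted when no browser is available on the verification machine. Below is a minimal sketch using the requests library; replace the IP placeholder with the real device address:

          import requests

          # Replace the placeholder with the IP address of the edge core device.
          resp = requests.get('http://[IP address of the edge core device]:8088/')
          print(resp.status_code)  # 200 means the service console is reachable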

          Next, you can upload sample pictures to verify the model effect. Here are a few sample results:

          000001_result.png

          000542_result.png

          001150_result.png

          001763_result.png

          API Verification

          As a container application running in the edge core, MobileNet-SSD-Caffe also exposes an API externally, so other applications can call it and receive object detection results.

          The API is provided by the EasyEdge SDK. For usage of a specific API, please see the EasyEdge SDK documentation, which applies to both the Windows and Linux versions.

          An example of Python code calling the interface is given below:

          • Copy the following Python code, save it locally, and name it test_easyedge_api.py.
          import requests

          with open('./1.jpg', 'rb') as f:
              img = f.read()

          # params is the GET query string, data is the POST body
          result = requests.post('http://[IP address of the edge core device]:8088/', params={'threshold': 0.1}, data=img).json()

          print(result)
          • Download the picture 1.jpg and save it in the same directory as test_easyedge_api.py.
          • Execute test_easyedge_api.py
          python test_easyedge_api.py 
          • The interface returns the following JSON as the result.
          { 
              "error_code": 0, 
              "cost_ms": 179, 
              "results": [ 
                  { 
                      "index": 8, 
                      "confidence": 0.9999642372131348, 
                      "y2": 0.9531263113021851, 
                      "label": "cat", 
                      "y1": 0.0014175414107739925, 
                      "x2": 0.9970248937606812, 
                      "x1": 0.0014758188044652343
                  } 
              ] 
          } 
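          The x1/y1/x2/y2 values in the response appear to be normalized to the image size (they all fall between 0 and 1). Below is a minimal sketch that turns one result entry into pixel coordinates; the normalization assumption and the use of Pillow to read the image dimensions are both assumptions, not taken from the SDK documentation:

          from PIL import Image  # pip install Pillow

          width, height = Image.open('./1.jpg').size

          # One entry copied from the example response above.
          detection = {"label": "cat", "confidence": 0.9999642372131348,
                       "x1": 0.0014758188044652343, "y1": 0.0014175414107739925,
                       "x2": 0.9970248937606812, "y2": 0.9531263113021851}

          # Assumed: coordinates are fractions of the image width and height.
          box = (int(detection["x1"] * width), int(detection["y1"] * height),
                 int(detection["x2"] * width), int(detection["y2"] * height))
          print(detection["label"], round(detection["confidence"], 4), box)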