
Deploy Azure IoT Edge Modules with Kubernetes Semantics (Virtual Kubelet)

Since you're reading this post, you're probably already familiar with Kubernetes.

In Azure, there are the following two projects to integrate Azure IoT Edge with Kubernetes. These projects have different architectures and different goals :

  • Azure IoT Edge Connector for Kubernetes
    This project provides a Virtual Kubelet provider for Azure IoT Edge.
    As you will see later in this post, this Virtual Kubelet provider lets you deploy (and update) and configure Edge modules in standard Kubernetes manners.
    For instance, you can reuse an existing .yaml file and deploy the same configuration to a group of multiple devices on IoT Hubs.
  • Azure IoT Edge on Kubernetes
    This project runs IoT Edge components, such as modules, networks, volumes, and so forth, on the native Kubernetes API.
    By creating Custom Resource Definitions (CRDs) on the Kubernetes API server, an IoT Edge agent (an Operator in Kubernetes terms) running on each device automatically translates the IoT Edge application model (modules, createOptions settings, and so forth) into Kubernetes native objects (pods, deployments, services, and so forth).
    Configurations for modules, networks, volumes, and so forth on IoT Edge devices are maintained and managed by the native Kubernetes scheduler, so you can take advantage of Kubernetes features such as failure recovery (availability), flexible resource assignment, and so on.

In this post and the next, I'll show you what each of these two projects is and what it addresses.
This time, we look at the former, "Azure IoT Edge Connector for Kubernetes" (iot-edge-virtual-kubelet-provider).

Updated : See my next post “Manage Azure IoT Edge resources with native Kubernetes API” for “Azure IoT Edge on Kubernetes” project.

With the Azure IoT Edge provider for Virtual Kubelet, you can manage a variety of modules on multiple IoT devices using your familiar Kubernetes tools, utilities, and reusable configurations (yamls).

What is a Virtual Kubelet Provider ?

With a virtual kubelet, you can create a virtual node in a Kubernetes cluster.
A regular kubelet runs on each node to keep containers running. By contrast, a virtual kubelet is backed by a virtual server or 3rd-party service, such as ACI (Azure Container Instances), AWS Fargate, Alibaba Cloud ECI, and so on.

With the IoT Edge provider for virtual kubelet, an IoT Hub resource (in the cloud) is translated into a virtual node, and a deployment config for IoT Edge modules is translated into a pod in Kubernetes. (This is different from the translation in the "Azure IoT Edge on Kubernetes" project.)
Thus you can configure IoT Edge modules with standard Kubernetes configurations.

Prepare Your Hub and Devices

In the example in this post, we use one IoT Hub and three Edge-enabled devices, in which two devices have a tag testtag='A' and one device has a tag testtag='B'.
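Later in this post, a deployment will target devices by these tags through a target condition such as tags.testtag='A'. The following is a rough, hypothetical Python sketch of that selection logic (not IoT Hub's actual implementation), just to show how the tags above partition our three devices :

```python
# Hypothetical sketch: how a target condition like "tags.testtag='A'"
# selects devices by their device twin tags. (Toy model, not IoT Hub code.)
import re

def match_target_condition(condition, devices):
    # Parse a simple equality condition of the form tags.<name>='<value>'
    m = re.fullmatch(r"tags\.(\w+)='([^']*)'", condition)
    if not m:
        raise ValueError("unsupported condition: " + condition)
    tag, value = m.group(1), m.group(2)
    return [d["deviceId"] for d in devices if d["tags"].get(tag) == value]

devices = [
    {"deviceId": "device01", "tags": {"testtag": "A"}},
    {"deviceId": "device02", "tags": {"testtag": "B"}},
    {"deviceId": "device03", "tags": {"testtag": "A"}},
]

print(match_target_condition("tags.testtag='A'", devices))
# → ['device01', 'device03']
```

With the tags above, tags.testtag='A' selects device01 and device03, and tags.testtag='B' selects device02 only.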

Now let’s start to provision these resources.

1. Prepare an IoT Hub

First, create an IoT Hub resource on Azure Portal. (See below.)

After the hub is provisioned, click the "IoT Edge" menu and add 3 devices (device01, device02, and device03) in this hub. (Click the "Add an IoT Edge device" button to register a device.)
These registered devices are not yet connected to real devices.

For each registered device, click "Device Twin" and set the corresponding tags as follows.

After you've completed these settings, please copy the device connection string for each device (device01, device02, and device03).
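A device connection string is a set of semicolon-separated key=value pairs. The following sketch (with a dummy connection string, not a real key) shows the parts you copied :

```python
# Sketch: split an IoT Hub device connection string into its parts.
# The connection string below is a dummy example, not a real credential.
def parse_connection_string(cs):
    parts = {}
    for pair in cs.split(";"):
        # partition splits only on the first "=", so base64 "=" padding
        # in the key value is preserved
        key, _, value = pair.partition("=")
        parts[key] = value
    return parts

cs = "HostName=myhub.azure-devices.net;DeviceId=device01;SharedAccessKey=bXlrZXk="
parsed = parse_connection_string(cs)
print(parsed["DeviceId"])   # → device01
```

The HostName identifies the hub, the DeviceId identifies the registered device, and the SharedAccessKey authenticates it.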

2. Prepare Devices

Now prepare 3 physical devices and connect these devices to the previous hub.
Instead of using real (physical) devices, here we use Azure VMs (virtual machines) for development purposes.
To run an Azure VM as an IoT Edge device, create an Ubuntu VM on Azure and install the IoT Edge runtime, or deploy this template on Azure instead.

After you've created 3 VMs (device01, device02, and device03) with the IoT Edge runtime, please set the device connection string in /etc/iotedge/config.yaml (see below) on each device.

...
# Manual provisioning configuration
provisioning:
  source: "manual"
  device_connection_string: "{ADD DEVICE CONNECTION STRING HERE}"
...
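If you are scripting the setup of several VMs, you can fill in the placeholder programmatically. A minimal sketch, operating on an in-memory sample instead of the real /etc/iotedge/config.yaml (the placeholder text is from the file above; the connection string here is a dummy) :

```python
# Sketch: substitute the device connection string placeholder in a
# config.yaml fragment. In a real script you would read and write
# /etc/iotedge/config.yaml on each device (with root privileges).
sample = '''provisioning:
  source: "manual"
  device_connection_string: "{ADD DEVICE CONNECTION STRING HERE}"
'''

def set_connection_string(config_text, cs):
    return config_text.replace("{ADD DEVICE CONNECTION STRING HERE}", cs)

print(set_connection_string(sample, "HostName=myhub.azure-devices.net;DeviceId=device01;SharedAccessKey=..."))
```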

After you've completed this, restart the IoT Edge runtime on each device.

sudo systemctl restart iotedge

Finally, make sure the security daemon is running successfully on each device.

sudo systemctl status iotedge

Connect a Cluster with Virtual Kubelet Provider

In regular tasks with Azure IoT, we deploy modules using a deployment manifest (deployment.json). (See my earlier post "Building Azure IoT Edge Module with Message Routing".)
However, once your hub and devices are prepared, you can use your familiar Kubernetes manners and reuse (bring) existing Kubernetes configurations in IoT Edge.

First, we must provision a Kubernetes cluster to manage our Edge deployments.
In this post, we use a managed Kubernetes cluster, AKS (Azure Kubernetes Service).

Note : The IoT Edge Virtual Kubelet provider doesn't depend on AKS, so you can also use other Kubernetes clusters, including local development environments such as Minikube, k3d, and so on.

Now, create an AKS resource in the Azure Portal.
As you can see below, here we use only 1 node for the cluster, since we don't use these nodes for deployments. (This node is just used for Kubernetes system objects.)

 

After the AKS cluster is generated, open Azure Cloud Shell in a web browser and run the following CLI command to enable the kubectl command in this shell.

az aks get-credentials \
  --resource-group {AKS Resource Group Name} \
  --name {AKS Name} \
  --subscription {Subscription ID}

When you run the following command, you will find a single node, which was provisioned by the previous AKS installation.

kubectl get node
NAME                       STATUS   ROLES   AGE   VERSION
aks-agentpool-15488478-0   Ready    agent   23m   v1.15.10

Now let's start to install the IoT Edge provider in your AKS cluster.
First, run the following commands to download the IoT Edge provider from the GitHub repo.

git clone https://github.com/Azure/iot-edge-virtual-kubelet-provider
cd iot-edge-virtual-kubelet-provider

You must grant the IoT Edge provider access to your Edge-enabled devices through your IoT Hub.
For this setting, copy a connection string of your IoT Hub using the Azure Portal. (Click the "Shared access policies" menu and then "iothubowner" in the IoT Hub's blade.)

As you can see in /src/charts/iot-edge-connector/values.yaml, the provider image (microsoft/iot-edge-vk-provider) uses a secret key "hub0-cs" in the secret "my-secrets" to reference the connection string for an IoT Hub.
Thus we should set the previously copied connection string in the following secret on the Kubernetes cluster.

kubectl create secret generic my-secrets \
  --from-literal=hub0-cs='{iothubowner-connection-string}'

Note : If you have multiple hubs, you can also set up multiple secrets in a single cluster. (Please change values.yaml in that case.) As you will see later, each hub is identified as a corresponding Kubernetes "node".

Now let's install the IoT Edge connector in our AKS cluster. (Here we use Helm 3, which is pre-installed in Azure Cloud Shell.)

helm install hub0 src/charts/iot-edge-connector

Note : If RBAC is disabled in your cluster (it is enabled by default), please run the command with the following option.
helm install hub0 src/charts/iot-edge-connector --set rbac.install=false

After a few minutes, you will find a node "iot-edge-connector-hub0" as follows.
In the IoT Edge virtual kubelet provider, all hub-connected devices are abstracted by this virtual node. When you want to deploy containers on IoT Edge devices, you create a pod on this node in Kubernetes semantics.

kubectl get node
NAME                       STATUS   ROLES   AGE   VERSION
aks-agentpool-15488478-0   Ready    agent   38h   v1.15.10
iot-edge-connector-hub0    Ready    agent   15m   v1.13.1-vk-v0.9.0-1-g7b92d1ee-dev
kubectl describe node iot-edge-connector-hub0
Name:               iot-edge-connector-hub0
Roles:              agent
Labels:             alpha.service-controller.kubernetes.io/exclude-balancer=true
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=iot-edge-connector-hub0
                    kubernetes.io/os=linux
                    kubernetes.io/role=agent
                    type=virtual-kubelet
Annotations:        node.alpha.kubernetes.io/ttl: 0
CreationTimestamp:  Mon, 06 Apr 2020 21:55:50 +0000
Taints:             node.kubernetes.io/network-unavailable:NoSchedule
                    virtual-kubelet.io/provider=iotedge:NoSchedule
Unschedulable:      false
Conditions:
  ...
Addresses:
Capacity:
  cpu:     20
  memory:  100Gi
  pods:    1
Allocatable:
  cpu:     20
  memory:  100Gi
  pods:    1
System Info:
  Machine ID:
  System UUID:
  Boot ID:
  Kernel Version:
  OS Image:
  Operating System:           Linux
  Architecture:               amd64
  Container Runtime Version:
  Kubelet Version:            v1.13.1-vk-v0.9.0-1-g7b92d1ee-dev
  Kube-Proxy Version:
PodCIDR:                      10.244.1.0/24
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:  ...

Note : Please see the health information (events or logs) for details, and check that there are no installation errors.

# Check if the pod's status (Edge connector's pod) is RUNNING
kubectl get pods
# Get detailed event logs, if it's not correctly running
# (Replace the pod's name with yours)
kubectl describe pod hub0-iot-edge-connector-6c9d649995-lf8sb
# Clean up (remove) the installation, if needed
helm uninstall hub0

Deploy with Standard Kubernetes APIs !

Now you can deploy and configure IoT Edge modules (containers) with standard Kubernetes configurations.

In this post, we deploy the simple modules from my earlier post "Building Azure IoT Edge Module with Message Routing" on both device01 and device03 (these devices have the tag testtag='A').
In this example, we have 2 modules (containers), module01 and module02, which are connected to each other by message routing. (See "Building Azure IoT Edge Module with Message Routing" for details.) When you post a message to module01 using port 8081, the message is passed to module02 and written to the logs on module02.
I've already published the container images for module01 and module02 in my Docker Hub repo ("tsmatz/iot-module-sample01:1.0.0" and "tsmatz/iot-module-sample02:1.0.0" respectively), so you don't need to build these containers. (You can also publish containers to your own repository in Azure Container Registry and use them.)
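The routing between the two modules is expressed as an edgeHub route string. As a toy illustration (not the actual edgeHub parser), the following sketch pulls out the source output and sink input that the route wires together :

```python
# Sketch: parse an edgeHub route of the form
#   FROM <source> INTO BrokeredEndpoint("<sink>")
# into its source output and sink input. (Toy parser, not edgeHub code.)
import re

def parse_route(route):
    m = re.fullmatch(r'FROM (\S+) INTO BrokeredEndpoint\("(\S+)"\)', route)
    if not m:
        raise ValueError("unsupported route: " + route)
    return {"source": m.group(1), "sink": m.group(2)}

route = ('FROM /messages/modules/module01/outputs/output1 '
         'INTO BrokeredEndpoint("/modules/module02/inputs/input1")')
print(parse_route(route))
# → {'source': '/messages/modules/module01/outputs/output1',
#    'sink': '/modules/module02/inputs/input1'}
```

This is the same route string you will see in the edgehub ConfigMap below: everything module01 emits on output1 is delivered to module02's input1.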

Now we apply the following Kubernetes yaml configuration (sample01.yaml).
Please compare it with the original deployment.json in Azure IoT.

sample01.yaml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sampledeploy01
spec:
  selector:
    matchLabels:
      app: sampleapp01
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0%
      maxUnavailable: 100%
  template:
    metadata:
      labels:
        app: sampleapp01
      annotations:
        isEdgeDeployment: "true"
        targetCondition: "tags.testtag='A'"
        priority: "15"
        loggingOptions: ""
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sampleapp01
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: module01
        image: "tsmatz/iot-module-sample01:1.0.0"
      - name: module02
        image: "tsmatz/iot-module-sample02:1.0.0"
      nodeSelector:
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Equal
        value: iotedge
        effect: NoSchedule
      - key: node.kubernetes.io/network-unavailable
        operator: Exists
        effect: NoSchedule
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: edgehub
data:
  desiredProperties: |
    {
      "routes": {
        "route": "FROM /messages/modules/module01/outputs/output1 INTO BrokeredEndpoint(\"/modules/module02/inputs/input1\")"
      },
      "storeAndForwardConfiguration": {
        "timeToLiveSecs": 7200
      }
    }
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: module01
data:
  status: running
  restartPolicy: on-unhealthy
  version: "1.0"
  createOptions: |
    {
      "HostConfig": {
        "PortBindings": {
          "8081/tcp": [
            {
              "HostPort": "8081"
            }
          ]
        }
      }
    }
---

This configuration in sample01.yaml sets up the following :

  • containers (in spec > template > spec) :
    Here we deploy the previous 2 modules (module01 and module02) from Docker Hub.

    containers:
    - name: module01
      image: "tsmatz/iot-module-sample01:1.0.0"
    - name: module02
      image: "tsmatz/iot-module-sample02:1.0.0"
  • nodeSelector (in spec > template > spec) :
    The previous virtual kubelet node has the label "type=virtual-kubelet". (See the result of the above command, "kubectl describe node iot-edge-connector-hub0".)
    With this nodeSelector setting, this deployment is scheduled only on nodes which have the label "type=virtual-kubelet".

    nodeSelector:
      type: virtual-kubelet
  • tolerations (in spec > template > spec) :
    In order to block the application of non-IoT-Edge configurations, the IoT Edge connector node has "virtual-kubelet.io/provider=iotedge:NoSchedule" in its Taints section. (See the result of the above command, "kubectl describe node iot-edge-connector-hub0".)
    Thus, we allow deployment to this node by specifying these tolerations settings.
    (I've also added "node.kubernetes.io/network-unavailable" in order to ignore an internal route creation failure by the RouteController. Alternatively, remove this taint in the node definition.)

    tolerations:
    - key: virtual-kubelet.io/provider
      operator: Equal
      value: iotedge
      effect: NoSchedule
    - key: node.kubernetes.io/network-unavailable
      operator: Exists
      effect: NoSchedule
  • targetCondition (in spec > template > metadata > annotations) :
    Here we target only 2 devices, device01 and device03, by setting the condition tags.testtag='A'.

    targetCondition: "tags.testtag='A'"
  • podAntiAffinity (in spec > template > spec) :
    This setting blocks duplicate installation of the same configuration. (If a pod with the same configuration already exists, processing of the new pod configuration will be blocked.)

    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - sampleapp01
        topologyKey: "kubernetes.io/hostname"
  • ConfigMap :
    Please set Azure IoT Edge specific settings, such as message routing, using ConfigMaps.
    I don't explain the meaning of these settings here; see "Building Azure IoT Edge Module with Message Routing" for details.

    • name: edgehub
      In this setting, we define the message routing setting.

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: edgehub
      data:
        desiredProperties: |
          {
            "routes": {
              "route": "FROM /messages/modules/module01/outputs/output1 INTO BrokeredEndpoint(\"/modules/module02/inputs/input1\")"
            },
            "storeAndForwardConfiguration": {
              "timeToLiveSecs": 7200
            }
          }
    • name: module01
      In this setting, we define the createOptions settings (such as port configurations, module-specific parameters, and so forth) for module01. (There are no createOptions settings for module02.)

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: module01
      data:
        status: running
        restartPolicy: on-unhealthy
        version: "1.0"
        createOptions: |
          {
            "HostConfig": {
              "PortBindings": {
                "8081/tcp": [
                  {
                    "HostPort": "8081"
                  }
                ]
              }
            }
          }
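The interplay of nodeSelector and tolerations above can be summarized in a toy model: a pod lands on a node only if every requested label is present and every NoSchedule taint on the node is tolerated. This is a deliberately simplified Python sketch of those two scheduler checks, not the real Kubernetes scheduler :

```python
# Toy model of two scheduling constraints used in sample01.yaml:
# nodeSelector (label matching) and tolerations (taint matching).
def schedulable(node, pod):
    # nodeSelector: every requested label must be present on the node
    if any(node["labels"].get(k) != v for k, v in pod["nodeSelector"].items()):
        return False
    # taints: every NoSchedule taint must be matched by some toleration
    for taint in node["taints"]:
        if taint not in pod["tolerations"]:
            return False
    return True

virtual_node = {
    "labels": {"type": "virtual-kubelet"},
    "taints": ["node.kubernetes.io/network-unavailable:NoSchedule",
               "virtual-kubelet.io/provider=iotedge:NoSchedule"],
}
regular_node = {"labels": {"kubernetes.io/os": "linux"}, "taints": []}

pod = {
    "nodeSelector": {"type": "virtual-kubelet"},
    "tolerations": ["virtual-kubelet.io/provider=iotedge:NoSchedule",
                    "node.kubernetes.io/network-unavailable:NoSchedule"],
}

print(schedulable(virtual_node, pod))   # → True
print(schedulable(regular_node, pod))   # → False
```

In other words, the nodeSelector keeps this deployment off the regular AKS node, while the tolerations allow it onto the tainted virtual node.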

Now, apply this configuration (.yaml) to your cluster with the following command !

kubectl apply -f sample01.yaml

Note : Please see the health information (events or logs), and check that there are no deployment errors.

# Check if the pod's status is "RUNNING"
kubectl get pods
# See detailed event logs, if it's not correctly running
# (Replace the pod's name with yours)
kubectl describe pod sampledeploy01-678d5668d6-4qjsr
# Clean up (remove) the deployment, if needed
kubectl delete deployment sampledeploy01
kubectl delete configmap module01
kubectl delete configmap edgehub
# Force remove a failed pod, if it cannot be removed
# (Otherwise the next deployment will be rejected by the podAntiAffinity policy)
kubectl get pods
kubectl delete pod sampledeploy01-678d5668d6-4qjsr \
  --force \
  --grace-period=0

As you can see below, this deployment to 2 devices is recognized as a single virtual pod, sampledeploy01-678d5668d6-4qjsr.

kubectl get pods
NAME                                       READY   STATUS
hub0-iot-edge-connector-6c9d649995-lf8sb   2/2     Running
sampledeploy01-678d5668d6-4qjsr            0/2     Running

Open the IoT Hub resource in the Azure Portal and see the status of the IoT Edge devices (device01, device02, and device03).
You will find that there are 4 modules on device01 and device03 (these devices have the tag testtag='A').

When you click your device (device01 or device03), you will find that the following 4 modules are installed and running correctly. ($edgeHub and $edgeAgent below are the system modules in IoT Edge.)

In this post, we're using Azure VMs as IoT Edge devices (device01, device02, and device03).
Now please log in to device01 with an SSH tunnel (port forwarding) for port 8081. (Or allow inbound port 8081 in the firewall settings.)

Open http://localhost:8081/{some arbitrary string} in your browser as follows.

When you see module02's logs on device01 (the virtual machine), you will find that the string (posted to module01) has been passed to module02. (See "Building Azure IoT Edge Module with Message Routing" for the source code of this module.)

iotedge logs module02
Received - b'Hello World'!

Next, let's try to deploy a Text Analytics container from Azure AI Services on device02 (this device has the tag testtag='B').
We prepare the following yaml file and deploy it to device02 with kubectl as follows. (Please replace "{enter-your-EndpointURL}" and "{enter-your-ApiKey}" with your licensed endpoint and API key. See here for details.)

sample02.yaml

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sampledeploy02
spec:
  selector:
    matchLabels:
      app: sampleapp02
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0%
      maxUnavailable: 100%
  template:
    metadata:
      labels:
        app: sampleapp02
      annotations:
        isEdgeDeployment: "true"
        targetCondition: "tags.testtag='B'"
        priority: "15"
        loggingOptions: ""
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sampleapp02
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: keyphrase
        image: "mcr.microsoft.com/azure-cognitive-services/keyphrase:latest"
      nodeSelector:
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Equal
        value: iotedge
        effect: NoSchedule
      - key: node.kubernetes.io/network-unavailable
        operator: Exists
        effect: NoSchedule
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: keyphrase
data:
  status: running
  restartPolicy: on-unhealthy
  version: "1.0"
  createOptions: |
    {
      "Cmd": [
        "Eula=accept",
        "Billing={enter-your-EndpointURL}",
        "ApiKey={enter-your-ApiKey}"
      ],
      "HostConfig": {
        "PortBindings": {
          "5000/tcp": [
            {
              "HostPort": "5000"
            }
          ]
        }
      }
    }
---
kubectl apply -f sample02.yaml

This time, you will find a new pod named “sampledeploy02-xxxxx” on your cluster.

kubectl get pods
NAME                                       READY   STATUS
hub0-iot-edge-connector-6c9d649995-lf8sb   2/2     Running
sampledeploy01-678d5668d6-4qjsr            0/2     Running
sampledeploy02-57bc476799-d89jf            0/1     Running

Go to the Azure Portal and make sure that the keyphrase module is running correctly on device02.

Please log in to device02 with an SSH tunnel (port forwarding) for port 5000 and open http://localhost:5000/swagger in your web browser.
There you can see the full documentation for the Text Analytics key phrase endpoints from the swagger definition.

For instance, when you post the following HTTP request to extract key phrases, you will get the following result from the Text Analytics container running on device02.

POST http://localhost:5000/text/analytics/v2.1/keyPhrases
Content-Type: application/json

{
  "documents": [
    {
      "language": "en",
      "id": "1",
      "text": "This is a pen."
    }
  ]
}

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{"documents":[{"id":"1","keyPhrases":["pen"]}],"errors":[]}
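You could script this call as well. The following sketch builds the same request body and parses the sample response shown above (instead of actually calling the container, which requires the device to be running with port 5000 forwarded) :

```python
# Sketch: the request body posted to the local key phrase endpoint,
# and parsing of the sample response from above. We parse the captured
# response text instead of calling http://localhost:5000 here.
import json

request_body = {
    "documents": [
        {"language": "en", "id": "1", "text": "This is a pen."}
    ]
}

response_body = '{"documents":[{"id":"1","keyPhrases":["pen"]}],"errors":[]}'
result = json.loads(response_body)
# Map each document id to its extracted key phrases
phrases = {d["id"]: d["keyPhrases"] for d in result["documents"]}
print(phrases)   # → {'1': ['pen']}
```

To run this against the real container, you would POST request_body as JSON to http://localhost:5000/text/analytics/v2.1/keyPhrases through the SSH tunnel.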

You can also update the structure of modules (containers) and routes with your familiar Kubernetes manners.

 

As you saw in this post, the IoT Edge Connector by virtual kubelet focuses on deployments and configurations, by mapping the hub-connected devices to a single node and a deployment to a single pod (or Kubernetes deployment). As you know, this doesn't reflect the entire physical structure of devices, containers, networks, volumes, and other resources running on IoT Edge; it just translates Kubernetes deployments into Azure IoT deployments.
By contrast, the challenge for the "IoT Edge on Kubernetes" project (now in preview) is to close this gap, so that any Edge resource can be managed and maintained by native Kubernetes without the IoT SDK.
Let's see the "IoT Edge on Kubernetes" project (implemented by CRDs) in the next post.
(Updated : See my next post “Manage Azure IoT Edge resources with native Kubernetes API” for “Azure IoT Edge on Kubernetes” project.)

 
