Configuration-Level Resources
Akri supports creating a Kubernetes resource (i.e. device plugin) for each individual device. Since each device in Akri is represented as an Instance custom resource, these are called Instance-level resources. Instance-level resources are named in the format <configuration-name>-<instance-id>. Akri also creates a Kubernetes device plugin for each Configuration, called a Configuration-level resource. A Configuration-level resource represents all of the devices discovered via a Configuration. With Configuration-level resources, instead of needing to know the specific Instances to request, resources can be requested by Configuration name, and the Agent does the work of selecting which Instances to reserve. The Deployment shown in the Deployment Strategies section below requests a resource at the Configuration level and deploys an nginx broker to each discovered device.
With Configuration-level resources, users can use higher-level Kubernetes objects (Deployments, ReplicaSets, DaemonSets, etc.) or develop their own deployment strategies, rather than relying on the Akri Controller to deploy Pods to discovered devices. A container can request a device through either resource type, as sketched below.
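A minimal sketch of the two request styles, assuming an ONVIF Configuration named akri-onvif and a discovered Instance with the id suffix 8120fe (both names are illustrative):

```yaml
# Fragment of a Pod or Deployment container spec.
containers:
  - name: camera-broker
    image: nginx
    resources:
      limits:
        akri.sh/akri-onvif: "1"           # Configuration-level: the Agent picks an Instance
        # akri.sh/akri-onvif-8120fe: "1"  # Instance-level: a specific device is named
```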
Maintaining Device Usage
The Instance.deviceUsage map in Akri Instances is extended to support the Configuration device plugin. The Instance.deviceUsage may look like this:
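A sketch of the map, assuming a Configuration named my-resource with five slots, one of which is reserved by the Instance device plugin on node-a (the slot and node names are illustrative and match the example discussed below):

```yaml
deviceUsage:
  my-resource-00095f-0: ""
  my-resource-00095f-1: ""
  my-resource-00095f-2: ""
  my-resource-00095f-3: "node-a"
  my-resource-00095f-4: ""
```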
where an empty string means the slot is free and a non-empty string indicates the slot is used (by that node). To support the Configuration device plugin, the Instance.deviceUsage format is extended to hold additional information: an entry can be "<node_name>" (for an Instance device plugin) or "C:<virtual_device_id>:<node_name>" (for a Configuration device plugin). For example, the Instance.deviceUsage below shows that the slot my-resource-00095f-2 is used by virtual device id "0" of the Configuration device plugin on node-b, the slot my-resource-00095f-3 is used by the Instance device plugin on node-a, and the other 3 slots are free.
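A sketch of that extended map, under the same assumed slot names as above:

```yaml
deviceUsage:
  my-resource-00095f-0: ""
  my-resource-00095f-1: ""
  my-resource-00095f-2: "C:0:node-b"
  my-resource-00095f-3: "node-a"
  my-resource-00095f-4: ""
```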
Deployment Strategies with Configuration-level resources
For example, with Configuration-level resources, the following Deployment could be applied to a cluster:
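A sketch of such a Deployment, assuming an ONVIF Configuration named akri-onvif whose Configuration-level resource is advertised as akri.sh/akri-onvif (the Configuration, label, and image names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: onvif-camera-broker
  labels:
    app: onvif-camera-broker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: onvif-camera-broker
  template:
    metadata:
      labels:
        app: onvif-camera-broker
    spec:
      containers:
        - name: onvif-camera-broker
          image: nginx
          resources:
            limits:
              akri.sh/akri-onvif: "1"  # one camera from the akri-onvif Configuration per Pod
```

Each replica requests one akri.sh/akri-onvif device, so the Pods are scheduled to whichever Nodes have a free camera available.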
Pods will only be successfully scheduled to a Node and run if the resources exist and are available. In the above scenario, if there were two cameras on the network, two Pods would be deployed to the cluster. If there are not enough resources, say there is only one camera on the network, one of the two Pods will be left in a Pending state until another camera is discovered. This is the case with any Kubernetes deployment where there are not enough resources; however, Pending Pods do not use up cluster resources.