Developer Guide

This document will walk you through how to set up a local development environment, build Akri component containers, and test Akri using your newly built containers. It also includes instructions on running Akri locally, naming guidelines, and points to documentation on extending Akri with new Discovery Handlers and brokers.

Note: different tools are needed depending on what parts of Akri you are developing. This document aims to make that clear.

Table of Contents

  • Requirements
  • Build and Test Akri's Components
  • Running Akri's Components Locally
  • Building Akri Containers
  • Installing Akri with newly built containers
  • Useful Helm commands
  • Testing with Debug Echo Discovery Handler
  • Discovery Handler and Broker Development
  • Developing Akri's non-Rust components
  • Naming Guidelines

Requirements

Linux Environment

To develop, you'll need a Linux environment, whether on amd64 or arm64v8. We recommend using an Ubuntu VM; however, WSL2 should work for building and testing (though it has not been extensively tested).

Tools for developing Akri's Rust components

The majority of Akri is written in Rust. To install Rust and Akri's components' dependencies, run Akri's setup script:

./build/setup.sh

If you previously installed Rust, ensure you are using the v1.73.0 toolchain that Akri's build system uses:

sudo curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain=1.73.0
rustup default 1.73.0
cargo version

Build and test Rust components

Fork and clone Akri. Then, navigate to the repo's top folder.

  1. To install Rust and Akri's components' dependencies, run Akri's setup script:

    ./build/setup.sh

    If you previously installed Rust, ensure you are using the v1.73.0 toolchain that Akri's build system uses:

    sudo curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain=1.73.0

    Then, configure your current shell to see Cargo and set v1.73.0 as the default toolchain:

    source $HOME/.cargo/env
    rustup default 1.73.0
    cargo version
  2. Build the Controller, Agent, Discovery Handlers, and udev broker:

    cargo build

    Note: To build a specific component, use the -p parameter along with the workspace member. For example, to only build the Agent, run cargo build -p agent.
  3. To run all unit tests:

    cargo test

    Note: To test a specific component, use the -p parameter along with the workspace member. For example, to only test the Agent, run cargo test -p agent.

Running locally

Before running the Akri Agent or Controller locally, ensure the Akri Configuration and Instance CRDs are applied to the cluster. If they are not, apply them with the following commands:

    kubectl apply -f akri/deployment/helm/crds/akri-configuration-crd.yaml
    kubectl apply -f akri/deployment/helm/crds/akri-instance-crd.yaml
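
To verify the CRDs are registered, you can list them (a quick sanity check; Akri's CRDs live under the akri.sh API group):

    # Expect to see configurations.akri.sh and instances.akri.sh
    kubectl get crds | grep akri.sh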

To locally run Akri's Agent, Controller, and Discovery Handlers as part of a Kubernetes cluster, follow these steps:

  1. Build the repo with all default features by running cargo build.

  2. Create or provide access to a valid cluster configuration by setting KUBECONFIG (this can be done in the command line); for these instructions, the config is assumed to be in $HOME/.kube/config. Reference Akri's cluster setup instructions if needed.

  3. Run the desired component by navigating to the appropriate directory and using cargo run.

    Run the Controller locally with info-level logging, using port 8081 to serve Akri's metrics (for Prometheus integration):

    cd akri/controller
    RUST_LOG=info METRICS_PORT=8081 KUBECONFIG=$HOME/.kube/config cargo run

    METRICS_PORT can be set to any value as it is only used if Prometheus is enabled. Just ensure that the Controller and Agent use different ports if they are both running.

    Run the Agent locally with info-level logging, debug echo enabled for testing, and a metrics port of 8082. The Agent must be run privileged in order to connect to the kubelet. Specify the user path to cargo ($HOME/.cargo/bin/cargo) so you do not have to re-install cargo for the sudo user:

    cd akri/agent
    sudo -E DEBUG_ECHO_INSTANCES_SHARED=true ENABLE_DEBUG_ECHO=1 RUST_LOG=info METRICS_PORT=8082 KUBECONFIG=$HOME/.kube/config DISCOVERY_HANDLERS_DIRECTORY=~/tmp/akri AGENT_NODE_NAME=myNode HOST_CRICTL_PATH=/usr/bin/crictl HOST_RUNTIME_ENDPOINT=/run/containerd/containerd.sock HOST_IMAGE_ENDPOINT=/run/containerd/containerd.sock $HOME/.cargo/bin/cargo run

    Note: DISCOVERY_HANDLERS_DIRECTORY is where the Akri Agent creates a Unix domain socket for Discovery Handler registration. This example uses ~/tmp/akri, which must exist (or be created) before executing this command.

    By default, the Agent does not have embedded Discovery Handlers. To allow embedded Discovery Handlers in the Agent, turn on the agent-full feature and the feature for each Discovery Handler you wish to embed -- Debug echo is always included if agent-full is turned on. For example, to run the Agent with OPC UA, ONVIF, udev, and debug echo Discovery Handlers add the following to the above command: --features "agent-full udev-feat opcua-feat onvif-feat".

    Note: The environment variables HOST_CRICTL_PATH, HOST_RUNTIME_ENDPOINT, and HOST_IMAGE_ENDPOINT are for slot-reconciliation (making sure Pods that no longer exist are not still claiming Akri resources). The values of these vary based on Kubernetes distribution. The above is for vanilla Kubernetes. For MicroK8s, use HOST_CRICTL_PATH=/usr/local/bin/crictl HOST_RUNTIME_ENDPOINT=/var/snap/microk8s/common/run/containerd.sock HOST_IMAGE_ENDPOINT=/var/snap/microk8s/common/run/containerd.sock and for K3s, use HOST_CRICTL_PATH=/usr/local/bin/crictl HOST_RUNTIME_ENDPOINT=/run/k3s/containerd/containerd.sock HOST_IMAGE_ENDPOINT=/run/k3s/containerd/containerd.sock.

    To run Discovery Handlers locally, simply navigate to the Discovery Handler under akri/discovery-handler-modules/ and run it using cargo run, setting the DISCOVERY_HANDLERS_DIRECTORY variable to where the Discovery Handler socket should be created. The Discovery Handlers must be run privileged in order to connect to the Agent. For example, to run the ONVIF Discovery Handler locally:

    cd akri/discovery-handler-modules/onvif-discovery-handler/
    sudo -E RUST_LOG=info DISCOVERY_HANDLERS_DIRECTORY=~/tmp/akri AGENT_NODE_NAME=myNode $HOME/.cargo/bin/cargo run

    To run the debug echo Discovery Handler, an environment variable, DEBUG_ECHO_INSTANCES_SHARED, must be set to specify whether it should register with the Agent as discovering shared or unshared devices. Run the debug echo Discovery Handler to discover mock unshared devices like so:
    cd akri/discovery-handler-modules/debug-echo-discovery-handler/
    sudo -E RUST_LOG=info DEBUG_ECHO_INSTANCES_SHARED=false DISCOVERY_HANDLERS_DIRECTORY=~/tmp/akri AGENT_NODE_NAME=myNode $HOME/.cargo/bin/cargo run
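
    Relatedly, if you want to try the Agent with embedded Discovery Handlers (the agent-full feature described above), the command might look like the following sketch, which simply appends the feature flags to the Agent command shown earlier:

    cd akri/agent
    # Same node name, paths, and endpoints as the Agent command above,
    # plus the agent-full feature and the Discovery Handler features to embed.
    sudo -E DEBUG_ECHO_INSTANCES_SHARED=true ENABLE_DEBUG_ECHO=1 RUST_LOG=info METRICS_PORT=8082 \
        KUBECONFIG=$HOME/.kube/config DISCOVERY_HANDLERS_DIRECTORY=~/tmp/akri AGENT_NODE_NAME=myNode \
        HOST_CRICTL_PATH=/usr/bin/crictl HOST_RUNTIME_ENDPOINT=/run/containerd/containerd.sock \
        HOST_IMAGE_ENDPOINT=/run/containerd/containerd.sock \
        $HOME/.cargo/bin/cargo run --features "agent-full udev-feat opcua-feat onvif-feat"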

Building Containers

A Makefile has been created to help with the more complicated task of building the Akri components and containers for the various supported platforms.

Tools for building Akri's Rust containers

In order to cross-build Akri's Rust code for both ARM and x64 containers, several tools are leveraged.

  • qemu can be installed with:

    sudo apt-get install -y qemu qemu-system-misc qemu-user-static qemu-user binfmt-support

    For qemu to be fully configured on Ubuntu 18.04, after running apt-get install, run these commands:

      sudo mkdir -p /lib/binfmt.d
      sudo sh -c 'echo :qemu-arm:M::\\x7fELF\\x01\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x28\\x00:\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xfe\\xff\\xff\\xff:/usr/bin/qemu-arm-static:F > /lib/binfmt.d/qemu-arm-static.conf'
      sudo sh -c 'echo :qemu-aarch64:M::\\x7fELF\\x02\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\xb7\\x00:\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xfe\\xff\\xff\\xff:/usr/bin/qemu-aarch64-static:F > /lib/binfmt.d/qemu-aarch64-static.conf'
      sudo systemctl restart systemd-binfmt.service
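
    To confirm the qemu interpreters were registered, you can inspect binfmt_misc (a quick sanity check; the entry names match the :qemu-...: names registered above):

      ls /proc/sys/fs/binfmt_misc/ | grep qemu
      # Each entry should report "enabled" and point at the matching qemu-*-static interpreter
      cat /proc/sys/fs/binfmt_misc/qemu-aarch64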

Establish a container repository

Containers for Akri are currently hosted in ghcr.io/project-akri/akri using the GitHub container registry. Any container repository can be used for private containers. If you want to enable GHCR, you can follow the GitHub container registry getting started guide.

To build containers, log into the desired repository:

CONTAINER_REPOSITORY=<repo>
sudo docker login $CONTAINER_REPOSITORY

Build intermediate containers

To ensure quick builds, we have created a number of intermediate containers that rarely change.

By default, the Makefile will try to create containers with a tag following this format: <repo>/$USER/<component>:<label>, where

  • <component> = opencv-base

  • <repo> = devcaptest.azurecr.io

    • <repo> can be overridden by setting REGISTRY=<desired repo>

  • $USER = the user executing Makefile (could be root if using sudo)

    • <repo>/$USER can be overridden by setting PREFIX=<desired container path>

  • <label> = the label defined in build/intermediate-containers.mk

.NET OpenCV containers

These containers allow the ONVIF broker to be created without rebuilding OpenCV for .NET each time. There is a container built for AMD64, and it is used to cross-build to each supported platform. The Dockerfile can be found at build/containers/intermediate/Dockerfile.opencvsharp-build.

# To make all of the OpenCV base containers:
make opencv-base PUSH=1 PREFIX=$CONTAINER_REPOSITORY
# To make specific platform(s):
make opencv-base PUSH=1 PREFIX=$CONTAINER_REPOSITORY PLATFORMS="amd64 arm64 arm/v7"

Build and push Akri component containers

By default, the Makefile will try to create containers with a tag following this format: <repo>/$USER/<component>:<label>, where

  • <component> = controller | agent | etc

  • <repo> = devcaptest.azurecr.io

    • <repo> can be overridden by setting REGISTRY=<desired repo>

  • $USER = the user executing Makefile (could be root if using sudo)

    • <repo>/$USER can be overridden by setting PREFIX=<desired container path>

  • <label> = v$(cat version.txt)

    • <label> can be overridden by setting LABEL_PREFIX=<desired label>

# To make all Akri containers:
make akri PREFIX=$CONTAINER_REPOSITORY PUSH=1
# To make a specific component:
make akri-controller PREFIX=$CONTAINER_REPOSITORY PUSH=1
make akri-agent PREFIX=$CONTAINER_REPOSITORY PUSH=1
make akri-udev-discovery-handler PREFIX=$CONTAINER_REPOSITORY PUSH=1
make akri-debug-echo-discovery-handler PREFIX=$CONTAINER_REPOSITORY PUSH=1
# To make an Agent with embedded Discovery Handlers, turn on the `agent-full` feature along with the 
# feature for any Discovery Handlers that should be embedded.
make akri-agent-full PREFIX=$CONTAINER_REPOSITORY AGENT_FEATURES="onvif-feat opcua-feat udev-feat" PUSH=1

# To make a specific component on specific platform(s):
make akri-controller PREFIX=$CONTAINER_REPOSITORY PLATFORMS="amd64 arm64 arm/v7" PUSH=1

# To make a specific component on specific platform(s) with a specific label:
make akri-controller PREFIX=$CONTAINER_REPOSITORY LABEL_PREFIX=latest PLATFORMS="amd64 arm64 arm/v7" PUSH=1

More information about Akri build

For more detailed information about the Akri build infrastructure and other Makefile targets, review the Akri container building documentation.

Installing Akri with newly built containers

When installing Akri using Helm, you can set the imagePullSecrets, image.repository, and image.tag Helm values to point to your newly created containers. For example, to install Akri with custom Controller and Agent containers, run the following, specifying the image.tag version to reflect version.txt:

kubectl create secret docker-registry <your-secret-name> --docker-server=ghcr.io  --docker-username=<your-github-alias> --docker-password=<your-github-token>
helm repo add akri-helm-charts https://project-akri.github.io/akri/
helm install akri akri-helm-charts/akri-dev \
    $AKRI_HELM_CRICTL_CONFIGURATION \
    --set imagePullSecrets[0].name="<your-secret-name>" \
    --set agent.image.repository="ghcr.io/<your-github-alias>/agent" \
    --set agent.image.tag="v<akri-version>" \
    --set controller.image.repository="ghcr.io/<your-github-alias>/controller" \
    --set controller.image.tag="v<akri-version>"

More information about the Akri Helm charts can be found in the user guide.

Useful Helm Commands

Helm Package

If you make changes to anything in the helm folder, you will probably need to create a new Helm chart for Akri. This can be done with the helm package command. To create a chart using the current state of the Helm templates and CRDs, run (from one level above the Akri directory) helm package akri/deployment/helm/. You will see a tgz file called akri-<akri-version>.tgz at the location where you ran the command. Now, install Akri using that chart:

helm install akri akri-<akri-version>.tgz \
    $AKRI_HELM_CRICTL_CONFIGURATION \
    --set useLatestContainers=true
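
To confirm the release was created from your locally packaged chart, you can list the Helm releases and watch the Akri pods come up (a quick check, assuming kubectl points at the same cluster):

helm list
kubectl get pods -o wide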

Helm Template

When you install Akri using Helm, Helm creates the DaemonSet, Deployment, and Configuration yamls for you (using the values set in the install command) and applies them to the cluster. To inspect those yamls before installing Akri, you can use helm template. For example, you will see the image in the Agent DaemonSet set to image: "ghcr.io/<your-github-alias>/agent:v<akri-version>-amd64" if you run the following:

helm template akri deployment/helm/ \
  --set imagePullSecrets[0].name="<your-secret-name>" \
  --set agent.image.repository="ghcr.io/<your-github-alias>/agent" \
  --set agent.image.tag="v<akri-version>-amd64"

Helm Get Manifest

Run the following to inspect an already running Akri installation in order to see the currently applied yamls such as the Configuration CRD, Instance CRD, protocol Configurations, Agent DaemonSet, and Controller Deployment:

helm get manifest akri | less

Helm Upgrade

To modify an Akri installation to reflect a new state, you can use helm upgrade. See the Customizing an Akri Installation document for further explanation.
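
For example, a minimal sketch (assuming Akri was installed as the release akri from the akri-helm-charts repo added earlier) that points an existing installation at a newly built Agent container:

# --reuse-values keeps the values that were set at install time
helm upgrade akri akri-helm-charts/akri-dev \
    --reuse-values \
    --set agent.image.repository="ghcr.io/<your-github-alias>/agent" \
    --set agent.image.tag="v<akri-version>"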

Testing with Debug Echo Discovery Handler

In order to kickstart using and debugging Akri, a debug echo Discovery Handler has been created. See its documentation to start using it.
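
As an illustration, the following sketch enables debug echo at install time; the exact Helm value names (agent.allowDebugEcho, debugEcho.discovery.enabled, debugEcho.configuration.enabled) are assumptions here, so confirm them against the debug echo documentation:

# Value names below are assumptions -- verify them in the debug echo documentation
helm install akri akri-helm-charts/akri-dev \
    $AKRI_HELM_CRICTL_CONFIGURATION \
    --set agent.allowDebugEcho=true \
    --set debugEcho.discovery.enabled=true \
    --set debugEcho.configuration.enabled=true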

Discovery Handler and Broker Development

Akri was made to be easily extensible, as Discovery Handlers and brokers can be implemented in any language and deployed in their own Pods. Reference the Discovery Handler development and broker Pod development documents to get started, or, if you prefer to learn by example, reference the walkthrough on extending Akri.

Developing non-Rust components

This document focuses on developing Akri's Rust components; however, Akri has several non-Rust components. Reference their respective READMEs in Akri's source code for instructions on developing them:

  • Several sample brokers and applications for demo purposes

  • A certificate generator for testing and using Akri's OPC UA Discovery Handler

  • A Python script for running end-to-end integration tests

  • A Python script for testing Akri's Configuration validation webhook

Naming Guidelines

One of the two hard things in Computer Science is naming things. It is proposed that Akri adopt naming guidelines to make developers' lives easier by providing consistency and reducing naming complexity.

Akri existed before naming guidelines were documented and may not employ the guidelines summarized here. However, it is hoped that developers will, at least, consider these guidelines when extending Akri.

General Principles

  • Akri uses English

  • Akri is written principally in Rust, and Rust conventions are used

  • Types need not be included in names unless ambiguity would result

  • Shorter, simpler names are preferred

Akri Discovery Handlers

Various Discovery Handlers have been developed: debug_echo, onvif, opcua, udev

Guidance:

  • snake_case names

  • (widely understood) initializations|acronyms are preferred

Akri Samples Brokers

Various samples Brokers have been developed: onvif-video-broker, opcua-monitoring-broker, udev-video-broker

Guidance:

  • Broker names should reflect Discovery Handler (Protocol) names and be suffixed -broker

  • Use programming language-specific naming conventions when developing Brokers in non-Rust languages

NOTE Even though the initialization of ONVIF includes "Video", the specification is broader than video, and the broker name adds specificity by including the word "video" (onvif-video-broker) in order to effectively describe its functionality.

Kubernetes Resources

Various Kubernetes Resources have been developed:

  • CRDs: Configurations, Instances

  • Instances: akri-agent-daemonset, akri-controller-deployment, akri-onvif, akri-opcua, akri-udev

Guidance:

  • Kubernetes Convention is that resources (e.g. DaemonSet) and CRDs use (upper) CamelCase

  • Akri Convention is that Akri Kubernetes resources be prefixed akri-, e.g. akri-agent-daemonset

  • Names combining words should use hyphens (-) to separate the words e.g. akri-debug-echo

NOTE akri-agent-daemonset contradicts the general principle of not including types; if it had been named after these guidelines were drafted, it would be named akri-agent.

Kubernetes' resources are strongly typed and the typing is evident through the CLI e.g. kubectl get daemonsets/akri-agent-daemonset and through a resource's Kind (e.g. DaemonSet). Including such types in the name is redundant.
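
For illustration (assuming an installed Akri Agent DaemonSet named akri-agent-daemonset, as above), the type is already explicit in both the CLI and the resource's Kind field:

# Prints: DaemonSet
kubectl get daemonset akri-agent-daemonset -o jsonpath='{.kind}'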
