Note: different tools are needed depending on what parts of Akri you are developing. This document aims to make that clear.
Akri uses Rust v1.61.0 as its default toolchain.

Note: To build a specific component, use the `-p` parameter along with the workspace member. For example, to only build the Agent, run `cargo build -p agent`.
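A minimal sketch of a build session (assuming `rustup` is installed):

```sh
# Install the toolchain version noted above
rustup toolchain install 1.61.0
# Build the entire workspace, or a single member with -p
cargo build
cargo build -p agent
```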
Note: To test a specific component, use the `-p` parameter along with the workspace member. For example, to only test the Agent, run `cargo test -p agent`.
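Testing follows the same pattern:

```sh
# Run all workspace tests, or scope them to the Agent with -p
cargo test
cargo test -p agent
```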
To run the Controller locally, set the `KUBECONFIG` environment variable to point to your cluster's configuration file (this can be done on the command line); for the sake of this walkthrough, the config is assumed to be in `$HOME/.kube/config`. Reference Akri's cluster setup instructions if needed. Build the Controller with `cargo build` and start it with `cargo run`, setting `METRICS_PORT` to a port such as `8081` on which to serve Akri's metrics (for Prometheus integration). `METRICS_PORT` can be set to any value, as it is only used if Prometheus is enabled; just ensure that the Controller and Agent use different ports if they are both running.
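A plausible invocation, given the environment described above (`RUST_LOG` is an illustrative addition for log output):

```sh
# Run the Controller against the cluster described by KUBECONFIG
KUBECONFIG=$HOME/.kube/config METRICS_PORT=8081 RUST_LOG=info cargo run -p controller
```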
To run the Agent locally, use a different metrics port, such as `8082`. The Agent must be run privileged in order to connect to the kubelet. Specify the user path to cargo (`$HOME/.cargo/bin/cargo`) so you do not have to re-install cargo for the sudo user.
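A sketch of such a privileged run; the node name and socket directory are illustrative, and the host paths are the vanilla Kubernetes values discussed in the note below:

```sh
# sudo -E preserves the environment; invoke the user's cargo as described above
sudo -E KUBECONFIG=$HOME/.kube/config \
    METRICS_PORT=8082 \
    AGENT_NODE_NAME=my-node \
    DISCOVERY_HANDLERS_DIRECTORY=$HOME/akri \
    HOST_CRICTL_PATH=/usr/bin/crictl \
    HOST_RUNTIME_ENDPOINT=/run/containerd/containerd.sock \
    HOST_IMAGE_ENDPOINT=/run/containerd/containerd.sock \
    $HOME/.cargo/bin/cargo run -p agent
```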
To embed Discovery Handlers in the Agent, enable the `agent-full` feature and the feature for each Discovery Handler you wish to embed (debug echo is always included if `agent-full` is turned on). For example, to run the Agent with the OPC UA, ONVIF, udev, and debug echo Discovery Handlers, add the following to the above command: `--features "agent-full udev-feat opcua-feat onvif-feat"`.
Note: The environment variables `HOST_CRICTL_PATH`, `HOST_RUNTIME_ENDPOINT`, and `HOST_IMAGE_ENDPOINT` are for slot reconciliation (making sure Pods that no longer exist are not still claiming Akri resources). Their values vary based on Kubernetes distribution; the values above are for vanilla Kubernetes. For MicroK8s, use `HOST_CRICTL_PATH=/usr/local/bin/crictl HOST_RUNTIME_ENDPOINT=/var/snap/microk8s/common/run/containerd.sock HOST_IMAGE_ENDPOINT=/var/snap/microk8s/common/run/containerd.sock`, and for K3s, use `HOST_CRICTL_PATH=/usr/local/bin/crictl HOST_RUNTIME_ENDPOINT=/run/k3s/containerd/containerd.sock HOST_IMAGE_ENDPOINT=/run/k3s/containerd/containerd.sock`.
Each Discovery Handler lives in its own directory within `akri/discovery-handler-modules/` and is run using `cargo run`, setting the directory in which the Discovery Handler socket should be created via the `DISCOVERY_HANDLERS_DIRECTORY` variable. Discovery Handlers must be run privileged in order to connect to the Agent. For example, the ONVIF Discovery Handler can be run locally as sketched below.
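This sketch assumes the module directory name, a socket directory of `$HOME/akri`, and an illustrative node name:

```sh
cd akri/discovery-handler-modules/onvif-discovery-handler
# Run privileged so the handler can register with the Agent
sudo -E RUST_LOG=info \
    DISCOVERY_HANDLERS_DIRECTORY=$HOME/akri \
    AGENT_NODE_NAME=my-node \
    $HOME/.cargo/bin/cargo run
```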
For the debug echo Discovery Handler, an additional environment variable, `DEBUG_ECHO_INSTANCES_SHARED`, must be set to specify whether it should register with the Agent as discovering shared or unshared devices. Run the debug echo Discovery Handler to discover mock unshared devices along the lines of the sketch below.
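Again assuming the module directory name and socket directory:

```sh
cd akri/discovery-handler-modules/debug-echo-discovery-handler
# DEBUG_ECHO_INSTANCES_SHARED=false registers the handler as discovering unshared devices
sudo -E RUST_LOG=info \
    DEBUG_ECHO_INSTANCES_SHARED=false \
    DISCOVERY_HANDLERS_DIRECTORY=$HOME/akri \
    AGENT_NODE_NAME=my-node \
    $HOME/.cargo/bin/cargo run
```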
A `Makefile` has been created to help with the more complicated task of building the Akri components and containers for the various supported platforms. Cross-platform builds rely on `qemu`, which can be installed as sketched below.
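A guess at the install command for Debian/Ubuntu systems (the exact package set is an assumption and varies by distribution):

```sh
# qemu-user-static provides the emulators used for cross-platform container builds
sudo apt-get install -y qemu qemu-user-static binfmt-support
```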
For `qemu` to be fully configured on Ubuntu 18.04, after running `apt-get install`, run these commands:

Akri's containers are hosted at `ghcr.io/project-akri/akri` using the new GitHub container registry. Any container repository can be used for private containers. If you want to enable GHCR, you can follow the getting started guide.
The `Makefile` will try to create containers with tags following this format: `<repo>/$USER/<component>:<label>`, where:

* `<component>` = rust-crossbuild | opencv-base
* `<repo>` = devcaptest.azurecr.io
  * `<repo>` can be overridden by setting `REGISTRY=<desired repo>`
* `$USER` = the user executing `Makefile` (could be `root` if using `sudo`)
  * `<repo>/$USER` can be overridden by setting `PREFIX=<desired container path>`
Akri uses the `cross` tool to cross-build the Akri Rust code. There is a container built for each supported platform, and each contains any dependencies required for the Akri components to build. The Dockerfiles can be found here: `build/containers/intermediate/Dockerfile.rust-crossbuild-*`
The `Makefile` will try to create containers with tags following this format: `<repo>/$USER/<component>:<label>`, where:

* `<component>` = controller | agent | etc.
* `<repo>` = devcaptest.azurecr.io
  * `<repo>` can be overridden by setting `REGISTRY=<desired repo>`
* `$USER` = the user executing `Makefile` (could be `root` if using `sudo`)
  * `<repo>/$USER` can be overridden by setting `PREFIX=<desired container path>`
* `<label>` = v$(cat version.txt)
  * `<label>` can be overridden by setting `LABEL_PREFIX=<desired label>`
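As an illustration of these overrides (the make target is a placeholder, since target names are not listed here):

```sh
# Hypothetical invocation: push images to your own registry under a custom label
PREFIX=ghcr.io/<your-github-alias> LABEL_PREFIX=dev make <component-target>
```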
If you build with `sudo`, this will conflict with the `cross` command. The following flow has helped.
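One such flow, offered as an assumption rather than taken from the original text: grant your user access to Docker so that builds no longer need `sudo`:

```sh
# Add your user to the docker group, then refresh group membership in this shell
sudo usermod -aG docker $USER
newgrp docker
```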
When installing Akri, set the `imagePullSecrets`, `image.repository`, and `image.tag` Helm values to point to your newly created containers. For example, to install Akri with custom Controller and Agent containers, run the following, specifying the `image.tag` version to reflect `version.txt`, as sketched below.
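A plausible invocation, assuming the `akri-helm-charts` repo has been added and the chart exposes per-component `agent.image` and `controller.image` values; the pull secret name is illustrative:

```sh
helm install akri akri-helm-charts/akri \
    --set imagePullSecrets[0].name="crPullSecret" \
    --set agent.image.repository="ghcr.io/<your-github-alias>/agent" \
    --set agent.image.tag="v<akri-version>-amd64" \
    --set controller.image.repository="ghcr.io/<your-github-alias>/controller" \
    --set controller.image.tag="v<akri-version>-amd64"
```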
Akri's Helm chart can also be packaged with the `helm package` command. To create a chart using the current state of the Helm templates and CRDs, run (from one level above the Akri directory) `helm package akri/deployment/helm/`. You will see a tgz file called `akri-<akri-version>.tgz` at the location where you ran the command. Now, install Akri using that chart, as sketched below.
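A plausible install from the packaged chart (image values assumed as in the previous example):

```sh
helm install akri akri-<akri-version>.tgz \
    --set agent.image.repository="ghcr.io/<your-github-alias>/agent" \
    --set agent.image.tag="v<akri-version>-amd64"
```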
To confirm that the values were set as intended, inspect the installation with `helm template`. For example, you will see the image in the Agent DaemonSet set to `image: "ghcr.io/<your-github-alias>/agent:v<akri-version>-amd64"` if you run something along the lines of the sketch below.
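A sketch that renders the packaged chart and filters for the image fields:

```sh
helm template akri akri-<akri-version>.tgz \
    --set agent.image.repository="ghcr.io/<your-github-alias>/agent" \
    --set agent.image.tag="v<akri-version>-amd64" | grep "image:"
```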
An already deployed Akri installation can be modified to use newly built containers with `helm upgrade`. See the Customizing an Akri Installation document for further explanation.
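For instance, an upgrade to a new Agent image might look like the following (values assumed as above):

```sh
helm upgrade akri akri-helm-charts/akri \
    --set agent.image.repository="ghcr.io/<your-github-alias>/agent" \
    --set agent.image.tag="v<akri-version>-amd64"
```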
Discovery Handlers: existing examples are `debug_echo`, `onvif`, `opcua`, and `udev`. Discovery Handlers use snake_case names.

Brokers: existing examples are `onvif-video-broker`, `opcua-monitoring-broker`, and `udev-video-broker`. Broker names carry a `-broker` suffix.

NOTE: Even though the initialism ONVIF includes "Video", the specification is broader than video, and the broker name adds specificity by including the word (`onvif-video-broker`) in order to effectively describe its functionality.
Kubernetes resources: existing examples are the CRDs `Configurations` and `Instances`, and the resources `akri-agent-daemonset`, `akri-controller-deployment`, `akri-onvif`, `akri-opcua`, and `akri-udev`.

* Kubernetes resource types (e.g. `DaemonSet`) and CRDs use (upper) CamelCase.
* Akri's Kubernetes resources are prefixed with `akri-`, e.g. `akri-agent-daemonset`.
* Multi-word names use hyphens (`-`) to separate the words, e.g. `akri-debug-echo`.

NOTE: `akri-agent-daemonset` contradicts the general principle of not including types in names; if it had been named after these guidelines were drafted, it would be named `akri-agent`. Kubernetes' resources are strongly typed, and the typing is evident through the CLI (e.g. `kubectl get daemonsets/akri-agent-daemonset`) and through a resource's `Kind` (e.g. `DaemonSet`). Including such types in the name is redundant.