Use `kubectl create secret docker-registry cr-pull-secret --docker-server=<cr> --docker-username=<cr-user> --docker-password=<cr-token>` and access it with an `imagePullSecret`. Here, we will assume the secret is named `cr-pull-secret` (Kubernetes secret names must be lowercase).
A Discovery Handler is anything that implements the `DiscoveryHandler` service and `Registration` client defined in Akri's discovery gRPC proto file. These Discovery Handlers run as their own Pods and are expected to register with the Agent, which hosts the `Registration` service defined in the gRPC interface.

Use `cargo generate` to clone the Discovery Handler template: install `cargo-generate`, then use the tool to pull down Akri's template, specifying the name of the project with the `--name` parameter.

Inside the newly created `akri-http-discovery-handler` project, navigate to `main.rs`.
It contains all the logic to register our `DiscoveryHandler` with the Akri Agent. We only need to specify the `DiscoveryHandler` name and whether the devices discovered by our `DiscoveryHandler` can be shared. Set `name` equal to `"http"` and `shared` to `true`, as our HTTP Discovery Handler will discover devices that can be shared between nodes. The protocol name also resolves to the name of the socket the Discovery Handler will run on.
Next, decide what information is passed in the `discovery_details` string. The Agent passes this string to Discovery Handlers as part of a `DiscoverRequest`. A Discovery Handler must then parse this string -- Akri's built-in Discovery Handlers store an expected structure in it as serialized YAML -- to determine what to discover, what to filter out of discovery, and so on. In our case, no parsing is required: the string will simply contain our discovery endpoint. Our implementation will ping the discovery service at that URL to see if there are any devices.
Now add the discovery logic. A `DiscoveryHandlerImpl` struct has been created (in `discovery_handler.rs`) that minimally implements the `DiscoveryHandler` service. Let's fill in the `discover` function, which returns the list of discovered devices. It should contain all the functionality needed to discover devices via your protocol and to filter for only the desired set. For the HTTP protocol, `discover` will perform an HTTP GET on the Discovery Handler's discovery service URL received in the `DiscoverRequest`. Add the required dependencies to `Cargo.toml` under `[dependencies]`, and add the necessary imports to `discovery_handler.rs`.
Then, update the `discover` function to perform this discovery. Note that `discover` creates a streamed connection with the Agent, where the Agent holds the receiving end of the channel and the Discovery Handler sends device updates via the sending end. If the Agent drops its end, the Discovery Handler will stop discovery and attempt to re-register with the Agent. The Agent may drop its end due to an error or a deleted Configuration.

To have something to discover, create a simple HTTP device application (`samples/apps/http-apps/cmd/device/main.go`).
The application will accept a list of `path` arguments, which define endpoints that the service will respond to; these endpoints represent devices to our HTTP Discovery Handler. It will also accept a set of `device` arguments, which define the set of discovered devices. Create a Go module file at `samples/apps/http-apps/go.mod` and a Dockerfile at `samples/apps/http-apps/Dockerfiles/device` for the application, then build and push the image with `docker build` and `docker push`.
Create `samples/apps/http-apps/kubernetes/device.yaml` (update the image based on `${IMAGE}`), then apply `device.yaml` to create a deployment (called `device`) and a pod (called `device-...`).

Optional: check one of the services:

```sh
kubectl run curl -it --rm --image=curlimages/curl -- sh
```

Then, pick a value for `X` between 1 and 9:

```sh
X=6
curl device-${X}:8080
```

Any or all of these should return a (random) 'sensor' value.
Deploy the discovery service (called `discovery`) using the deployment.

Optional: check the service to confirm that it reports a list of devices correctly:

```sh
kubectl run curl -it --rm --image=curlimages/curl -- sh
```

Then, curl the service's endpoint:

```sh
curl discovery:8080/discovery
```

This should return a list of 9 devices, of the form `http://device-X:8080`.
A custom Discovery Handler is deployed as a DaemonSet by setting `custom.discovery.enabled=true`. Specify the container for that DaemonSet as the HTTP Discovery Handler that you built above by setting `custom.discovery.image.repository=$DH_IMAGE` and `custom.discovery.image.tag=$TAGS`. To automatically deploy a custom Configuration, set `custom.configuration.enabled=true`. We will customize this Configuration to contain the discovery endpoint needed by our HTTP Discovery Handler by setting it in the `discovery_details` string of the Configuration, like so: `custom.configuration.discoveryDetails=http://discovery:9999/discovery`. We also need to set the name the Discovery Handler will register under (`custom.configuration.discoveryHandlerName`) and a name for the Discovery Handler and Configuration (`custom.discovery.name` and `custom.configuration.name`). All these settings come together in the following Akri installation command.

Note: See the cluster setup steps for information on how to set the crictl configuration variable `AKRI_HELM_CRICTL_CONFIGURATION`.
The broker will query a device by `curl`ing its endpoints. This type of solution would be applicable in batch-like scenarios. Create the broker project under `samples/brokers`, and add `"samples/brokers/http"` to the members in `./Cargo.toml`.
The `Device` properties map will be transferred into the broker container's environment variables, so retrieving them is simply a matter of querying environment variables. In `samples/brokers/http/src/main.rs`, we retrieve the HTTP-based device URL from the environment variables, make a simple GET request to retrieve the device data, and output the response to the log. Add the broker's dependencies to `samples/brokers/http/Cargo.toml`.
Create a Dockerfile for the broker at `samples/brokers/http/Dockerfiles/standalone`. Because `.dockerignore` is configured so that docker will ignore most files in our repository, some exceptions will need to be added to build the HTTP broker.

Note: substitute `helm install` for `helm upgrade` if you do not have an existing Akri installation.