# KubeStellar Quickstart Setup
This Quick Start is based on Scenario 1 of our examples. In a nutshell, you will:
- Before you begin, prepare your system (get the prerequisites)
- Create the KubeStellar core components on a cluster
- Commission a workload to a workload execution cluster (WEC)
## Before You Begin
### KubeStellar prerequisites
The following prerequisites are required. You can use the `check_pre_req` script to validate whether all needed pre-requisites are installed.
#### Infrastructure (clusters)
Because of its multicluster architecture, KubeStellar requires that you have the necessary privileges and infrastructure access to create and/or configure the necessary clusters. To create and administer the few small Kubernetes clusters that the current examples require, you can use:
- kind OR
- k3s OR
- OpenShift
#### Software Prerequisites for Using KubeStellar
- kubeflex version 0.6.1 or higher. To install kubeflex, go to https://github.com/kubestellar/kubeflex/blob/main/docs/users.md#installation. To upgrade from an existing installation, follow these instructions. At the end of the install, make sure that the kubeflex CLI, kflex, is in your path.
- OCM CLI (clusteradm). To install the OCM CLI use:

    ```shell
    curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
    ```

    Note that the default installation of clusteradm is into /usr/local/bin, which requires root access. If you prefer to avoid root, you can specify an alternative installation path using the INSTALL_DIR environment variable, as follows:

    ```shell
    mkdir -p ocm
    export INSTALL_DIR="$PWD/ocm"
    curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
    export PATH=$PWD/ocm:$PATH
    ```

    At the end of the install, make sure that the OCM CLI, clusteradm, is in your path.
- helm - to deploy the KubeStellar and kubeflex charts
- kubectl - to access the Kubernetes clusters
- docker (or a compatible container engine that works with kind)
#### Automated Check of Pre-Requisites for KubeStellar
The `check_pre_req` script offers a convenient way to check for the pre-requisites needed for KubeStellar deployment and use case scenarios.

The script checks for each pre-requisite's presence in the path using the `which` command, and it can optionally provide version and path information for pre-requisites that are present, or installation information for missing pre-requisites.

We envision that this script could be useful for user-side debugging as well as for asserting the presence of pre-requisites in higher-level automation scripts.

The script accepts a list of optional flags and arguments.
##### Supported flags
- `-A|--assert`: exits with error code 2 upon finding the first missing pre-requisite
- `-L|--list`: prints a list of supported pre-requisites
- `-V|--verbose`: displays version and path information for installed pre-requisites, or installation information for missing pre-requisites
- `-X`: enables `set -x` for debugging the script
##### Supported arguments
The script accepts a list of specific pre-requisites to check, chosen from the list of supported ones (which the `-L` flag prints).
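For instance, a hypothetical invocation that checks only two specific pre-requisites (the argument names below are illustrative; use the `-L` flag to see the names the script actually accepts):

```shell
# Check only helm and kubectl; names are illustrative, see -L for the real list
hack/check_pre_req.sh helm kubectl
```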
##### Examples
For example, the list of pre-requisites required by KubeStellar can be checked with the command below (add the `-V` flag to get the version of each program and suggestions on how to install missing pre-requisites):
```shell
$ hack/check_pre_req.sh
Checking pre-requisites for using KubeStellar:
✔ Docker
✔ kubectl
✔ KubeFlex
✔ OCM CLI
✔ Helm
Checking additional pre-requisites for running the examples:
✔ Kind
X ArgoCD CLI
Checking pre-requisites for building KubeStellar:
✔ GNU Make
✔ Go
✔ KO
```
## Create the KubeStellar Core components
Use our Helm chart to set up the KubeStellar core and establish its initial state:
### Set the Version appropriately as an environment variable
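A minimal sketch of this step, assuming the placeholder `0.28.0` is replaced with the latest KubeStellar release (check the KubeStellar releases page for the current version number):

```shell
# Hypothetical release number; substitute the latest KubeStellar release
export kubestellar_version=0.28.0
```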
### Use the Helm chart to deploy the KubeStellar Core to a Kind, K3s, or OpenShift cluster
Pick the cluster configuration that applies to your system:

- **Kind**: For convenience, a new local Kind cluster that satisfies the requirements for KubeStellar setup, and that can be used to commission the quickstart workload, can be created first. After the cluster is created, deploy the KubeStellar Core installation on it with the helm chart command (see the sketch after this list).
- **k3s**: A new local k3s cluster that satisfies the requirements for KubeStellar setup, and that can be used to commission the quickstart workload, can be created first. After the cluster is created, deploy the KubeStellar Core installation on it with the helm chart command.
- **OpenShift**: When using this option, you must explicitly set the `isOpenShift` variable to `true` by including `--set "kubeflex-operator.isOpenShift=true"` in the Helm chart installation command. Then deploy the KubeStellar Core installation with the helm chart command.
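The exact commands vary by platform; the following is a minimal sketch for the Kind option, assuming the core chart is published at `oci://ghcr.io/kubestellar/kubestellar/core-chart`, that the `kubestellar_version` variable was set above, and that the cluster name `kubeflex` is acceptable (all of these are assumptions, not verbatim from this page):

```shell
# Create a local Kind cluster to host the KubeStellar core (name is illustrative)
kind create cluster --name kubeflex

# Deploy the KubeStellar core chart, asking for one inventory-and-transport
# space (its1) and one workload definition space (wds1); the chart location
# and value names are assumptions based on the KubeStellar core chart
helm upgrade --install ks-core oci://ghcr.io/kubestellar/kubestellar/core-chart \
  --version "$kubestellar_version" \
  --set-json ITSes='[{"name":"its1"}]' \
  --set-json WDSes='[{"name":"wds1"}]'
# For OpenShift, append: --set "kubeflex-operator.isOpenShift=true"
```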
Once you have done this, you should have the KubeStellar core components, plus the required workload definition space (WDS) and inventory and transport space (ITS) control planes, running on your cluster.
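One way to sanity-check the result, assuming your current kubeconfig context points at the hosting cluster and that KubeFlex exposes its control planes as cluster-scoped `ControlPlane` objects:

```shell
# Expect to see the its1 and wds1 control planes listed
kubectl get controlplanes
```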
## Define, Bind and Commission a workload on a WEC
### Set up and define the workload execution cluster(s)
#### Create and Register WECs for the "Common Setup" used in Examples
The following steps show how to create two new `kind` clusters and register them with the hub as described in the official open cluster management docs.

Note that `kind` does not support three or more concurrent clusters unless you raise some limits as described in this `kind` "known issue": Pod errors due to "too many open files".
1. Execute the following commands to create two kind clusters, named `cluster1` and `cluster2`, and register them with the OCM hub. These clusters will serve as workload clusters. If you have previously executed these commands, you might already have contexts named `cluster1` and `cluster2`; if so, you can remove them using the commands `kubectl config delete-context cluster1` and `kubectl config delete-context cluster2`.

    ```shell
    # set flags to "" if you have installed KubeStellar on an OpenShift cluster
    flags="--force-internal-endpoint-lookup"
    clusters=(cluster1 cluster2);
    for cluster in "${clusters[@]}"; do
      kind create cluster --name ${cluster}
      kubectl config rename-context kind-${cluster} ${cluster}
      clusteradm --context its1 get token | grep '^clusteradm join' | sed "s/<cluster_name>/${cluster}/" | awk '{print $0 " --context '${cluster}' --singleton '${flags}'"}' | sh
    done
    ```
    The `clusteradm` command grabs a token from the hub (the `its1` context) and constructs the command that registers the new cluster as a managed cluster on the OCM hub.

2. Repeatedly issue the command:
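    A plausible form of this command, assuming the ITS is reachable via the `its1` kubeconfig context:

    ```shell
    # List certificate signing requests on the hub; one should appear per cluster
    kubectl --context its1 get csr
    ```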
    until you see that the certificate signing requests (CSRs) for both cluster1 and cluster2 exist. Note that the CSRs' condition is supposed to be `Pending` until you approve them in the next step.
3. Once the CSRs are created, approve them to complete the cluster registration with the command:
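    A hedged sketch, assuming the OCM `clusteradm accept` subcommand is used against the `its1` context, as in the upstream registration flow:

    ```shell
    # Approve the pending CSRs for both workload clusters
    clusteradm --context its1 accept --clusters cluster1,cluster2
    ```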
4. Check that the new clusters are in the OCM inventory and label them:
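    A minimal sketch, assuming the inventory is exposed as `ManagedCluster` objects in the `its1` context and that later steps expect the `location-group=edge` label:

    ```shell
    # Verify both clusters joined, then label them for the examples
    kubectl --context its1 get managedclusters
    kubectl --context its1 label managedcluster cluster1 location-group=edge name=cluster1
    kubectl --context its1 label managedcluster cluster2 location-group=edge name=cluster2
    ```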
### Bind and Commission the workload
Check for available clusters with the label `location-group=edge`:
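A hedged sketch of that check, assuming the `ManagedCluster` inventory objects live in the `its1` context and carry the label applied above:

```shell
# Expect cluster1 and cluster2 in the output
kubectl --context its1 get managedclusters -l location-group=edge
```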
Create a BindingPolicy to deliver an app to all clusters in wds1:
```shell
kubectl --context wds1 apply -f - <<EOF
apiVersion: control.kubestellar.io/v1alpha1
kind: BindingPolicy
metadata:
  name: nginx-bpolicy
spec:
  clusterSelectors:
  - matchLabels: {"location-group":"edge"}
  downsync:
  - objectSelectors:
    - matchLabels: {"app.kubernetes.io/name":"nginx"}
EOF
```
This BindingPolicy configuration determines where to deploy the workload by using the label selector expressions found in `clusterSelectors`. It also specifies what to deploy through the `objectSelectors` expressions in `downsync`. Each matchLabels expression is a criterion for selecting a set of objects based on their labels. Other criteria can be added to filter objects based on their namespace, API group, resource, and name; if these criteria are not specified, all objects with the matching labels are selected. If an object has multiple labels, it is selected only if it matches all the labels in the matchLabels expression. If there are multiple objectSelectors, an object is selected if it matches any of them.
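As an illustration of that OR semantics, a hypothetical variant with two objectSelectors (the policy name and second label are made up for this example) would select any object carrying either label:

```shell
# Hypothetical: an object matching EITHER matchLabels expression is downsynced
kubectl --context wds1 apply -f - <<EOF
apiVersion: control.kubestellar.io/v1alpha1
kind: BindingPolicy
metadata:
  name: nginx-bpolicy-multi
spec:
  clusterSelectors:
  - matchLabels: {"location-group":"edge"}
  downsync:
  - objectSelectors:
    - matchLabels: {"app.kubernetes.io/name":"nginx"}
    - matchLabels: {"app.kubernetes.io/part-of":"quickstart-demo"}
EOF
```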
Now deploy the app:
```shell
kubectl --context wds1 apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/name: nginx
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
  labels:
    app.kubernetes.io/name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: public.ecr.aws/nginx/nginx:latest
        ports:
        - containerPort: 80
EOF
```
Verify that manifestworks wrapping the objects have been created in the mailbox namespaces:
```shell
kubectl --context its1 get manifestworks -n cluster1
kubectl --context its1 get manifestworks -n cluster2
```
Verify that the deployment has been created in both clusters:

```shell
kubectl --context cluster1 get deployments -n nginx
kubectl --context cluster2 get deployments -n nginx
```
Please note, in line with Kubernetes’ best practices, the order in which you apply a BindingPolicy and the objects doesn’t affect the outcome. You can apply the BindingPolicy first followed by the objects, or vice versa. The result remains consistent because the binding controller identifies any changes in either the BindingPolicy or the objects, triggering the start of the reconciliation loop.
### [Optional] Teardown Scenario 1