Provision a Tanzu Kubernetes Cluster
Create a namespace
You have to do this from the Workload Management section in vCenter. Make sure to set the storage class to kubernetes-storage-policy.
Connect to the Supervisor Cluster as a vCenter Single Sign-On User
kubectl vsphere login \
--server https://192.168.24.66 \
--vsphere-username administrator@vsphere.local \
--insecure-skip-tls-verify
Login with tanzu cli
tanzu login
? Select login type Local kubeconfig
? Enter path to kubeconfig (if any) /Users/malston/.kube/config
? Enter kube context to use 192.168.24.66
? Give the server a name supervisor
✔ successfully logged in to management cluster using the kubeconfig supervisor
Check which context you are currently in (the current one is marked with *)
kubectl config get-contexts
Switch to the namespace we created
kubectl config use-context development
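Before creating the cluster, it can help to confirm that the namespace exists and that the kubernetes-storage-policy storage policy was published into it. A quick check, assuming the namespace created above is named development:
kubectl get namespace development
kubectl get storageclass
kubectl describe namespace development   # the assigned storage policy shows up as a resource quota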
Create the cluster
See https://docs-staging.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.3/vmware-tanzu-kubernetes-grid-13/GUID-tanzu-k8s-clusters-vsphere.html
tanzu cluster create dev-cluster -d > ./tanzu/dev-cluster.yaml
tanzu cluster create dev-cluster --tkr=v1.20.7---vmware.1-tkg.1.7fb9067 -v9 --log-file dev-cluster-create.log
You are trying to create a cluster with kubernetes version '1.20.7+vmware.1-tkg.1.7fb9067' on vSphere with Tanzu, Please make sure virtual machine image for the same is available in the cluster content library.
Do you want to continue? [y/N]: y
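For reference, the dry-run above writes out a TanzuKubernetesCluster manifest. Hand-authoring an equivalent one would look roughly like this; the VM class is an assumption for this homelab and the node counts simply mirror the cluster listed below:
cat <<'EOF' > ./tanzu/dev-cluster.yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster
  namespace: development
spec:
  distribution:
    version: v1.20.7                      # resolved from the --tkr value above
  topology:
    controlPlane:
      count: 1
      class: best-effort-small            # assumed VM class
      storageClass: kubernetes-storage-policy
    workers:
      count: 7
      class: best-effort-small            # assumed VM class
      storageClass: kubernetes-storage-policy
EOF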
List the clusters
± ma |master U:10 ?:3 ✗| → tanzu cluster list
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN
dev-cluster development running 1/1 7/7 1.20.7+vmware.1-tkg.1.7fb9067 <none>
Login to Dev Cluster
Using the tanzu cli
⎈ |192.168.24.66:development py-3.9.0 malston-a01 in ~/workspace/homelab
± ma |master U:10 ?:3 ✗| → tanzu cluster kubeconfig get dev-cluster --namespace development --admin
Credentials of workload cluster 'dev-cluster' have been saved
You can now access the cluster by running 'kubectl config use-context dev-cluster-admin@dev-cluster'
⎈ |192.168.24.66:development py-3.9.0 malston-a01 in ~/workspace/homelab
± ma |master U:10 ?:3 ✗| → kubectl config use-context dev-cluster-admin@dev-cluster
Switched to context "dev-cluster-admin@dev-cluster".
⎈ |dev-cluster-admin@dev-cluster:default py-3.9.0 malston-a01 in ~/workspace/homelab
± ma |master U:10 ?:3 ✗| → k get ns
NAME STATUS AGE
default Active 28m
kube-node-lease Active 28m
kube-public Active 28m
kube-system Active 28m
vmware-system-auth Active 27m
vmware-system-cloud-provider Active 27m
vmware-system-csi Active 27m
Using the vsphere plugin
kubectl vsphere login --server 192.168.24.66 \
--tanzu-kubernetes-cluster-name dev-cluster \
--tanzu-kubernetes-cluster-namespace development \
--vsphere-username administrator@vsphere.local \
--insecure-skip-tls-verify
Login to Supervisor Cluster
Using the vsphere plugin
KUBECTL_VSPHERE_PASSWORD=Cl0udFoundry! kubectl vsphere login --vsphere-username administrator@vsphere.local --server=https://192.168.24.66 --insecure-skip-tls-verify
kubectl config use-context tkgs-cluster-ns-01
Then verify the vSphere namespace configuration.
HARBOR_NAMESPACE=$(kubectl get ns | grep registry- | awk '{print $1}')
HARBOR_POD_ID=$(echo $HARBOR_NAMESPACE | sed 's/.*-//')
echo "Harbor Username: $(kubectl -n ${HARBOR_NAMESPACE} get secret "harbor-${HARBOR_POD_ID}-controller-registry" --template= | base64 --decode | base64 --decode)"
echo "Harbor Password: $(kubectl -n ${HARBOR_NAMESPACE} get secret "harbor-${HARBOR_POD_ID}-controller-registry" --template= | base64 --decode | base64 --decode)"
security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain /Users/malston/workspace/homelab/harbor.ca.crt
docker login https://192.168.24.68
sudo install ~/Downloads/docker-credential-vsphere /usr/local/bin/docker-credential-vsphere
docker-credential-vsphere login 192.168.24.68 -u administrator@vsphere.local
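Once the credential helper has logged you in, pushing into the embedded registry works against the project that matches your vSphere namespace. The image name and tag below are just placeholders:
docker tag nginx:latest 192.168.24.68/development/nginx:latest
docker push 192.168.24.68/development/nginx:latest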
Enable Embedded Harbor Registry
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-5B0373CA-5379-47AF-9284-ED725FC79D89.html
Shell into a node of your cluster
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-C86B9028-2701-40FE-BA05-519486E010F4.html
https://www.virtuallyghetto.com/2020/10/how-to-ssh-to-tanzu-kubernetes-grid-tkg-cluster-in-vsphere-with-tanzu.html
kubectl vsphere login --server=192.168.24.66 --insecure-skip-tls-verify
kubectl config use-context development
kubectl -n development get secrets
kubectl -n supervisor-ns1 get secrets homelab-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d
fHfiw3QMKYMyKFUJjEfsoAs3MAlgx2pEN9LtbXA5N4s=
kubectl config use-context dev-cluster-admin@dev-cluster
k get no -owide
ssh 192.168.12.6 -l vmware-system-user
7ogSYFhF0VCqA3v/Nm8dxU3h8xxJvlgozHOQ5FhhPAo=
vmware-system-user@dev-cluster-control-plane-mzprc [ /etc/kubernetes ]$ sudo su
root [ /etc/kubernetes ]# export KUBECONFIG=/etc/kubernetes/admin.conf
root [ /etc/kubernetes ]# alias k=kubectl
root [ /etc/kubernetes ]# k get no
NAME                                         STATUS   ROLES                  AGE   VERSION
dev-cluster-control-plane-mzprc              Ready    control-plane,master   15d   v1.20.2+vmware.1
dev-cluster-workers-j7tc7-8546bb84bf-8dn6w   Ready
Shell into the supervisor
Enable shell access to vCenter.
ssh vcenter.markalston.net -l administrator@vsphere.local
Cl0udFoundry!
root@vcenter [ ~ ]# /usr/lib/vmware-wcp/decryptK8Pwd.py
Read key from file
Connected to PSQL
Cluster: domain-c8:673078ed-b60b-488f-9bf7-1e165ee78956
IP: 192.168.20.20
PWD: XlEofSww1qK3vnZ9A8/QXO3T3QwO21RO9NaIw/XNJcuOsvqTM3jJWLATPXTGB5r3xY0MTzAQa6MM3+CoJ3I5uYGUEyHiRmIH1BGIfOyu9oJs5zxYCYvbGu1ZIqjZ3S8PE8+WkF13QNdAQ+4m3J47T/DMhaO91rWmNQwuoYstGc4=
------------------------------------------------------------
SSH login using decrypted password above:
ssh 192.168.20.21 -l root
zc3YHKuWzgBztgXxXlDpmfuZZwQOJ15/BhO74DxJT5UoASaf+bSsnNI4fvTeatAPMuRwaWfMg0WZyWqLTnLQ4XgSucgr1htrScq/vmBbUg7azX2nHLFx8sQuya+vmECUM+J1kTgISfMJ60CnXPGWBwIOqAlEZL+vtCRnP8+b3mY=
kubectl config set-context kubernetes-admin@kubernetes --namespace=kube-system
k get svc
root@4208bf8cf21d02c022bc90e0689557cd [ ~ ]# k describe svc kube-apiserver-lb-svc
Name: kube-apiserver-lb-svc
Namespace: kube-system
Labels: service.route.lbapi.run.tanzu.vmware.com/gateway-name=kube-apiserver-lb-svc
service.route.lbapi.run.tanzu.vmware.com/gateway-namespace=kube-system
service.route.lbapi.run.tanzu.vmware.com/type=direct
Annotations: lbapi.run.tanzu.vmware.com/ip-address: 192.168.24.66
Selector: component=kube-apiserver
Type: LoadBalancer
IP Families: <none>
IP: 10.96.0.153
IPs: 10.96.0.153
LoadBalancer Ingress: 192.168.24.66
Port: nginx 443/TCP
TargetPort: 443/TCP
NodePort: nginx 30492/TCP
Endpoints: 192.168.12.3:443,192.168.12.4:443,192.168.12.5:443
Port: kube-apiserver 6443/TCP
TargetPort: 6443/TCP
NodePort: kube-apiserver 32694/TCP
Endpoints: 192.168.12.3:6443,192.168.12.4:6443,192.168.12.5:6443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Tail kube-apiserver logs
k logs -l component=kube-apiserver -f
Get logs
kubectl -n development get secrets dev-cluster-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d
7ogSYFhF0VCqA3v/Nm8dxU3h8xxJvlgozHOQ5FhhPAo=
ssh 192.168.12.25 -l vmware-system-user
7ogSYFhF0VCqA3v/Nm8dxU3h8xxJvlgozHOQ5FhhPAo=
export KUBECONFIG=/etc/kubernetes/admin.conf
root@4208bf8cf21d02c022bc90e0689557cd [ /var/log/pods/kube-system_wcp-authproxy-4208bf8cf21d02c022bc90e0689557cd_11a4a709f82285e13ceb229adbde2c37/wcp-authproxy ]# tail *.log -f
SSH
Follow these instructions to create a jumpbox pod in the supervisor cluster, or SSH in directly using these instructions.
CLUSTER=dev-cluster && kubectl get secrets $CLUSTER-ssh-password -o jsonpath={.data.'ssh-passwordkey'} --kubeconfig=/Users/malston/.kube/supervisor -n development | base64 --decode
ssh vmware-system-user@192.168.12.28
7ogSYFhF0VCqA3v/Nm8dxU3h8xxJvlgozHOQ5FhhPAo=
export NAMESPACE=development
export CLUSTER=dev-cluster
kubectl config use-context $NAMESPACE
kubectl get secrets $CLUSTER-ssh -o jsonpath={.data.'ssh-privatekey'} | base64 -d > ~/.ssh/$CLUSTER-ssh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: jumpbox
  namespace: development
spec:
  containers:
  - image: "photon:3.0"
    name: jumpbox
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "yum install -y openssh-server; mkdir /root/.ssh; cp /root/ssh/ssh-privatekey /root/.ssh/id_rsa; chmod 600 /root/.ssh/id_rsa; while true; do sleep 30; done;" ]
    volumeMounts:
    - mountPath: "/root/ssh"
      name: ssh-key
      readOnly: true
  volumes:
  - name: ssh-key
    secret:
      secretName: dev-cluster-ssh
EOF
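Once the jumpbox pod is running, you can SSH to a node through it. The node IP below is the one used earlier; replace it with any address from kubectl get nodes -o wide:
NODE_IP=192.168.12.28   # placeholder: any control-plane or worker node address
kubectl -n development exec -it jumpbox -- /usr/bin/ssh vmware-system-user@$NODE_IP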
Security
Tanzu Kubernetes Grid Service provisions Tanzu Kubernetes clusters with the PodSecurityPolicy Admission Controller enabled, which means a pod security policy is required to deploy workloads.
Tanzu Kubernetes clusters include default PodSecurityPolicies that you can bind to for privileged and restricted workload deployment.
For default namespace:
kubectl create rolebinding rolebinding-default-privileged-sa-ns_default \
--namespace=default --clusterrole=psp:vmware-system-privileged \
--group=system:serviceaccounts
For cluster:
kubectl create clusterrolebinding clusterrolebinding-privileged-sa \
--clusterrole=psp:vmware-system-privileged \
--group=system:serviceaccounts
Then you can deploy the Kubernetes Guestbook application.
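A quick way to confirm the binding took effect before deploying the Guestbook is to create a throwaway deployment and check that its pods actually get created (without a PSP binding, the ReplicaSet fails with FailedCreate events and no pods appear):
kubectl create deployment psp-test --image=nginx
kubectl get pods -l app=psp-test
kubectl delete deployment psp-test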
Harbor
To use Harbor as an internal registry, you'll need to configure the Docker daemon on every worker to use an insecure registry or update the certs. See add-harbor-cert-to-docker.sh to update the certs and restart the macOS Docker daemon. You can use the same script to update the Kubernetes nodes, but you'll have to uncomment a few lines first. Alternatively, log in to every node and change the /etc/docker/daemon.json file as shown below:
{
  "insecure-registries": ["10.213.249.66"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "bridge": "none",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
where 10.213.249.66 is the IP address of Harbor.
Then run:
sudo systemctl restart docker
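A rough sketch for pushing the same daemon.json to every node in one pass, run from the workload cluster context, assuming sshpass is installed locally and NODE_PASS holds the vmware-system-user password retrieved from the cluster's ssh-password secret:
NODE_IPS=$(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
for ip in $NODE_IPS; do
  sshpass -p "$NODE_PASS" ssh -o StrictHostKeyChecking=no vmware-system-user@"$ip" 'sudo tee /etc/docker/daemon.json >/dev/null <<CFG
{
  "insecure-registries": ["10.213.249.66"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "bridge": "none",
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
CFG
sudo systemctl restart docker'
done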
Install Contour for Ingress
The following steps follow the procedure from the Tanzu Kubernetes Ingress Example to deploy the Contour ingress controller.
-
Disable PSPs
If you haven't already, you need to give the system:authenticated group the vmware-system-privileged cluster role to effectively disable PSPs:
kubectl create clusterrolebinding "psp:authenticated" --clusterrole=psp:vmware-system-privileged --group=system:authenticated
-
Deploy Contour with the changes specified in the example
kubectl apply -f k8s/contour/contour.yaml
or
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
NOTE: If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress, you can specify the annotation kubernetes.io/ingress.class: "contour" on all ingresses that you would like Contour to claim. You can customize the class name with the --ingress-class-name flag at runtime. If the kubernetes.io/ingress.class annotation is present with a value other than "contour", Contour will ignore that ingress.
NB: If you're not getting an EXTERNAL-IP address, check the kube-controller-manager logs; some indication of what's happening should appear in those logs. Contour doesn't provision load balancers, and Envoy doesn't care how the traffic gets to it as long as it happens, so you may have to check with the cloud provider to see how it's supposed to be configured.
In this deployment, Contour created the certs for communication over gRPC between Envoy and Contour using the contour-certgen job. To create the certs manually, follow these instructions.
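After the deployment settles, confirm the Contour pods are up and that the envoy service actually received an external address before moving on to the examples:
kubectl -n projectcontour get pods
kubectl -n projectcontour get service envoy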
Contour Examples
-
Deploy Ingress example
kubectl apply -f k8s/contour/ingress-test.yaml
-
Test Ingress example
export LOAD_BALANCER_IP=$(kubectl -n projectcontour get service envoy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$LOAD_BALANCER_IP:80/hello
{"message":"Hello"}
curl http://$LOAD_BALANCER_IP:80/nihao
{"message":"Hello"}
The VMware docs have you deploy an example that uses the standard Kubernetes Ingress object; however, Contour has expanded the functionality of the Ingress object with the HTTPProxy CRD. To read more about this, see their documentation here.
-
Deploy HTTPProxy example
kubectl apply -f k8s/contour/httpproxy-test.yaml
-
Test HTTPProxy example
export LOAD_BALANCER_IP=$(kubectl -n projectcontour get service envoy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -H "Host: hello.local" http://$LOAD_BALANCER_IP:80/hello
{"message":"Hello"}
curl -H "Host: hello.local" http://$LOAD_BALANCER_IP:80/nihao
{"message":"Hello"}
-
Deploy HTTPProxy example with TLS
-
Create a namespace for tls delegation
kubectl create ns www-admin
-
Install Cert Manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
-
Verify the Installation
kubectl get pods --namespace cert-manager
-
Create the demo namespace (www-admin was already created above)
kubectl create namespace demo
-
Deploy the app
kubectl apply -f k8s/contour/httpproxy-test-tls.yaml
-
Test
curl -k https://demo.tkg.markalston.net/hello
{"message":"Hello"}
curl -k https://demo.tkg.markalston.net/nihao
{"message":"Hello"}
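The k8s/contour/httpproxy-test-tls.yaml manifest isn't reproduced here. A minimal sketch of the certificate and TLS-delegation pieces it needs, assuming a self-signed issuer and that the HTTPProxy in the demo namespace references the secret as www-admin/demo-tls:
cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-tls
  namespace: www-admin
spec:
  secretName: demo-tls
  dnsNames:
    - demo.tkg.markalston.net
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
---
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: demo-tls-delegation
  namespace: www-admin
spec:
  delegations:
    - secretName: demo-tls
      targetNamespaces:
        - demo
EOF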
CI/CD
Install the Concourse Helm chart using the documentation here.
There are some helper scripts to assist in adding the Harbor root CA to your local Docker daemon as well as the one used by the nodes in a Kubernetes cluster.
- Deploy the Concourse Helm chart.
cd concourse-helm
./install.sh
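install.sh isn't shown here; under the hood it boils down to a Helm install along these lines (the release name and namespace are assumptions):
helm repo add concourse https://concourse-charts.storage.googleapis.com
helm repo update
helm install concourse concourse/concourse --namespace concourse --create-namespace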
Once Concourse is deployed, you can use this example to deploy the spring-boot-sample app to it.
I created a fork of it so I could add a service account for Concourse to deploy the app to a specific namespace.

The kubernetes Concourse resource is no longer maintained, but you could easily accomplish the same thing with a custom task.
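For example, a custom task that replaces the unmaintained resource could look roughly like this; the image, variable names, namespace, and manifest path are assumptions, and the token would come from the service account mentioned above:
cat <<'EOF' > deploy-task.yml
platform: linux
image_resource:
  type: registry-image
  source:
    repository: bitnami/kubectl        # assumed image with kubectl on the path
inputs:
  - name: source                       # repo containing the Kubernetes manifests
params:
  KUBE_API: ((kube_api_url))
  KUBE_TOKEN: ((kube_token))
  NAMESPACE: spring-boot-sample
run:
  path: sh
  args:
    - -c
    - |
      kubectl --server "$KUBE_API" --token "$KUBE_TOKEN" --insecure-skip-tls-verify \
        -n "$NAMESPACE" apply -f source/k8s/
EOF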
Update Insecure Registry
+YMeuOAsZVAsfu3JBzTrGGcpyppe/vmcUD4Uc8gOwoQ=
sudo vim /etc/docker/daemon.json
"insecure-registries": ["harbor.markalston.net"],
sudo systemctl daemon-reload; sudo systemctl restart docker
ssh 192.168.12.25 -l vmware-system-user
7ogSYFhF0VCqA3v/Nm8dxU3h8xxJvlgozHOQ5FhhPAo=
sudo vim /etc/containerd/config.toml
[plugins.cri.registry.configs."harbor.markalston.net".tls]
insecure_skip_verify = true
sudo systemctl daemon-reload; sudo systemctl restart containerd
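To confirm the node now pulls from Harbor without TLS errors, you can pull an image directly with crictl; the image path below is a placeholder for something already pushed to your registry:
sudo crictl pull harbor.markalston.net/library/nginx:latest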