Everything I did to get my cluster up to date

First, I updated my Supervisor cluster config to trust the Harbor CA, until I create a trusted cert for my homelab domain.

cat <<EOF | kubectl apply -f -
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  defaultCNI: antrea
  trust:
    additionalTrustedCAs:
      - name: harbor-ca
        data: $(base64 ./ca.crt) # must be a single line (macOS default; GNU base64 needs -w 0)
EOF
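One gotcha with the inline `$(base64 ./ca.crt)`: the data field has to be a single base64 line or the YAML scalar breaks. A throwaway sanity check (assuming GNU coreutils base64; macOS base64 doesn't wrap by default):

```shell
# Encode a dummy file without wrapping and confirm it round-trips.
printf 'dummy-ca-data' > /tmp/ca-check
encoded=$(base64 -w 0 /tmp/ca-check)   # -w 0 = one line, no 76-char wrapping
printf '%s' "$encoded" | base64 -d     # prints: dummy-ca-data
```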
kubectl apply -f tanzu/dev-cluster.yaml
kubectl vsphere login --server 192.168.24.66 --insecure-skip-tls-verify --vsphere-username "administrator@vsphere.local" --tanzu-kubernetes-cluster-name dev-cluster --tanzu-kubernetes-cluster-namespace development
kubectl config use-context dev-cluster
kubectl create rolebinding rolebinding-default-privileged-sa-ns_default \
    --namespace=default --clusterrole=psp:vmware-system-privileged \
    --group=system:serviceaccounts
kubectl create clusterrolebinding clusterrolebinding-privileged-sa \
    --clusterrole=psp:vmware-system-privileged \
    --group=system:serviceaccounts
kubectl create rolebinding psp:serviceaccounts --clusterrole=psp:vmware-system-restricted --group=system:serviceaccounts

Test trusted registry

docker pull nginx
docker tag nginx:latest harbor.markalston.net/tanzu/nginx:v1
docker push harbor.markalston.net/tanzu/nginx:v1
kubectl create deployment --image harbor.markalston.net/tanzu/nginx:v1 tanzu-nginx
kubectl describe po tanzu-nginx-6b5d68f8c5-4hlkz
kubectl delete deployment tanzu-nginx

Install contour

wget -O k8s/contour/contour.yaml https://projectcontour.io/quickstart/contour.yaml
kubectl apply -f k8s/contour/contour.yaml
kubectl apply -f k8s/contour/httpproxy-test.yaml
curl -H "Host: hello.local" http://httpbin.tkg.markalston.net:80/nihao
curl -H "Host: hello.local" http://httpbin.tkg.markalston.net:80/hello

Test DNS

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup google.com

Deploy jetstack/cert-manager

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
kubectl get pods --namespace cert-manager
kubectl create namespace demo
kubectl create namespace www-admin
kubectl apply -f k8s/contour/httpproxy-test-tls.yaml
curl -k https://demo.tkg.markalston.net/hello
{"message":"Hello"}
curl -k https://demo.tkg.markalston.net/nihao
{"message":"Hello"}
kubectl delete namespace www-admin
kubectl delete namespace demo

Certs

Let’s Encrypt and the ACME (Automatic Certificate Management Environment) protocol let you set up an HTTPS server and automatically obtain a browser-trusted certificate. To get a certificate for your website’s domain from Let’s Encrypt, you have to demonstrate control over the domain by completing certain challenges. A challenge is one of a list of specified tasks that only someone who controls the domain can accomplish.

Currently there are two types of challenges:

  • HTTP-01 challenge: HTTP-01 challenges are completed by posting a specified file at a specified location on the website. The Let’s Encrypt CA verifies the file by making an HTTP request to that URI to satisfy the challenge.

  • DNS-01 challenge: DNS-01 challenges are completed by publishing a computed key in a DNS TXT record. Once the TXT record has propagated, the ACME server retrieves the key via a DNS lookup and validates that the client controls the domain for the requested certificate. With the correct permissions, cert-manager automatically presents this TXT record for your specified DNS provider.

On successful validation of the challenge, a certificate is granted for the domain.
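For reference, the TXT record value in a DNS-01 challenge is not the raw token: per RFC 8555 §8.4 it is the base64url-encoded SHA-256 digest of the key authorization (token plus account-key thumbprint). A sketch with made-up token and thumbprint values:

```shell
# token and thumbprint below are example values, not from a real ACME order.
token="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
thumbprint="lUtvoRLv_J7BsPCS0ZL0TBqdx9snVOjxBAFrFQzSJUI"
# TXT value = base64url(SHA-256(token "." thumbprint)), padding stripped
printf '%s.%s' "$token" "$thumbprint" \
  | openssl dgst -sha256 -binary \
  | openssl base64 -A | tr '+/' '-_' | tr -d '='
```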

Set up an IAM Role

cert-manager needs to be able to add records to Route53 in order to solve the DNS-01 challenge. To enable this, create an IAM policy with the following permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:GetChange",
      "Resource": "arn:aws:route53:::change/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ListHostedZonesByName",
      "Resource": "*"
    }
  ]
}

Credentials

You have two options for the setup: either create a user or a role, and attach the policy above. Using a role is considered best practice because you do not have to store permanent credentials in a secret.

cert-manager supports two ways of specifying credentials:

  • explicit, by providing an accessKeyID and a secretAccessKey
  • implicit, using the metadata service, environment variables, or a credentials file

cert-manager also supports specifying a role to enable cross-account access and/or limit the access of cert-manager. Integration with kiam and kube2iam should work out of the box.
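If I ever switch to implicit credentials (e.g. an instance role via kiam/kube2iam), the Route53 solver would shrink to something like this sketch — region and hostedZoneID are the same values used in the issuers; the access-key fields are simply dropped:

```yaml
# Hypothetical solver fragment with implicit credentials: cert-manager
# picks up AWS credentials from its environment/metadata service instead.
dns01:
  route53:
    region: us-east-1
    hostedZoneID: Z2Q2GUP1WG1LHK
```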

# Load environment variables first: source .envrc or direnv allow
kubectl create secret generic route53-secret --from-literal=secret-access-key="${ROUTE53_SECRET_ACCESS_KEY}" --namespace cert-manager
kubectl get secret route53-secret --namespace cert-manager -ojsonpath={.data.secret-access-key} | base64 -d

Create the staging issuer & cert

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager # ignored: ClusterIssuer is cluster-scoped
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: marktalston@gmail.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - selector:
        dnsZones:
          - "markalston.net"
      dns01:
        cnameStrategy: Follow
        route53:
          region: us-east-1
          accessKeyID: AKIAX3HUFB7B4XMBPM3Y
          hostedZoneID: Z2Q2GUP1WG1LHK
          secretAccessKeySecretRef:
            name: route53-secret
            key: secret-access-key
EOF
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: staging.wildcard.markalston.net
  # cert-manager will put the resulting Secret in the same Kubernetes
  # namespace as the Certificate. You should create the certificate in
  # whichever namespace you want to configure a Host. Or use kubed to
  # automate the synchronization of certs across namespaces
  namespace: cert-manager
spec:
  secretName: staging-wildcard-certs
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  commonName: "*.markalston.net"
  dnsNames:
  - "*.markalston.net"
  - "*.tkg.markalston.net"
EOF
kubectl get secrets -n cert-manager
kubectl get secrets letsencrypt-staging -n cert-manager -ojsonpath={.data.'tls\.key'} | base64 -d > staging.key
kubectl get orders staging.wildcard.markalston.net-9rdgm-552899068 -n cert-manager
kubectl get orders staging.wildcard.markalston.net-9rdgm-552899068 -n cert-manager -o jsonpath={.spec.request} | base64 -d > staging-order.csr
kubectl logs -lapp=cert-manager --tail=1000 -n cert-manager

Create the production issuer & cert

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager # ignored: ClusterIssuer is cluster-scoped
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: marktalston@gmail.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector:
        dnsZones:
          - "markalston.net"
      dns01:
        cnameStrategy: Follow
        route53:
          region: us-east-1
          accessKeyID: AKIAX3HUFB7B4XMBPM3Y
          hostedZoneID: Z2Q2GUP1WG1LHK
          secretAccessKeySecretRef:
            name: route53-secret
            key: secret-access-key
EOF
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: prod.wildcard.markalston.net
  # cert-manager will put the resulting Secret in the same Kubernetes
  # namespace as the Certificate. You should create the certificate in
  # whichever namespace you want to configure a Host. Or use kubed to
  # automate the synchronization of certs across namespaces
  namespace: cert-manager
spec:
  secretName: prod-wildcard-certs
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: "*.markalston.net"
  dnsNames:
  - "*.markalston.net"
  - "*.tkg.markalston.net"
EOF
kubectl get secrets -n cert-manager
kubectl get orders -n cert-manager
kubectl logs -lapp=cert-manager --tail=1000 -n cert-manager

kubectl get challenges.acme.cert-manager.io -A

kubectl get secrets -n cert-manager
kubectl get orders prod.wildcard.markalston.net-tkh6l-552899068 -n cert-manager
kubectl get orders prod.wildcard.markalston.net-xxslh-552899068 -n cert-manager -o jsonpath={.spec.request} | base64 -d > prod-order.csr
cat prod-order.csr

kubectl get secret prod-wildcard-certs --namespace cert-manager -ojsonpath={.data.'tls\.crt'} | base64 -d > prod.crt
kubectl get secret prod-wildcard-certs --namespace cert-manager -ojsonpath={.data.'tls\.key'} | base64 -d > prod.key

openssl x509 -in prod.crt -text -noout
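Beyond the full -text dump, a couple of targeted checks are handy. Sketch below generates a throwaway self-signed cert so it is runnable as-is; swap /tmp/demo.crt for prod.crt to inspect the real thing (needs OpenSSL 1.1.1+ for -addext/-ext):

```shell
# Throwaway self-signed cert with the same wildcard SANs as my real one.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=*.markalston.net" \
  -addext "subjectAltName=DNS:*.markalston.net,DNS:*.tkg.markalston.net"
openssl x509 -in /tmp/demo.crt -noout -subject -enddate   # subject + expiry
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName # SAN list
```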
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: httpbin
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        name: httpbin
        ports:
        - containerPort: 8080
          name: http
        command: ["gunicorn"]
        args: ["-b", "0.0.0.0:8080", "httpbin:app"]
      dnsPolicy: ClusterFirst
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: httpbin
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    ingress.kubernetes.io/force-ssl-redirect: "true"
    kubernetes.io/ingress.class: contour
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - secretName: prod-wildcard-certs
    hosts:
    - "httpbin.tkg.markalston.net"
  rules:
  - host: "httpbin.tkg.markalston.net"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpbin
            port:
              number: 8080
EOF
kubectl -n default get pod -l app=httpbin
kubectl describe certificate prod.wildcard.markalston.net -n cert-manager | grep -C3 "Certificate is up to date"
curl -v https://httpbin.tkg.markalston.net

Install Kpack

./scripts/kpack/install.sh
./scripts/kpack/01-create-docker-secret.sh
./scripts/kpack/02-create-service-account.sh
./scripts/kpack/03-create-cluster-store.sh
./scripts/kpack/04-create-cluster-stack.sh
./scripts/kpack/05-create-builder.sh
./scripts/kpack/06-create-image.sh

Install TBS

./scripts/build-service/install.sh
kubectl get secrets -n build-service canonical-registry-secret -o jsonpath={.data.'\.dockerconfigjson'} | base64 -d
kp clusterbuilder list
kp clusterstack status base
kp clusterstack status default
kp clusterstack status full
kp clusterstack status tiny
kp secret create my-registry-creds --registry harbor.markalston.net --registry-user admin --namespace default
kubectl get secrets -n default my-registry-creds -o jsonpath={.data.'\.dockerconfigjson'} | base64 -d
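For reference, what a decoded .dockerconfigjson holds (dummy credentials here, not the real secret): the "auth" field is just base64("username:password").

```shell
# Build a sample dockerconfigjson payload with fake credentials.
auth=$(printf 'admin:changeme' | base64)
printf '{"auths":{"harbor.markalston.net":{"auth":"%s"}}}\n' "$auth"
printf '%s' "$auth" | base64 -d   # prints: admin:changeme
```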

Install Knative

./scripts/knative/install.sh

Install Tekton

kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml

cat <<EOF | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: sentry-pro-psp
spec:
  privileged: true
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sentry-pro-clusterrole
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - sentry-pro-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sentry-pro-clusterrole
  namespace: sentry-pro
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sentry-pro-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
- kind: ServiceAccount # Omit apiGroup
  name: default
  namespace: sentry-pro
EOF

Rotate Harbor Cert

https://cert-manager.io/docs/usage/certificate/#rotation-private-key

Update homelab harbor cert

kubectl get secrets prod-wildcard-certs -n cert-manager -ojsonpath={.data.'tls\.key'} | base64 -d > prod.key
kubectl get secrets prod-wildcard-certs -n cert-manager -ojsonpath={.data.'tls\.crt'} | base64 -d > prod.crt

Grab the first certificate in prod.crt and use it to update the server_cert_key cert_pem, then grab prod.key and use it to update the server_cert_key private_key_pem.

To rotate, all you should really need to do is update the cert_pem, because the private key stays the same unless you gave cert-manager a new private key.
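To pull the first certificate out of prod.crt mechanically, an awk one-liner works. Sketch with a fake two-cert chain so it is runnable as-is (assumption: the chain is PEM with the leaf cert first):

```shell
# Fake chain: leaf first, then an intermediate.
cat > /tmp/chain.pem <<'PEM'
-----BEGIN CERTIFICATE-----
leafdata
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediatedata
-----END CERTIFICATE-----
PEM
# Print only lines while inside the first BEGIN/END block.
awk '/BEGIN CERTIFICATE/{n++} n==1' /tmp/chain.pem
```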

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ./ca.crt

This project is for educational and home lab purposes.