Our primary deployment architecture is serverless. If that’s not possible, deploy apps on Kubernetes, preferably with the Istio service mesh.
MicroK8s on Ubuntu 20.04
MicroK8s is the most practical way to install and develop with Kubernetes locally, and also to deploy Kubernetes on a single-instance VM. Installation reference. We use version 1.18 because, as of July 2020, it is the latest Kubernetes version supported by DigitalOcean; mimicking the production Kubernetes version reduces incompatibility problems.
sudo snap install microk8s --classic --channel=1.18/stable
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
Close and reopen the terminal for the group update to take effect.
microk8s status --wait-ready
Install default addons.
microk8s enable dns dashboard registry storage ingress
Sometimes you may want Knative (includes Istio): microk8s enable knative
To get the Kubernetes config:
mkdir -p ~/.kube
microk8s config | tee ~/.kube/microk8s.config
# BEFORE using any kubectl/helm commands with microk8s, do this:
export KUBECONFIG=~/.kube/microk8s.config
kubectl config use-context microk8s
Enable traffic forwarding:
sudo iptables -P FORWARD ACCEPT
sudo apt-get install iptables-persistent
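iptables-persistent saves the current rules at install time; if the FORWARD policy was changed after installing it, persist the rules again (netfilter-persistent is part of the iptables-persistent package):
sudo netfilter-persistent save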
Install Helm (warning: Helm is used with both microk8s and the production cluster, so set the kubectl context before using it):
sudo snap install helm --classic
“Debug” a pod by exec-ing a bash shell:
sudo microk8s kubectl exec -it $POD_NAME -- /bin/bash
Installing the kubectl and helm CLIs
While it’s possible to use sudo microk8s kubectl and sudo microk8s helm3, you’ll still need to install kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl/) and helm (https://helm.sh/docs/intro/install/) to interact with a remote Kubernetes cluster.
sudo snap install kubectl helm --classic
Creating Helm Charts
Reference: Create your first Helm chart – Bitnami
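As a quick sketch (the chart and release names here are placeholders, not our actual projects), scaffolding and installing a chart with Helm 3 looks like:
helm create my-app                      # scaffolds Chart.yaml, values.yaml, templates/
helm lint my-app                        # catches template errors early
helm install my-app-release ./my-app    # Helm 3 syntax: release name, then chart path
helm upgrade my-app-release ./my-app    # roll out chart/values changes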
Accessing Kubernetes Cluster using kubectl and kubeconfig
After your clusters, users, and contexts are defined in one or more configuration files, you can quickly switch between clusters by using the kubectl config use-context command.
# set clusters
kubectl config set-cluster development --server=https://1.2.3.4
kubectl config set-cluster scratch --server=https://5.6.7.8
# set users (credentials)
kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
kubectl config set-credentials experimenter --username=exp --password=some-password
# set contexts (a combination of cluster + namespace + user)
kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
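With the contexts defined, switching clusters is a single command (dev-frontend is the example context defined above):
kubectl config use-context dev-frontend
kubectl config current-context    # verify which context is active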
Reference: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
Kubernetes Cluster at DigitalOcean
To access the Kubernetes cluster at DigitalOcean, set up kubectl:
sudo snap install doctl
doctl auth init
sudo snap connect doctl:ssh-keys :ssh-keys
sudo snap connect doctl:kube-config
doctl kubernetes cluster kubeconfig save k8s-lovia-sg
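doctl saves the kubeconfig and switches the current context to it. The context is named do-REGION-CLUSTERNAME, so the exact name below is an assumption (it depends on the cluster’s region); verify with:
kubectl config current-context    # e.g. do-sgp1-k8s-lovia-sg
kubectl get nodes                 # should list the DigitalOcean nodes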
Nginx Ingress
ACME SSL Certificate Issuer
We use cert-manager and an ACME ClusterIssuer to automatically issue SSL certificates for our domains. The default solver is http01, but we can also use dns01 (Cloudflare). A sample issuer manifest is sketched below, after the install steps.
Install the jetstack/cert-manager Helm chart:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.2/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v0.14.2
Check services & pods in cert-manager namespace:
kubectl get svc -n cert-manager
kubectl get po -n cert-manager
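Once the pods are running, create the ClusterIssuer. A minimal sketch assuming the http01 solver; the issuer name and email are placeholders, and apiVersion cert-manager.io/v1alpha2 matches cert-manager v0.14.x:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com        # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod      # secret that stores the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx
Apply it with kubectl apply -f cluster-issuer.yaml.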
By adding an Ingress resource annotated with cert-manager.io/cluster-issuer, ingress-shim will automatically create the Certificate.
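For example, a minimal Ingress sketch (the host and service names are placeholders; networking.k8s.io/v1beta1 matches Kubernetes 1.18, and letsencrypt-prod refers to the issuer sketched above):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: lovia-chat
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod   # tells ingress-shim which issuer to use
spec:
  tls:
  - hosts:
    - chat.example.com
    secretName: lovia-chat-tls    # ingress-shim creates a Certificate targeting this secret
  rules:
  - host: chat.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: lovia-chat
          servicePort: 80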
Check certificates:
kubectl get cert -A
If everything is working, the output should look like this:
$ kubectl get cert -A
NAMESPACE   NAME             READY   SECRET           AGE
default     fusionauth-tls   True    fusionauth-tls   27m
default     lovia-chat-tls   True    lovia-chat-tls   28m
Troubleshooting
ingress-shim not generating certificates:
- #287 said it’s because of AWS Access Key ID
- Hendy didn’t know why, but uninstalling the cert-manager Helm chart (including kubectl delete -f cert-manager.crds.yaml) and then reinstalling it works
GitLab.com Integration with DigitalOcean Kubernetes
References:
- https://gitlab.com/help/user/project/clusters/add_remove_clusters.md#add-existing-cluster
- https://www.digitalocean.com/community/questions/how-to-connect-gitlab-project-to-kubernetes-cluster?answer=53477
TL;DR (a command sketch follows this list):
- Create a gitlab-admin serviceaccount.
- Create the gitlab-admin cluster role binding with RBAC authorization.
- Get the gitlab-admin secret you just created in the kube-system namespace, and take the token out. This will be used as your TOKEN.
- Get your project’s namespace’s default token secret and extract the certificate. This will be your CA.
- Get your cluster’s public URL (which could be an IP address). This will be your API URL.
- Your project’s namespace (which has to be unique — and not default) will be your NAMESPACE.
- Now connect your cluster through the GitLab Kubernetes portal.
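A sketch of the steps above as commands, following the GitLab docs (YOUR-NAMESPACE is a placeholder, and the grep/awk extraction assumes a single matching secret):
# Steps 1-2: service account + cluster role binding
kubectl create serviceaccount gitlab-admin -n kube-system
kubectl create clusterrolebinding gitlab-admin --clusterrole=cluster-admin --serviceaccount=kube-system:gitlab-admin
# Step 3: TOKEN (printed in the describe output)
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')
# Step 4: CA, from the default token secret in your project's namespace
kubectl -n YOUR-NAMESPACE get secret $(kubectl -n YOUR-NAMESPACE get secret | grep default-token | awk '{print $1}') -o jsonpath='{.data.ca\.crt}' | base64 --decode
# Step 5: API URL
kubectl cluster-info | grep 'Kubernetes master'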
Using Private GitLab Container Registry
1. Create a GitLab Personal Access Token with only the read_registry permission.
2. Create the secret. This reads your local $HOME/.docker/config.json, so log in to the registry first (see the note after the manifest):
kubectl create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson
3. Use it in the Pod spec (or a Deployment's pod template) with imagePullSecrets:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: registry.gitlab.com/lovia/PROJECT-NAME
  imagePullSecrets:
  - name: regcred
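Note on step 2 above: kubectl reads the credentials that docker login wrote, so authenticate against the GitLab registry first with the token from step 1 (GITLAB_USER is a placeholder for your GitLab username):
docker login registry.gitlab.com -u GITLAB_USER
# paste the read_registry personal access token at the password prompt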