TL;DR Part 2 of making your Kubernetes cluster super awesome: we add two pods that automatically handle public DNS registration and SSL certs for any deployment you choose, reducing deployment complexity and cutting out manual extra tasks!
Part one recap.
In part one we discussed the advantages of the Kubernetes Ingress Controller and configured our cluster to automatically register the public IPs of ingress controllers in AWS Route53 DNS using an annotation.
Go and catch up HERE if you missed it!
TLS / SSL
Now it’s time to add the really fun stuff. We already know which subdomain we want to register with our ingress controller (you did read part one, right?), so we have all the information we need to automatically configure SSL for that domain as well!
There’s something awesome about deploying an app to Kubernetes, browsing to the URL you configured and seeing a happy green BROWSER VALID SSL connection already set up.
Free, Browser Valid SSL Certificates… as long as you automate!
If you haven’t heard of LetsEncrypt, then this blog post is going to give you an extra bonus present. LetsEncrypt is a browser-trusted certificate authority which charges nothing for its certs and is also fully automated!
This means code can request a domain cert, prove to the certificate authority that the server making the request actually owns the domain (by placing specific content on the webserver for the CA to check), and then receive the valid certificate back, all via the LetsEncrypt API.
If you want to know how this all works, visit the LetsEncrypt – How It Works page.
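Loosely, that “specific content” check boils down to the CA fetching a challenge file from a well-known path on the domain being validated. A rough sketch (the token and account key thumbprint here are placeholders issued during a real request, not actual values):

# Hypothetical illustration of an ACME HTTP-01 style check;
# <token> and <account-key-thumbprint> are placeholders.
> curl http://blogdemo.ciscoplatform.com/.well-known/acme-challenge/<token>
<token>.<account-key-thumbprint>

If the CA gets the keyed response it expects, ownership is proven and the cert is issued.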
Without LetsEncrypt, this process would involve manual validation steps, as with most other CAs, and potentially no API for requesting certs at all. We really must thank them for making all this possible.
Using LetsEncrypt with Kubernetes Ingress Controllers
Much like with the automatic DNS problem, Google returns more questions than answers, with bits of different projects and GitHub issues suggesting a number of paths. This blog post aims to distill all of my research and what worked for me.
After testing a few different things, I found that a project called Kube-Lego did exactly what I wanted:
- Supports configuring both GCE Ingress Controllers and NginX ingress controllers with LetsEncrypt Certs (I’m using GCE in this example).
- Supports automatic renewals and the automated proof of ownership needed by LetsEncrypt.
Another reason I liked kube-lego is that it’s standalone. The LetsEncrypt code isn’t embedded in the LoadBalancer (Ingress Controller) code itself, which would have caused me problems:
- I’m using Google’s GCE loadbalancers, so I have no access to their code anyway.
- Even if I was running my own NginX/Caddy/etc. ingress controller pods, if LetsEncrypt were embedded I’d need to write some clustering logic to run more than one instance of them; otherwise they would all race each other to get a cert for the same domain and I’d end up in a mess (and rate limited by the LetsEncrypt API).
KubeLego seemed like the most flexible choice.
Installation is pretty simple, as the documentation at https://github.com/jetstack/kube-lego is much better than that of the dns-controller from Part 1 of this article.
Firstly, we configure a ConfigMap that the kube-lego pod will read its settings from. I’ve saved this as kube-lego-config-map.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-system
data:
  # modify this to specify your address
  lego.email: "[email protected]"
  # configure LetsEncrypt's production API
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"
  # For testing, use their staging API if you wish
  # (generates non browser-trusted certs, no API rate limiting)
  # lego.url: "https://acme-staging.api.letsencrypt.org/directory"
Now we need a deployment manifest for the kube-lego app itself. I’ve saved this as kube-lego.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-lego
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.3
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.email
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.url
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1
Notice our Deployment references our ConfigMap to pull settings for e-mail and API endpoint. Also notice the app exposes port 8080; more on that later!
We can now deploy both the ConfigMap and the app onto our k8s cluster:
kubectl create -f kube-lego-config-map.yaml
kubectl create -f kube-lego.yaml
Voila! We’re running kube-lego on our cluster.
You can view the logs to see what kube-lego is doing. By default it watches for new ingress controllers and takes action with certs if they carry certain annotations, which we’ll cover below.
Also, if the application fails to start for whatever reason, the readiness probe in the deployment manifest above will fail and Kubernetes will mark the pod as unready (a liveness probe would be needed to restart it automatically).
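A quick way to confirm the pod came up healthy; the label selector matches the app: kube-lego label from the deployment manifest above:

> kubectl --namespace=kube-system get pods -l app=kube-lego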
Your pod name will differ for the logs command:
kubectl --namespace=kube-system logs kube-lego-3323932148-c2nci
Putting it all together
Here we are going to create an app deployment for which we want all this magic to happen!
However, there is a reason automatic DNS registration was part one of this blog series: LetsEncrypt validation depends on the requested domain resolving to our k8s cluster. If you haven’t enabled automatic DNS (or put the ingress controller’s public IP in DNS yourself), then LetsEncrypt will never be able to validate ownership of the domain and will therefore never give you a certificate!
It may be worth revisiting part 1 of this series if you haven’t already (it’s good, honest!).
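Before going further, a quick sanity check (using the demo hostname from this series); if this doesn’t return your load balancer’s public IP, the LetsEncrypt validation below will fail:

> dig +short blogdemo.ciscoplatform.com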
App Deployment manifests
If you’re familiar with Kubernetes, then you’ll recognise that the following manifests simply deploy an nginx sample ‘application’ in a new namespace. The differences that enable DNS and SSL are all in the ingress controller definition.
namespace.yaml creates our new namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: blog-demo
nginx.yaml deploys our application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: blog-demo-app
  namespace: blog-demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: blog-demo-app
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
service.yaml is needed to track active backend endpoints for the Ingress Controller (notice it’s of type NodePort; it’s not publicly exposed):
apiVersion: v1
kind: Service
metadata:
  name: blog-demo-svc
  namespace: blog-demo
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: blog-demo-app
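Once everything below is deployed, you can see this endpoint tracking in action; the service should list one endpoint per nginx replica:

> kubectl --namespace=blog-demo get endpoints blog-demo-svc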
Finally, our Ingress Controller, ingress-tls.yaml. The ‘non-standard’ bits which enable our automated magic are the annotations (kubernetes.io/tls-acme: "true" tells kube-lego to manage a cert for this ingress, and dns.alpha.kubernetes.io/external triggers the DNS registration from part one) and the tls section, which names the secret kube-lego will store the certificate in.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog-demo-ingress
  namespace: blog-demo
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
    dns.alpha.kubernetes.io/external: "true"
spec:
  tls:
  - hosts:
    - blogdemo.ciscoplatform.com
    secretName: blog-demo-secret
  rules:
  - host: blogdemo.ciscoplatform.com
    http:
      paths:
      - path: /
        backend:
          serviceName: blog-demo-svc
          servicePort: 80
Let’s deploy these manifests to our Kubernetes cluster and watch the magic happen!
> for i in namespace.yaml nginx.yaml service.yaml ingress-tls.yaml ; do kubectl create -f $i ; done
namespace "blog-demo" created
deployment "blog-demo-app" created
service "blog-demo-svc" created
ingress "blog-demo-ingress" created
Right, now our ingress controller is going to configure a GCE load balancer for us (standard behaviour). This will be allocated a public IP, and our dns-controller will register it against our hostname in Route53:
> kubectl --namespace=blog-demo get ingress
NAME                HOSTS                        ADDRESS          PORTS     AGE
blog-demo-ingress   blogdemo.ciscoplatform.com   22.214.171.124   80, 443   2m
And the new record shows up in our AWS Route53 portal.
While this was happening, kube-lego was also configuring the GCE load balancer to support LetsEncrypt’s ownership checks. Looking at the load balancer configuration in Google’s cloud console, we can see that a specific URL path has been configured to point to the kube-lego app on port 8080.
This allows kube-lego to handle the domain-ownership validation requests that come in from LetsEncrypt when we request a certificate. All other request paths are passed to our actual app.
This will allow the kube-lego process (requesting certs via LetsEncrypt) to succeed:
kubectl --namespace=kube-system logs kube-lego-3323932148-c2nci
# logs shortened for easy reading
time="2017-03-03T15:02:42Z" level=info msg="no cert associated with ingress" context="ingress_tls" name=blog-demo-ingress namespace=blog-demo
time="2017-03-03T15:02:42Z" level=info msg="requesting certificate for blogdemo.ciscoplatform.com" context="ingress_tls" name=blog-demo-ingress namespace=blog-demo
time="2017-03-03T15:03:06Z" level=info msg="authorization successful" context=acme domain=blogdemo.ciscoplatform.com
time="2017-03-03T15:03:07Z" level=info msg="successfully got certificate: domains=[blogdemo.ciscoplatform.com] url=https://acme-v01.api.letsencrypt.org/acme/cert/031473fb894da1fcdbecd61975453abef694" context=acme
time="2017-03-03T15:03:07Z" level=info msg="creating new secret" context=secret name=blog-demo-secret namespace=blog-demo
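As the last log line shows, the certificate lands in the blog-demo-secret secret named by our ingress. A couple of commands to inspect it if you’re curious (note: base64 -d may be base64 -D on macOS):

> kubectl --namespace=blog-demo get secret blog-demo-secret
> kubectl --namespace=blog-demo get secret blog-demo-secret \
    -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -issuer -dates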
As soon as a valid cert is received, kube-lego reconfigures the GCE load balancer for HTTPS as well as HTTP (when the LB is first created, only the HTTP protocol is enabled).
The whole process above takes a couple of minutes to complete (LB getting a public IP, DNS registration, LetsEncrypt checks, getting the cert, configuring the LB with SSL), but then… huzzah! Completely hands-off, publicly available services, protected by valid SSL certs!
Now your developers can deploy applications which are SSL by default, without any extra hassle.
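If you’d rather verify from the command line than the browser, something like the following should show a LetsEncrypt-issued certificate and a 200 response (output trimmed for readability):

> curl -vI https://blogdemo.ciscoplatform.com 2>&1 | grep -E 'issuer|HTTP'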
Appreciate any corrections, comments or feedback; please direct them to @mattdashj on Twitter.
Until next time!