
Additional services in your Kubernetes environment

This document lists all services that you can edit and modify yourself using the login data we sent you for your Kubernetes environment. We do ask that you report any changes you make to these services to us, as they are all included in our automated deployment tools; otherwise your changes could be overwritten during the upgrades we perform on these services.

Nginx Ingress

Nginx Ingress routes HTTP(S) traffic from outside the cluster to your services and pods.

The SRE team manages the Nginx Ingress controller and all related resources in the Nginx Ingress namespace. It is best not to make any changes there; our automation will eventually overwrite them.

By defining an Ingress resource in your own namespace (in addition to your pods and services), you can instruct Nginx Ingress to forward HTTP(S) traffic for a particular domain to your pod. The example below forwards traffic for "example.com" to a service named "example-service" on port 3000.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: example-service
              port:
                number: 3000
  tls:
  - hosts:
      - example.com
    secretName: ingress-tls

The annotation "kubernetes.io/tls-acme" indicates that you wish to use a free Let's Encrypt certificate for the hostnames listed under "tls" at the bottom of the YAML. In that case, the certificate is automatically stored in the specified secret "ingress-tls". If you do not set the tls-acme annotation, you have to create that secret manually yourself, for example when you use certificates from another CA.
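
For certificates from another CA, you can create the TLS secret with kubectl; a minimal sketch, assuming your certificate and key are stored in tls.crt and tls.key and your namespace is your-namespace:

kubectl -n your-namespace create secret tls ingress-tls --cert=tls.crt --key=tls.key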

Cert-Manager

Cert-Manager is responsible for requesting Let's Encrypt certificates.

You rarely address this service directly. Via an annotation in your Ingress resource, you indicate whether you wish to use Let's Encrypt certificates; see the example Ingress resource in the Nginx Ingress section above. The settings of Cert-Manager itself are managed by the SRE team, so it is best not to make any changes there.

When a Let's Encrypt certificate request fails, the cause is usually outside Cert-Manager:

  • Do all A and AAAA DNS records for the domain name point to the k8s cluster? (You can verify this with dig, as sketched after this list.)
  • If you have listed multiple hostnames under "tls" in your ingress resource, they should all point to the cluster. Cert-Manager requests one certificate with all hostnames as Subject Alternative Names; if one of them is not configured correctly, none will work.
  • Is there a CAA DNS record that prohibits the use of Let's Encrypt?
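
A quick way to check these records from your workstation; a sketch assuming the hostname is example.com:

# check that the A and AAAA records point to the cluster
dig +short A example.com
dig +short AAAA example.com
# check for a CAA record that might exclude Let's Encrypt
dig +short CAA example.com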

To get more information about a certificate request, you can inspect some resources via kubectl. An ingress with the tls-acme annotation creates a certificate, which in turn creates a certificate request, then an order, which finally creates a challenge. If you list these resources in your namespace, you can usually see what the problem is via kubectl describe.
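
A minimal sketch of that inspection, assuming your namespace is your-namespace (the resource types are Cert-Manager CRDs):

# list the certificate-related resources in your namespace
kubectl -n your-namespace get certificates,certificaterequests,orders,challenges
# describe a specific challenge to see why validation fails
kubectl -n your-namespace describe challenge <challenge-name>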

Can't find the problem? Feel free to ask us for help!

EFK (Elasticsearch - Fluentbit - Kibana)

The EFK stack is a combination of software that provides insight into the logs of the Kubernetes cluster. The stdout and stderr output of all pods you launch is collected automatically, so to keep track of those logs it is enough to have your application log to stdout. Logs written anywhere else (e.g. to a log file on disk) will not appear in the EFK stack.
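
To confirm that your application actually writes its logs to stdout/stderr, you can check what Kubernetes itself sees; a sketch assuming a deployment named example-app in your-namespace (both names are examples):

kubectl -n your-namespace logs deployment/example-app --tail=20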

The collected logs are kept in Elasticsearch. By default, we delete logs older than 7 days every night. If desired, we can keep logs longer. However, keep in mind that both memory and disk consumption of Elasticsearch increase as more logs are kept in it. If we need to grow the volume of Elasticsearch beyond the contractually permitted usage, there will be an additional cost.

You can visualise the logs via Kibana. It is also possible to create application-specific dashboards as required. That way, you can easily gain insight into what your application (and Kubernetes) is doing at any time.

You can get the URL of the Kibana dashboard by retrieving the ingress:
kubectl -n efk get ingress

Fluentbit-Consult

Given the memory and disk consumption of Elasticsearch, you can also opt for a simpler alternative. In this case, we replace Elasticsearch (and the Kibana dashboards) with a directory of log files. This is harder to search, but a lot cheaper in resource consumption. If you rarely need to search your logs, you can save your cluster's precious resources this way.

We also provide a log consulting pod that gives you access to the logs folder at any time. The usual Linux tools for working with log files (grep, awk, less, ...) are installed there as well.

The log consulting pod is available in the "log-consulting" namespace. You can retrieve its name via kubectl -n log-consulting get pods, then access it via kubectl exec, as sketched below.
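
A minimal sketch, assuming the pod reported by get pods is named log-consulting-0 (yours may differ):

# find the pod name
kubectl -n log-consulting get pods
# open a shell inside the pod
kubectl -n log-consulting exec -it log-consulting-0 -- sh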

The log consulting pod also contains an SSH service, so you can connect with a standard SSH client. This also allows copying log files via scp or sftp. To connect this way, you must first (see the sketch after this list):

  • Add your SSH public key to the config map: kubectl -n log-consulting edit configmap ssh-public-keys
  • Find out the name of the pod via kubectl -n log-consulting get pods
  • Enable port forwarding to that pod via kubectl -n log-consulting port-forward
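
Putting those steps together; a sketch that assumes the pod is named log-consulting-0, that its SSH service listens on port 22, and that the SSH user is called log (the user name and log path below are examples, check with us for your setup):

# forward local port 2222 to the SSH port of the pod
kubectl -n log-consulting port-forward pod/log-consulting-0 2222:22
# in a second terminal: connect, or copy files with scp/sftp
ssh -p 2222 log@localhost
scp -P 2222 log@localhost:/logs/example.log .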

When we enable the log-consulting pod, you can no longer use our EFK stack.

Kubernetes Dashboard

As an alternative to the kubectl command-line tool, we provide a Kubernetes Dashboard. This web application makes many (but not all) of the CLI tool's features available via the browser. That way, you can view short-term metrics and the current status of various components of the cluster. You can also easily query secrets, statuses, logs, and health checks with it. However, it always shows the current status only; for history, please refer to the EFK stack and VictoriaMetrics mentioned elsewhere in this document.

You obtain the URL of the dashboard via kubectl -n kubernetes-dashboard get ingress.

On the login screen, choose "token" and obtain the correct token via:

echo $(kubectl -n kubernetes-dashboard get secret user-admin -o 'jsonpath={.data.token}'|base64 -d)

Harbor

Harbor is a Docker registry with additional features. It supports versioning of your Docker images, so you can quickly roll back to an earlier version of a container image. CVE scanning of your Docker images is also possible. In addition, it includes full support for Helm charts and can therefore also serve as a Helm repository.

You obtain Harbor's URL via kubectl -n harbor get ingress.

On the login screen, you can log in as user admin with the password obtained via:

echo $(kubectl -n harbor get secret harbor-core -o 'jsonpath={.data.HARBOR_ADMIN_PASSWORD}' |base64 --decode)
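
With that URL and password, you can log in from your workstation and push images; a sketch assuming Harbor is reachable at harbor.example.com and a project named my-project already exists (both names are examples):

# log in to Harbor, then tag and push a local image into a project
docker login harbor.example.com
docker tag my-app:1.2.3 harbor.example.com/my-project/my-app:1.2.3
docker push harbor.example.com/my-project/my-app:1.2.3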

A point of attention in Harbor is the disk consumption of the images you upload. Per project, you can set how many images Harbor should keep; the rest is cleaned up every night by the garbage collector. If you do not set this, the disk will fill up quickly, causing errors during deploys.

VictoriaMetrics

VictoriaMetrics (an alternative to Prometheus) collects statistics about Kubernetes and your pods. It consists of several components in the "victoriametrics" namespace, some of which are interesting to look at.

  • Via Grafana, you get a graphical view of the collected metrics. You can also create your own dashboards here as you wish, showing the relevant metrics of a particular service. This is particularly interesting when you send additional data to VictoriaMetrics yourself (see the sketch after this list).
  • Alertmanager sends alerts by e-mail or to a Slack channel. It is also possible to silence alerts, after which they do not appear for a certain period of time. This can be useful if a certain (test) service is temporarily not working.
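
A minimal sketch of sending additional data yourself, assuming your VictoriaMetrics ingestion endpoint is exposed at victoriametrics.example.com (the hostname is an example; VictoriaMetrics accepts Prometheus text format on its /api/v1/import/prometheus endpoint):

# push a custom metric in Prometheus text format
curl -d 'example_jobs_processed_total{job="example-app"} 42' \
  https://victoriametrics.example.com/api/v1/import/prometheus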

The necessary URLs can be obtained via kubectl -n victoriametrics get ingress.

On Grafana, you can log in with username "admin" and the password obtained via:

echo $(kubectl -n victoriametrics get secret victoria-metrics-stack-grafana -o 'jsonpath={.data.admin-password}'|base64 -d)

Access the Grafana dashboards

You can access the Grafana dashboards instance hosted on your Kubernetes cluster, but what is the URL, and what are the login credentials?

  1. Get the URL using kubectl, to access the Grafana web service.
    for domain in $(kubectl get ingress --all-namespaces -l app.kubernetes.io/name=grafana -o jsonpath='{.items[*].spec.rules[*].host}'); do echo $domain; done
    
  2. Get the login credentials for the Grafana service.
    kubectl get secret --all-namespaces -l app.kubernetes.io/name=grafana -o jsonpath='{.items[*].data}' \
     | jq -S '{user: ."admin-user"|@base64d, password: ."admin-password"|@base64d}'
    
  3. Use your preferred web browser to access the Grafana dashboards, using the login credentials fetched earlier.
Note

@base64d might give an error if you are using a jq version < 1.6. You can still fetch the values, but then you must use the base64 --decode command instead (outside the jq command string), as sketched below.
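
A sketch of that alternative for jq < 1.6, decoding the secret values with base64 instead of inside jq:

kubectl get secret --all-namespaces -l app.kubernetes.io/name=grafana \
  -o jsonpath='{.items[*].data.admin-user}' | base64 --decode; echo
kubectl get secret --all-namespaces -l app.kubernetes.io/name=grafana \
  -o jsonpath='{.items[*].data.admin-password}' | base64 --decode; echo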

MinIO operator (optional)

MinIO is an open source implementation of an S3 service.

The MinIO operator we optionally install in the k8s cluster allows you to create your own MinIO services.

Although we take care of installing the MinIO operator, we do not configure the MinIO services themselves; creating them is up to you (a sketch follows below).
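
A minimal sketch of such a MinIO service, defined as a Tenant resource of the MinIO operator. The name, namespace, and sizing below are examples, and the exact fields can differ between operator versions, so compare with the example manifests shipped with the operator version installed in your cluster:

apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: example-minio        # example name
  namespace: your-namespace  # example namespace
spec:
  pools:
    - servers: 4             # number of MinIO server pods
      volumesPerServer: 2    # PVCs per server pod
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi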