At Cloud Practice we aim to encourage adoption of the cloud as a way of working in the IT world. To help with this task, we will be publishing numerous articles: some on good practices and use cases, and others on key services within the cloud.
This time we will talk about Kubewatch.
Kubewatch is a utility developed by Bitnami Labs that enables notifications to be sent to various communication systems.
Supported webhooks are: Slack, HipChat, Mattermost, Flock, generic webhooks and SMTP.
The available images are published in the bitnami/kubewatch GitHub repository. You can download the latest version to test it in your local environment:
$ docker pull bitnami/kubewatch
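To get an interactive shell inside the container and try out the CLI, you can override the entrypoint. This is just a quick local test; it assumes the image includes a bash shell (Bitnami images are usually based on minideb, so it normally does):
$ docker run --rm -it --entrypoint /bin/bash bitnami/kubewatch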
Once inside the container, you can play with the options:
$ kubewatch -h
Kubewatch: A watcher for Kubernetes
kubewatch is a Kubernetes watcher that publishes notifications
to Slack/hipchat/mattermost/flock channels. It watches the cluster
for resource changes and notifies them through webhooks.
supported webhooks:
- slack
- hipchat
- mattermost
- flock
- webhook
- smtp
Usage:
  kubewatch [flags]
  kubewatch [command]

Available Commands:
  config      modify kubewatch configuration
  resource    manage resources to be watched
  version     print version

Flags:
  -h, --help   help for kubewatch

Use "kubewatch [command] --help" for more information about a command.
Kubewatch sends a notification as soon as there is an action on a Kubernetes object, such as its creation, deletion or update. Let's see how to configure it with Slack.
Firstly, create a Slack channel and associate a webhook with it. To do this, go to the Apps section of Slack, search for “Incoming WebHooks” and press “Add to Slack”:
If there is no channel created for this purpose, register a new one:
In this example, the channel will be called “k8s-notifications”. Next, configure the webhook from the “Incoming WebHooks” panel by adding a new configuration, where you select the channel to which notifications should be sent. Once selected, the configuration will return a “Webhook URL” that will be used to configure Kubewatch. Optionally, you can choose the icon that will be shown on the events (“Customize Icon” option) and the name under which they will arrive (“Customize Name” option).
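Before wiring the webhook into Kubewatch, you can optionally check that it works by posting a test message with curl, using Slack's standard Incoming WebHooks JSON payload (replace <your_webhook> with the “Webhook URL” you just obtained):
$ curl -X POST -H 'Content-type: application/json' --data '{"text":"Test from k8s-notifications setup"}' https://hooks.slack.com/services/<your_webhook>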
You are now ready to configure the Kubernetes resources. There are some example manifests, as well as the option of installing via Helm, in the Kubewatch GitHub repository. However, here we will build our own.
First, create a file “kubewatch-configmap.yml” with the ConfigMap that will be used to configure the Kubewatch container:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubewatch
data:
  .kubewatch.yaml: |
    handler:
      webhook:
        url: https://hooks.slack.com/services/<your_webhook>
    resource:
      deployment: true
      replicationcontroller: true
      replicaset: false
      daemonset: true
      services: true
      pod: false
      job: false
      secret: true
      configmap: true
      persistentvolume: true
      namespace: false
You simply need to enable the resource types for which you wish to receive notifications with “true” or disable them with “false”, and set the URL of the Incoming WebHook registered previously.
Now, so that the container can access the Kubernetes resources through its API, register a “kubewatch-service-account.yml” file with a Service Account, a Cluster Role and a Cluster Role Binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubewatch
rules:
- apiGroups: ["*"]
  resources: ["pods", "pods/exec", "replicationcontrollers", "namespaces", "deployments", "deployments/scale", "services", "daemonsets", "secrets", "replicasets", "persistentvolumes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubewatch
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubewatch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubewatch
subjects:
- kind: ServiceAccount
  name: kubewatch
  namespace: default
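Once these manifests have been applied to the cluster (a later step), you can optionally verify that the Service Account has the intended read-only permissions using kubectl's built-in authorization check; the first command should answer “yes” and the second “no”:
$ kubectl auth can-i list deployments --as=system:serviceaccount:default:kubewatch
$ kubectl auth can-i create deployments --as=system:serviceaccount:default:kubewatch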
Finally, create a “kubewatch.yml” file to deploy the application:
apiVersion: v1
kind: Pod
metadata:
  name: kubewatch
  namespace: default
spec:
  serviceAccountName: kubewatch
  containers:
  - image: bitnami/kubewatch:0.0.4
    imagePullPolicy: Always
    name: kubewatch
    envFrom:
    - configMapRef:
        name: kubewatch
    volumeMounts:
    - name: config-volume
      mountPath: /opt/bitnami/kubewatch/.kubewatch.yaml
      subPath: .kubewatch.yaml
  - image: bitnami/kubectl:1.16.3
    args:
    - proxy
    - "-p"
    - "8080"
    name: proxy
    imagePullPolicy: Always
  restartPolicy: Always
  volumes:
  - name: config-volume
    configMap:
      name: kubewatch
      defaultMode: 0755
Note that the value of the “mountPath” key is the path where the configuration from your ConfigMap will be written inside the container (/opt/bitnami/kubewatch/.kubewatch.yaml). You can find more information on how to mount configurations in Kubernetes here. In this example, the application is deployed as a single pod. Obviously, in a production system you would define a Deployment with an appropriate number of replicas to keep it active even if a pod is lost.
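As a reference, here is a minimal sketch of what such a Deployment could look like, reusing the same Service Account, ConfigMap and containers defined above (the labels and replica count are illustrative values, not part of the original manifests):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubewatch
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubewatch
  template:
    metadata:
      labels:
        app: kubewatch
    # Same pod template as in kubewatch.yml, now managed by a Deployment
    spec:
      serviceAccountName: kubewatch
      containers:
      - image: bitnami/kubewatch:0.0.4
        imagePullPolicy: Always
        name: kubewatch
        envFrom:
        - configMapRef:
            name: kubewatch
        volumeMounts:
        - name: config-volume
          mountPath: /opt/bitnami/kubewatch/.kubewatch.yaml
          subPath: .kubewatch.yaml
      - image: bitnami/kubectl:1.16.3
        args:
        - proxy
        - "-p"
        - "8080"
        name: proxy
        imagePullPolicy: Always
      volumes:
      - name: config-volume
        configMap:
          name: kubewatch
          defaultMode: 0755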
Once the manifests are ready, apply them to your cluster:
$ kubectl apply -f kubewatch-configmap.yml -f kubewatch-service-account.yml -f kubewatch.yml
The service will be ready in a few seconds:
$ kubectl get pods |grep -w kubewatch
kubewatch 2/2 Running 0 1m
The Kubewatch pod has two containers: kubewatch itself and a kubectl proxy sidecar (the container named “proxy”), the latter used to connect to the Kubernetes API.
$ kubectl get pod kubewatch -o jsonpath='{.spec.containers[*].name}'
kubewatch proxy
Check through the logs that the two containers have started up correctly and without error messages:
$ kubectl logs kubewatch kubewatch
==> Config file exists...
level=info msg="Starting kubewatch controller" pkg=kubewatch-daemonset
level=info msg="Starting kubewatch controller" pkg=kubewatch-service
level=info msg="Starting kubewatch controller" pkg="kubewatch-replication controller"
level=info msg="Starting kubewatch controller" pkg="kubewatch-persistent volume"
level=info msg="Starting kubewatch controller" pkg=kubewatch-secret
level=info msg="Starting kubewatch controller" pkg=kubewatch-deployment
level=info msg="Starting kubewatch controller" pkg=kubewatch-namespace
...
$ kubectl logs kubewatch proxy
Starting to serve on 127.0.0.1:8080
You can also access the Kubewatch container to test the CLI, view the configuration, and so on:
$ kubectl exec -it kubewatch -c kubewatch /bin/bash
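For instance, once inside you can view the configuration mounted from the ConfigMap (the path comes from the “mountPath” defined earlier) and print the CLI help again:
$ cat /opt/bitnami/kubewatch/.kubewatch.yaml
$ kubewatch -h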
Now it is time to test it. Let's use the creation of a deployment as an example to verify that everything works:
$ kubectl create deployment nginx-testing --image=nginx
$ kubectl logs -f kubewatch kubewatch
level=info msg="Processing update to deployment: default/nginx-testing" pkg=kubewatch-deployment
The logs now alert you that the new event has been detected, so go to your Slack channel to confirm it:
Now you can delete the test deployment:
$ kubectl delete deploy nginx-testing
Obviously, Kubewatch does not replace the basic alerting and monitoring systems that any production orchestrator needs, but it does provide an easy and effective way to extend control over the creation and modification of resources in Kubernetes. In this example we configured Kubewatch across the whole cluster, “spying” on all kinds of events. Some of these are of little use if the platform is run as a service: we would be notified of every pod created, removed or updated by each development team in its own namespace, which is common, legitimate and adds no value. It may be more appropriate to filter by the namespaces for which you wish to receive notifications, such as kube-system, which is where administrative services are generally hosted and where only administrators should have access. In that case, you simply need to specify the namespace in your ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubewatch
data:
  .kubewatch.yaml: |
    namespace: "kube-system"
    handler:
      webhook:
        url: https://hooks.slack.com/services/<your_webhook>
    resource:
      deployment: true
      replicationcontroller: true
      replicaset: false
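Bear in mind that, because the file is mounted with “subPath”, changes to the ConfigMap are not propagated to the running container, so after re-applying the configuration you need to recreate the pod for it to take effect. For example:
$ kubectl apply -f kubewatch-configmap.yml
$ kubectl delete pod kubewatch
$ kubectl apply -f kubewatch.yml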
Another interesting use may be to “listen” to our cluster after a significant configuration change, such as adjusting our autoscaling strategy, integration tools and so on, since it will always notify us of scale-ups and scale-downs, which can be especially useful at first. In short, Kubewatch extends control over our clusters, and we decide how far that control reaches. In later articles we will look at how to manage logs and metrics productively.