K8s Challenge from DigitalOcean

Wang Poh Peng
3 min read · Dec 4, 2021

Done during Quarantine in HK

Having previously used DigitalOcean to host my Shadowsocks server for VPN access to China, and finding it reliable, I decided to give the managed Kubernetes service a try.

First of all, I wanted to keep this challenge short: I just wanted to casually test whether I could deploy something onto this managed service without writing a single line of code.

Hence, I went through the user interface of the managed K8s service, created a cluster of 2 nodes, and got myself going. Although I faced some issues downloading the doctl binary through Homebrew, I managed to download it alright from their GitHub releases page with wget.
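For anyone wanting to do the same, something along these lines should work; the version number below is just an example, so check the doctl releases page for the current one:

wget https://github.com/digitalocean/doctl/releases/download/v1.68.0/doctl-1.68.0-darwin-amd64.tar.gz
tar xf doctl-1.68.0-darwin-amd64.tar.gz
sudo mv doctl /usr/local/bin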

Connecting to the K8s cluster

A Kubernetes connection works by saving a configuration file with credentials that lets you interact with the API. Hence, we just need to follow the instructions laid out by DigitalOcean in their web portal: save the account token into their doctl tool, then run the one-liner that saves the K8s config file.
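In concrete terms, it boils down to two commands; the first prompts for your API token, and the cluster name in the second is a placeholder for whatever you named yours on the portal:

doctl auth init
doctl kubernetes cluster kubeconfig save <your-cluster-name>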

After it is saved, one can simply test the connection to the cluster by running kubectl get nodes to retrieve the node details and make sure the count matches the number of nodes you created on the portal.
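For my two-node cluster, the output looked roughly like this (the node names are auto-generated by DigitalOcean, so the ones below are only illustrative):

NAME               STATUS   ROLES    AGE   VERSION
pool-xxxxx-yyyy1   Ready    <none>   5m    v1.21.x
pool-xxxxx-yyyy2   Ready    <none>   5m    v1.21.x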

Helm & Falco

Alright, it was time for the magic. For the Kubernetes challenge, we are required to deploy any of the CNCF solutions for project completion.

Coming from a background where I used to host services such as SonarQube and Prisma Cloud on Kubernetes and other container services, I decided to give Falco a try.

Lucky for me, Falco has a Helm chart ready to be used. Since I decided to go for the codeless approach, I only used imperative commands to get things done.

Of course, we will need Helm 3 to be installed on our work machine, and note that nothing is required to be installed at the cluster level for Helm to work. It works directly with the K8s API and no longer needs a Tiller running in the cluster.
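One small step before the chart command below will resolve falcosecurity/falco: the Falco chart repository has to be added first. The repo URL here is the standard one from the Falco documentation:

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update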

helm upgrade falco falcosecurity/falco --set falcosidekick.enabled=true --set falcosidekick.webui.enabled=true --set fakeEventGenerator.enabled=true

Once the above runs successfully, pods and services will be created.

NAME                                      READY   STATUS    RESTARTS   AGE
falco-falcosidekick-5ffbf9b997-6x8ll      1/1     Running   0          47m
falco-falcosidekick-5ffbf9b997-f2569      1/1     Running   0          47m
falco-falcosidekick-ui-67749c4fb5-zrmjh   1/1     Running   0          47m
falco-fbfgc                               1/1     Running   0          46m
falco-m59bn                               1/1     Running   0          46m

From what I observed, a falco pod is created on a per-node basis (the chart deploys Falco as a DaemonSet), with falcosidekick running two replicas alongside it. The falcosidekick-ui pod is enabled so that I can have visuals on whether the pods are really performing their function of checking what is going on in the cluster.

You can view the Falco Sidekick UI publicly here: http://68.183.235.149:30336/ui/#/

Oh, I almost forgot to mention: the chart does not expose the Falco Sidekick UI to connections from outside the cluster, as it assumes that your cluster already has internal app access set up.

The simple solution was just to change the service type of falco-falcosidekick-ui from ClusterIP to NodePort.

kubectl edit svc falco-falcosidekick-ui
# Change ClusterIP to NodePort and save the file
# A port number will be assigned and you will be able to connect to it at any node
# Make sure to append /ui to load the site
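If you would rather not open an interactive editor, a kubectl patch one-liner should achieve the same change (assuming the same default service name created by the chart):

kubectl patch svc falco-falcosidekick-ui -p '{"spec": {"type": "NodePort"}}'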

The nodes provided by DigitalOcean are publicly accessible by default, and their traffic rules can be managed on the portal itself.

What’s next

I really did this because I needed something to do while waiting for my quarantine dinner to arrive. I am pretty surprised that it required no code and very little time to get this done. Kudos to DigitalOcean and the CNCF for making deployment of apps crazy fast once you understand the underlying concepts.
