by Ayan Ganguly | last updated on May 31, 2023

This is a two-part blog series and is extremely relevant for engineers or DevOps professionals who intend to run Spinnaker on a single-node K8S cluster for a small-sized trial / proof of concept (POC).

  1. Series-1 (this blog) highlights how to set up a single-node Kubernetes cluster for a Spinnaker trial deployment. 
  2. Series-2 highlights how to install Spinnaker into the K8S cluster.

In general, Spinnaker is one of the most robust software platforms for continuous delivery. However, there are certain limitations around infrastructure and resource (compute power) requirements to install and get Spinnaker going, besides compatibility with existing K8s versions. Primarily, these limitations concern the compute power needed for Spinnaker to support production requirements.

In this blog, I would like to highlight setting up a single-node Kubernetes cluster for a Spinnaker POC. Yes, a single node, because multiple nodes for a POC would waste resources.

Besides this, there are several challenges to keep in mind when scaling deliveries with K8s that may act as roadblocks for your projects.


Challenges in scaling the software deliveries with K8s

  • Dependence on scripts (kubectl apply) for the deployment of apps into dev, test, and production environments
  • Lack of knowledge for deploying into various managed Kubernetes offerings such as EKS, AKS, GKE, or on-prem Kubernetes
  • Lack of safe deployment strategies like canary, blue-green, or rollback
  • Time-consuming processes such as manual verification and data gathering for release approvals
  • Inability to track deployment status in a central place.

Most of the organizations we work with want to overcome the above challenges and speed up their software delivery process. And since Spinnaker is open source and API-based, DevOps teams want to automate deployments using pipelines.

Setting up single-node clusters for a Spinnaker trial: We can test simple features with a single-node k8s cluster that we can quickly set up on our VMs or standalone systems.

After POC tests are done, one can tear down the nodes and free the resources. For the installation of Kubernetes, I have considered two environments: Ubuntu and RedHat Linux.

Let us start with setting up Kubernetes on Ubuntu.

Steps to Set up Kubernetes in Ubuntu

Step-1: Update existing packages before Docker installation.

Docker is a prerequisite for the k8s installation. The steps for installing it are listed below.

We need to update our existing packages:

$ sudo apt update

We will also install a few prerequisite packages that let apt fetch packages over HTTPS:

$ sudo apt install apt-transport-https ca-certificates curl software-properties-common

We will add the GPG key for the official Docker repository to our system:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Then, we will add the Docker repository to APT sources:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

Next, we will update the package database with the packages from the newly added repo:

$ sudo apt update

Note: Please make sure you do the installation from the Docker repo and not the default Ubuntu repo:

$ apt-cache policy docker-ce

You will see output like the sample below; it will list all the versions of Docker that are available and show that the candidate comes from the Docker repository.
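
For reference, the output looks roughly like the snippet below (purely illustrative; the exact versions depend on when you run the command):

docker-ce:
  Installed: (none)
  Candidate: 5:20.10.7~3-0~ubuntu-bionic
  Version table:
     5:20.10.7~3-0~ubuntu-bionic 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages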

Step-2: Install Docker

$ sudo apt install docker-ce

To check whether Docker is installed and running, you can run the below command:

$ sudo systemctl status docker
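
If Docker was installed correctly, the status output should show the service as active (running); a trimmed, illustrative snippet:

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running)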
				
			

Step-3: Install single node K8s cluster

We will update and upgrade the package list as usual:

$ sudo apt-get update
$ sudo apt-get upgrade

We will add the k8s package signing key by using the following command (the repository entry itself is added in the next step):

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Note: We will need to run the below commands as the root user. We will add the k8s repository by creating a k8s repository source list file:

$ touch /etc/apt/sources.list.d/kubernetes.list

We will open the above file using vi or any preferred editor:

$ vi /etc/apt/sources.list.d/kubernetes.list

And add the below line:

deb http://apt.kubernetes.io/ kubernetes-xenial main

We will update the package list and install the packages to run k8s:

$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
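
(Optional) To confirm the tooling installed correctly, you can check the versions that were pulled in; the exact versions will differ depending on when you run this:

$ kubeadm version -o short
$ kubectl version --client
$ kubelet --version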
				
			

Next, we will need to initiate a pod network; k8s pods require a pod network to communicate with one another. There are several pod networks that can be used; you can read more about pod networks here. For our example, we are using Flannel.

Step-4: Initiate Pod Network using Flannel

We will need to pass bridged IPv4 traffic to the iptables chains; this is a requirement for some CNI plugins to work. Run the below command:

$ sysctl net.bridge.bridge-nf-call-iptables=1
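
Note that this sysctl setting does not persist across reboots. As an optional aside (not part of the original steps, and assuming the br_netfilter module is loaded), you can persist it like this:

$ echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee /etc/sysctl.d/k8s.conf   # optional: persist the setting
$ sudo sysctl --system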
				
			

We will have to pass the pod network CIDR and initialize the cluster using kubeadm, by running the below command:

$ kubeadm init --pod-network-cidr=10.244.0.0/16

Once you run the above command as root, you will see output confirming that the control plane initialized successfully, along with the follow-up commands to configure kubectl access. If you run the command as a non-root user instead, you will see a permissions error telling you to run it as root.
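
The tail end of a successful kubeadm init run looks roughly like this (illustrative; <master-ip>, <token>, and <hash> stand in for the values kubeadm prints on your system):

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>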

We will run the commands shown in that output as a non-root user:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, we will apply the Flannel manifest to the cluster, by running the below command:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

Applying the manifest prints a line for each resource it creates. You may also see some warnings; they can be avoided by making the suggested changes. These warnings were specific to my system, and you may not get them at all. Also note, the warnings will not prevent the pods from coming up and running.

We will check the pods by running the below command:

$ kubectl get pods --all-namespaces

We will see output somewhat like the sample below.
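
An illustrative example (pod names, counts, and ages will differ on your cluster):

NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-9x2kq          1/1     Running   0          3m
kube-system   coredns-66bff467f8-qz7tr          1/1     Running   0          3m
kube-system   etcd-master                       1/1     Running   0          3m
kube-system   kube-apiserver-master             1/1     Running   0          3m
kube-system   kube-controller-manager-master    1/1     Running   0          3m
kube-system   kube-flannel-ds-amd64-x7k2p       1/1     Running   0          2m
kube-system   kube-proxy-m8zxv                  1/1     Running   0          3m
kube-system   kube-scheduler-master             1/1     Running   0          3m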

We will also confirm that this is a single-node k8s cluster. The node in this case will be a master node by default. Use the below command to check the details of the node:

$ kubectl get nodes
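
On a healthy single-node cluster the output looks roughly like this (the node name and version are illustrative):

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   5m    v1.18.3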
				
			

Note: At times the node status may show as NotReady, or workloads may remain unscheduled, because by default the cluster will not schedule pods on the master node for security reasons. In that case, we have to run the below command to remove the master taint from the node, which usually fixes the issue:

$ kubectl taint nodes --all node-role.kubernetes.io/master-

Let’s move on to our next chapter: setting up Kubernetes on RedHat Linux.

Steps to Set up Kubernetes on RedHat:

The prerequisites are:

  • Docker has to be installed.
  • SELinux has to be set to disabled.
  • Firewalld and iptables have to be configured.

After installing Docker, follow the below steps:

Step-1: Disable SELinux

# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
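
The sed command above only edits the config file, which takes effect after a reboot. As an optional extra (not part of the original steps), you can also switch SELinux to permissive mode for the current session:

# setenforce 0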
				
			

Step-2: Configure Firewall

# firewall-cmd --permanent --add-port=6443/tcp
# firewall-cmd --permanent --add-port=2379-2380/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=10251/tcp
# firewall-cmd --permanent --add-port=10252/tcp
# firewall-cmd --permanent --add-port=10255/tcp
# firewall-cmd --reload

Step-3: Configure IP tables

# modprobe br_netfilter
# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Once the above-mentioned step is done, we will create a repo.

Step-4: Create a repo

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
>       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF

You can use the above method or use vi to create the repo.

The next step is to install kubeadm.

Step-5: Install and Initialize Kubeadm

# yum install kubeadm -y

Once the above step is complete, we will restart docker and kubelet and enable them as well:

# systemctl restart docker && systemctl enable docker
# systemctl restart kubelet && systemctl enable kubelet

We will initialize kubeadm next:

# kubeadm init

Once the initialization is successful, we will get a prompt similar to the one we saw during our installation on Ubuntu, asking us to run the below commands as a regular user:

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

Now if we check the k8s node, we might find it in a NotReady state. We can fix this by initiating a pod network; and if the master is still unschedulable after the pod network is up, we can remove the taint from the node, using the same steps shared above for the k8s installation on Ubuntu.

# kubectl get nodes

Step-6: Initiate Pod Network using Weave

To deploy the pod network, run the below command (in this case I have used Weave):

# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

This will get your node up and running.
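
As a quick sanity check (same as on Ubuntu), you can confirm that the Weave pod and the rest of the system pods are running and that the node reports Ready:

# kubectl get pods --all-namespaces
# kubectl get nodes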

Further notes

If you notice, once kubeadm is initialized, the steps are almost the same for both environments (Ubuntu and RedHat); it is only the prerequisites that have some significant differences. Also, the links shared for pod network initialization keep getting updated; in case you face issues using Flannel or Weave, use a different pod network tool or use the latest versions of Flannel/Weave.

A quick point to remember: if you are starting from scratch, the Docker and Kubernetes versions that get installed will always be the latest by default.

However, if you already have Docker installed and are setting up your Kubernetes cluster directly on top of it, then depending on the Docker version installed, you may start getting warnings or errors telling you either that the Docker version needs updating, or which Kubernetes version is compatible with your current Docker version and should be installed instead.

Even though it’s a single-node cluster, we can still deploy our application’s microservices and test them, as long as those apps aren’t too heavy to deploy. In the next blog, we will see a test case where we deploy Spinnaker in a single-node k8s cluster.
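
For example, a quick smoke test of the new cluster could look like this (hello-nginx is just an arbitrary name and nginx an arbitrary sample image; nothing Spinnaker-specific yet):

$ kubectl create deployment hello-nginx --image=nginx   # "hello-nginx" and the nginx image are arbitrary choices for this test
$ kubectl expose deployment hello-nginx --port=80 --type=NodePort
$ kubectl get pods,svc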

If you are already using Spinnaker, you may need a dashboard and AI-based verification to practice advanced deployment strategies.

OpsMx offers Intelligent Software Delivery (ISD) for Spinnaker to accelerate approvals, verification, and compliance checks by analyzing data generated across Spinnaker pipelines and the software delivery chain.  ISD for Spinnaker is available as SaaS, managed service, or on-premises. 

About OpsMx

Founded with the vision of “delivering software without human intervention,” OpsMx enables customers to transform and automate their software delivery processes. OpsMx builds on open-source Spinnaker and Argo with services and software that help DevOps teams SHIP BETTER SOFTWARE FASTER.

Ayan Ganguly

Ayan is a DevOps Engineer at OpsMx. He helps platform teams of large clients across US and UK regions implement Continuous Delivery solutions. He has nearly 10 years of experience in IT maintenance, support, and operations. He is an avid book reader, and in his spare time, he likes to trek mountains.
