Deploy your first Kubernetes cluster with Terraform and manage it with Rancher

Stefano Cucchiella

In the era of DevOps and microservices, Kubernetes plays an important role in the IaaS ecosystem, enabling flexibility and simplifying the implementation of an application's underlying platform.
However, this is only true to a certain extent: you also need a wide range of tools that let you control, monitor and scale your infrastructure according to your application's needs.

In this guide I will describe how to create a basic Kubernetes cluster in City Cloud using Terraform and the Rancher RKE provider, and how to import the newly created cluster into Rancher.

Rancher is a Kubernetes cluster manager. It can be installed into a Kubernetes cluster, which itself can be provisioned by RKE (Rancher Kubernetes Engine) or, within Terraform, by the RKE community provider.

Note. Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It enables users to define and provision a datacenter infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. (Source)

Prerequisites

You need a City Cloud account.

*New user? Get $100 worth of usage for free!

Overview

In this example, we will follow the steps below:

Step 1 – Create the Terraform configuration
Step 2 – Source your OpenStack project RC-file
Step 3 – Apply the configuration
Step 4 – Verify the cluster
Step 5 – Access the Kubernetes Dashboard
Step 6 – Access the Rancher UI
Step 7 – Import your cluster nodes

Step 1 – Create the Terraform configuration

In this step we will create the Terraform configuration to deploy our nodes and install the Rancher Server.

We will create the following VMs:

– 1 VM for the Rancher Server

– 3 VMs for Master (etcd + control_plane) and Worker nodes

each with 4 vCPU, 4 GB of RAM and 50 GB of disk.

Also, we will use RancherOS, the smallest and easiest way to run Docker, even in production. 

The Rancher RKE project folder containing the Terraform configuration is available in our GitHub repository.
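To give an idea of what the configuration looks like, below is a minimal sketch of a node definition and the RKE cluster resource. All names, images, flavors and paths here are illustrative placeholders, not the values from the repository; refer to the GitHub repository for the complete, working configuration.

# Sketch of one cluster node; the real configuration creates three.
resource "openstack_compute_instance_v2" "node" {
  name        = "rke-node-1"
  image_name  = "rancheros"       # assumed RancherOS image name
  flavor_name = "4C-4GB-50GB"     # placeholder flavor: 4 vCPU, 4 GB RAM, 50 GB disk
  key_pair    = "my-keypair"      # placeholder SSH key pair
}

# The RKE community provider installs Kubernetes on the node(s) above.
resource "rke_cluster" "cluster" {
  nodes {
    address = "${openstack_compute_instance_v2.node.access_ip_v4}"
    user    = "rancher"           # default user on RancherOS
    role    = ["controlplane", "etcd", "worker"]
    ssh_key = "${file("~/.ssh/id_rsa")}"   # placeholder private key path
  }
}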

Step 2 – Source your OpenStack project RC-file

Download your OpenStack project RC-file from the control panel (How-to?).

OS_REGION_NAME=***
OS_USER_DOMAIN_NAME=***
OS_PROJECT_NAME=***
OS_AUTH_VERSION=***
OS_IDENTITY_API_VERSION=***
OS_PASSWORD=***
OS_AUTH_URL=***
OS_USERNAME=***
OS_TENANT_NAME=***
OS_PROJECT_DOMAIN_NAME=***


Source the file with `source openstack.rc`

$ source openstack.rc
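
To verify that the variables are now set in your current shell, you can list them:

$ env | grep OS_
OS_USERNAME=***
OS_AUTH_URL=***
...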


Terraform will automatically read and use the environment variables when needed.

More info about how Terraform uses environment variables can be found in the Terraform documentation.
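
In short, the OpenStack provider falls back to the standard OS_* environment variables for any argument that is not set explicitly, so the provider block in the configuration can stay almost empty. A minimal sketch:

# Credentials and endpoints are taken from the OS_* environment
# variables sourced above; nothing sensitive is hard-coded here.
provider "openstack" {}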

Step 3 – Apply the configuration

Once you are ready with the configuration, it’s time to initialise Terraform and apply the configuration.

Initialise Terraform in the same directory where the configuration files are stored by running `terraform init`:

$ terraform init
...
* provider.local: version = "~> 1.4"
* provider.null: version = "~> 2.1"
* provider.openstack: version = "~> 1.24"
* provider.rke: version = "~> 0.14"
* provider.template: version = "~> 2.1"
...
Terraform has been successfully initialized!

We can now apply the configuration with `terraform apply`:

$ terraform apply
...
rke_cluster.cluster: Creation complete after 3m11s (ID: node1.brotandgames.com)
...
local_file.kube_cluster_yaml: Creation complete after 0s (ID: 4bd2da6f5c62317e16392c2a6b680f96f41bb2dc)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.


Terraform generates the terraform.tfstate file, which it uses to store and maintain the state of your infrastructure, as well as kube_config_cluster.yml, which is used to connect to the Kubernetes cluster.

Note. terraform.tfstate and kube_config_cluster.yml can contain sensitive information.
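
For reference, the kubeconfig is written to disk with a local_file resource along these lines (a sketch based on the resource name in the apply output above; exact attribute names may differ in the repository):

# Persist the kubeconfig generated by the RKE provider.
# Treat this file as a secret: it grants full access to the cluster.
resource "local_file" "kube_cluster_yaml" {
  filename = "${path.root}/kube_config_cluster.yml"
  content  = "${rke_cluster.cluster.kube_config_yaml}"
}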

Step 4 – Verify the cluster

Now that the configuration is successfully applied, use the following commands to check connectivity:

$ kubectl --kubeconfig kube_config_cluster.yml get nodes     
NAME             STATUS   ROLES                      AGE   VERSION
86.107.243.178   Ready    controlplane,etcd,worker   34m   v1.14.6
86.107.243.193   Ready    controlplane,etcd,worker   34m   v1.14.6
86.107.243.20    Ready    controlplane,etcd,worker   34m   v1.14.6

and the status of your cluster:

$ kubectl --kubeconfig=kube_config_cluster.yml get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-5954bd5d8c-4jlmw     1/1     Running     0          33m
ingress-nginx   nginx-ingress-controller-bng8n            1/1     Running     0          33m
ingress-nginx   nginx-ingress-controller-m74fj            1/1     Running     0          33m
ingress-nginx   nginx-ingress-controller-s2q6l            1/1     Running     0          33m
kube-system     canal-9mjrc                               2/2     Running     0          34m
kube-system     canal-c2flq                               2/2     Running     0          34m
kube-system     canal-m49l9                               2/2     Running     0          34m
kube-system     coredns-autoscaler-5d5d49b8ff-44jdl       1/1     Running     0          33m
kube-system     coredns-bdffbc666-pxsk7                   1/1     Running     0          33m
kube-system     metrics-server-7f6bd4c888-wwrwc           1/1     Running     0          33m
kube-system     rke-coredns-addon-deploy-job-htbhs        0/1     Completed   0          33m
kube-system     rke-ingress-controller-deploy-job-p2442   0/1     Completed   0          33m
kube-system     rke-metrics-addon-deploy-job-cxcxr        0/1     Completed   0          33m
kube-system     rke-network-plugin-deploy-job-dbk5z       0/1     Completed   0          34m


Step 5 – Access the Kubernetes Dashboard

Before accessing the Kubernetes Dashboard, you need a token to log in.

Note. To find out more about how to configure and use Bearer Tokens, please refer to the Kubernetes Authentication section. 
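
The token-generation command below assumes that the Kubernetes Dashboard is installed and that an admin-user service account with cluster-admin rights exists in the kube-system namespace. If your cluster does not have such an account, a minimal (test-only, deliberately permissive) way to create it is:

$ kubectl --kubeconfig kube_config_cluster.yml -n kube-system create serviceaccount admin-user
$ kubectl --kubeconfig kube_config_cluster.yml create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user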

Generate your token using the following command:

$ kubectl --kubeconfig kube_config_cluster.yml -n kube-system describe secret $(kubectl --kubeconfig kube_config_cluster.yml -n kube-system get secret | grep admin-user | awk '{print $1}') | grep ^token: | awk '{ print $2 }'
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5l...


Set up the kubectl proxy using:

$ kubectl --kubeconfig kube_config_cluster.yml proxy
Starting to serve on 127.0.0.1:8001


and log in at:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

using the token generated in the previous step.

You should now be able to access the Kubernetes Dashboard.

Step 6 – Access the Rancher UI

Open the link printed at the end of the terraform apply output, using "admin123" as the password.

Apply complete! Resources: 24 added, 0 changed, 0 destroyed.

Outputs:

Rancher_Server_IP = https:// _ . _ . _ . _ 


As this is just an example, no real certificates have been used, so your browser will warn that the connection is untrusted. We recommend using a less strict browser, such as Firefox or Safari (Mac), and accepting the certificate warning.

Step 7 – Import your cluster nodes

Once in the dashboard, add a new cluster by selecting ‘⚙️ From existing nodes (Custom)’.

Enter your cluster name and, as Cloud Provider, select ‘Custom’. Press Next.

You will now be prompted to select the roles you want for your nodes.

For simplicity, we will tick all 3 node roles: etcd, Control Plane and Worker.

Log in to the first VM using one of the floating IPs shown in Step 4:

ssh rancher@<vm_floating_ip>

and run the command shown in the Rancher UI, similar to the one below:

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.4 --server https://86.107.243.222 --token 555mkhwx9lzvn*8w4xmxd --ca-checksum 43da36621b2*9279 --etcd --controlplane --worker

Repeat this step for the other 2 VMs.

Rancher will then start importing your 3 VMs into your newly created cluster, one at a time.

A notification bar will also be displayed in the UI while the nodes register.

Once done, you will be able to see all the resources allocated to your cluster, and you can start deploying applications on top of it using Rancher as well.

🎉 Congratulations!

You have just created your first Kubernetes cluster and imported it into Rancher, one of the most complete open-source Kubernetes managers.

Happy clustering!