This is a first attempt at building a clustered/cloud application platform on virtual machines running on bare metal.
The host machine can be any Linux distribution that supports KVM; in this case I used Linux Mint 18 as the KVM host. You can use multiple physical machines with oVirt or another virtualization technology.
Debian Linux was installed on the virtual machines (VMs).
Install Qemu/KVM
Run the following commands on the host machine. They update and upgrade the OS and install the virtualization packages.
sudo apt update
sudo apt -y upgrade
sudo apt install -y -o 'apt::install-recommends=true' \
  qemu-kvm libvirt0 virt-manager libguestfs-tools
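You can check that the host CPU supports hardware virtualization; a non-zero count means VT-x/AMD-V is available:
egrep -c '(vmx|svm)' /proc/cpuinfo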
Add your user to the libvirt group to control the VMs.
sudo gpasswd -a <username> libvirt
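Log out and back in for the group change to take effect, then verify the membership and that libvirt answers:
id -nG | grep -w libvirt
virsh -c qemu:///system list --all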
Create at least 6 VMs, each with a minimum of 2 CPUs, 2 GB of RAM, and 20 GB of disk space. I named them:
– k8s-mst-1
– k8s-inf-1
– k8s-inf-2
– k8s-inf-3
– k8s-wrk-1
– k8s-wrk-2
I installed Debian Linux on all VMs. At install time, choose LVM so you can easily extend the disk space later.
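With LVM, growing a guest's disk later takes two steps: enlarge the virtual disk on the host, then extend the logical volume and filesystem in the guest. A sketch, assuming an illustrative disk path and the Debian installer's default volume group name:
# On the host, with the VM shut down:
sudo qemu-img resize /var/lib/libvirt/images/k8s-wrk-1.qcow2 +10G
# In the guest, after growing the underlying partition:
sudo lvextend -r -L +10G /dev/debian-vg/root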
It is highly recommended to assign fixed IP addresses to your VMs, or at least to the master VM.
I recommend selecting a bridged network via the host's interface, so all VMs get an IP address from the same network as the host (for example 192.168.143.0/24).
In this case you have to configure the fixed DHCP leases in your router or DHCP server. For example:
– k8s-mst-1: 192.168.143.180
– k8s-wrk-1: …181
– k8s-wrk-2: …182
– k8s-inf-1: …185
– k8s-inf-2: …186
– k8s-inf-3: …187
If you want a NAT network within the virtualization environment, you have to use a different subnet, for example 192.168.122.0/24. In this case you have to handle the routing/port forwarding into the internal network yourself. The VM IP address scheme can stay the same:
– k8s-mst-1: 192.168.122.180
– k8s-wrk-1: …181
– k8s-wrk-2: …182
– k8s-inf-1: …185
– k8s-inf-2: …186
– k8s-inf-3: …187
You can also set the static IP addresses in KVM itself:
First, find out the MAC addresses of the VMs you want to assign static IP addresses to:
virsh dumpxml $VM_NAME | grep 'mac address'
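The output contains a line like this for each network interface (your MAC values will differ):
<mac address='52:54:00:6c:3c:01'/>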
Then edit the network:
virsh net-list
virsh net-edit $NETWORK_NAME   # probably "default"
Find the <dhcp> section, restrict the dynamic range, and add host entries for your VMs:
<dhcp>
  <range start='192.168.122.180' end='192.168.122.190'/>
  <host mac='52:54:00:6c:3c:01' name='k8s-mst-1' ip='192.168.122.180'/>
  <host mac='52:54:00:6c:3c:02' name='k8s-wrk-1' ip='192.168.122.181'/>
  <host mac='52:54:00:6c:3c:03' name='k8s-wrk-2' ip='192.168.122.182'/>
  ...
</dhcp>
Then restart the network.
virsh net-destroy $NETWORK_NAME
virsh net-start $NETWORK_NAME
… and restart the VM’s DHCP client.
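On a Debian guest this can be done like so (ens3 is an assumed interface name, check yours with ip a), or simply reboot the VM:
sudo dhclient -r ens3 && sudo dhclient ens3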
If that still doesn’t work, you might have to:
– stop the libvirtd service
– kill any dnsmasq processes that are still alive
– start the libvirtd service
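Roughly like this (pkill kills every dnsmasq instance, which is fine on a dedicated virtualization host):
sudo systemctl stop libvirtd
sudo pkill dnsmasq
sudo systemctl start libvirtd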
Docker install
At this point we have the base OS installed on every VM, and the next step is to install Docker. You have to execute these commands on all VMs as root.
Disable swap, because the kubelet refuses to run with swap enabled (see the one-liner after this list):
- edit /etc/fstab, remove or comment out the swap line(s)
- swapoff -a
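A possible one-liner for the fstab edit, assuming the swap entry contains the word "swap" surrounded by whitespace (double-check the file afterwards):
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
swapoff -a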
Run the following commands as root to install Docker. The VMs run Debian, so we use Docker's Debian repository (on Debian, add-apt-repository is provided by the software-properties-common package):
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce && apt-mark hold docker-ce
Check the installation:
docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        e8ff056
 Built:             Fri Apr 12 00:34:27 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.5
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       e8ff056
  Built:            Fri Apr 12 00:27:37 2019
  OS/Arch:          linux/amd64
  Experimental:     false
Enable the service to start at boot:
systemctl enable docker.service
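You can verify that Docker can actually run containers; this pulls the small hello-world test image from Docker Hub:
docker run --rm hello-world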
Configure Docker to use systemd as the cgroup driver:
# Set up the daemon configuration.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
Reload the systemd daemon and restart the Docker service:
systemctl daemon-reload && systemctl restart docker
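Check that the new cgroup driver is active (it should report "Cgroup Driver: systemd"):
docker info | grep -i cgroup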
Kubernetes install
First, we have to add the apt key of the package repository:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Check your Linux distribution version and which (nearest) release is available at https://packages.cloud.google.com/apt/dists:
Distribution     | Codename
Debian 6         | squeeze
Debian 7         | wheezy
Debian 8         | jessie
Debian 9         | stretch
Ubuntu 10.04 LTS | lucid
Ubuntu 12.04 LTS | precise
Ubuntu 14.04 LTS | trusty
Ubuntu 16.04 LTS | xenial
Ubuntu 16.10     | yakkety
Ubuntu 16.04 LTS is xenial, so we need the xenial deb packages to install the current Kubernetes; these packages work on Debian as well. If you cannot decide, xenial is generally the recommended choice. Execute the following lines as root:
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
Install the Kubernetes tools:
apt-get update && apt-get install -y kubelet kubeadm kubectl && apt-mark hold kubelet kubeadm kubectl
Check the Kubeadm version:
kubeadm version
We use flannel, one of the simplest network stacks. For flannel you have to pass the proper pod network configuration at init time. Execute the following command only on the “master” node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Copy the config to the user who will administer the Kubernetes cluster. Execute these commands as that (normal) user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
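At this point you can already talk to the API server. The master will show up as NotReady until the pod network is installed (the output here is illustrative; the version depends on what apt installed):
kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
k8s-mst-1   NotReady   master   1m    v1.14.1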
Execute this command only on the worker (and inf) nodes to join them to the cluster. (The exact command line is printed by the “kubeadm init” command.)
kubeadm join 192.168.122.180:6443 --token <token> \
--discovery-token-ca-cert-hash <sha256 hash>
If you later want to join a new node to the cluster, you can print the current join command with a fresh token by running the following on the master node:
kubeadm token create --print-join-command
For flannel the following setting is needed on all nodes:
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
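If the key does not exist yet, load the br_netfilter kernel module first (modprobe br_netfilter). You can verify that the setting took effect:
sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1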
For the internal cluster network we need a layer 3 network fabric. Install flannel in the cluster by running this on the master node only, as the normal/kubernetes user:
kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
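After a minute or so the flannel pods should be running in the kube-system namespace and all joined nodes should turn Ready:
kubectl -n kube-system get pods | grep flannel
kubectl get nodes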
See more info:
– https://www.hiroom2.com/2018/08/06/linuxmint-19-kvm-en/
– https://serverfault.com/questions/627238/kvm-libvirt-how-to-configure-static-guest-ip-addresses-on-the-virtualisation-ho