Kubernetes: kubeadm + centos + pfsense + flannel + metallb

Requirements

  • CentOS 8 Stream
  • pfsense 2.6.0
  • Container Runtime: CRI-O
  • MetalLB 0.13.7
  • Flannel 0.21.0

Preparation

Perform the "Preparing Container Runtime" steps (and the rest of this section) on each node you are planning to include in your cluster.

Preparing Container Runtime

Follow installation instruction from their website: https://github.com/cri-o/cri-o/blob/main/install.md#readme

Afterwards, start and enable its daemon:

systemctl start crio
systemctl enable crio
systemctl status crio

Disable Swap

swapoff -a

Then remove (or comment out) the swap entry in /etc/fstab so swap stays disabled across reboots.
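
A quick way to comment out the swap entry (a sketch; assumes a standard /etc/fstab with the swap line uncommented, and keeps a backup):

```shell
# Comment out every fstab line that mounts swap, keeping a backup at /etc/fstab.bak
sed -i.bak '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
```

Verify with `grep swap /etc/fstab` — the line should now start with `#`.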

Disable firewalld

Depending on your setup, you might need this. 

systemctl stop firewalld
systemctl disable firewalld
systemctl mask --now firewalld
systemctl status firewalld

Enable br_netfilter

modprobe br_netfilter

Add it to the modules loaded at boot:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

Verify:

cat /etc/modules-load.d/br_netfilter.conf
br_netfilter
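
Loading br_netfilter alone is not enough — kubeadm's preflight checks also expect bridged traffic to be visible to iptables. A sketch of the usual sysctl drop-in (the file name is arbitrary):

```shell
# Make bridged traffic pass through iptables/ip6tables
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
```

Apply without a reboot via `sysctl --system`.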

Install kubeadm and tools
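
The install step itself is missing above; a sketch assuming the community-owned pkgs.k8s.io RPM repository (the older packages.cloud.google.com repo has since been deprecated — adjust v1.26 to the minor release you want):

```shell
# Kubernetes RPM repo, v1.26 line (directory normally exists on CentOS;
# created defensively here)
mkdir -p /etc/yum.repos.d
cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.26/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.26/rpm/repodata/repomd.xml.key
EOF
```

Then install and enable the kubelet: `dnf install -y kubeadm kubelet kubectl` followed by `systemctl enable --now kubelet`.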


Then, lock the versions so an unattended dnf update cannot move them:

dnf install 'dnf-command(versionlock)'
dnf versionlock add kubeadm
dnf versionlock add kubelet
dnf versionlock add kubectl

dnf versionlock list
...
kubeadm-0:1.26.1-0.*
kubelet-0:1.26.1-0.*
kubectl-0:1.26.1-0.*

Enable ip forwarding

sysctl -w net.ipv4.ip_forward=1

Make it persistent by adding it to /etc/sysctl.conf, then verify:

grep ip_forward /etc/sysctl.conf
net.ipv4.ip_forward = 1



Installing the first control-plane node

Create a file called kubeadm-config.yaml with the following content.

kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
localAPIEndpoint:
  # node's IP
  advertiseAddress: 192.168.2.30
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: stable
# loadbalancer's dns name
controlPlaneEndpoint: k8.example.com
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/16
  # this must match Flannel's Network setting
  podSubnet: 10.244.0.0/16
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

Modify the config file as you see fit.

Then, as root, initialize the cluster: kubeadm init --config kubeadm-config.yaml

Make sure everything is running correctly:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get pod -A
kubectl get node -o wide

The CoreDNS pods will remain in Pending until a CNI plugin is installed. See the next section.

Flannel Networking


Download kube-flannel.yml from the Flannel repository (https://github.com/flannel-io/flannel) for the release you are installing (0.21.0 here).

Check the "Network" value in its net-conf.json; it must be the same as podSubnet in kubeadm-config.yaml.

Then, install:
kubectl create -f ./kube-flannel.yml

# wait until all pods are in the Running state
kubectl get pod -A


Adding additional control-plane node

Complete all of the preparation steps on the new node first.

From the original control-plane node, temporarily upload the certificates to a Kubernetes Secret so that the new control-plane node can fetch them: kubeadm init phase upload-certs --upload-certs. This prints a certificate key; take note of it.

Create a token and join command with a TTL of 1h: kubeadm token create --ttl 1h --print-join-command

On the new node, execute the join command printed by the token create step, adding --certificate-key with the key noted above, plus --control-plane and --v=5 (--v controls verbosity).

Keep watch until the new node has been added: watch 'kubectl get node -o wide; kubectl get pod -A'
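
Putting the three commands together (the token, hash, and key below are placeholders printed by the previous commands):

```
# on the existing control-plane node
kubeadm init phase upload-certs --upload-certs    # prints <certificate-key>
kubeadm token create --ttl 1h --print-join-command

# on the new node, using the printed join command plus the extra flags
kubeadm join k8.example.com:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --certificate-key <certificate-key> \
    --control-plane --v=5
```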


Untaint control-plane nodes

In order to run normal pods on the control-plane nodes, the NoSchedule taint must be removed:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-



MetalLB

pfsense

Install the package "frr".

Once installed, go to `Services` -> `FRR Global/Zebra`:
  • Check `Enable FRR`
  • Set `Master Password`
Then, Save.

Click `Route Maps` from the Tab, and Add:
  • `Name`: Allow-All
  • `Action`: Permit
  • `Sequence`: 100
Setting up BGP, click `[BGP]` from the Tab:
  • Check `Enable BGP Routing`
  • Set `Local AS`: pick from the private range 64512 to 65534. This should be set as `BGPPeer`'s `peerASN`
  • Set `Router ID` to the router's gateway IP, main VLAN
Setting up Neighbors, click `Neighbors` from the Tab

If you have multiple peers, create a Group first; otherwise, skip this. Click Add:
  • `Name`: give it a group name
  • `Peer Group`: None
  • `Remote AS`: the cluster's ASN; pick from the private range, different from `Local AS`. Must be the same as `BGPPeer`'s `myASN`
  • `Route Map Filters`: `Allow-All` for both Inbound and Outbound
For each peer, click Add:
  • `Name`: the IP address of the peer (cluster node)
  • `Peer Group`: the group created previously, or None
  • `Remote AS`: the cluster's ASN, same as above. Must be the same as `BGPPeer`'s `myASN`
  • `Route Map Filters`: `Allow-All` for both Inbound and Outbound

Install MetalLB

`kubectl create -f ./metallb-native.yaml` (Download from https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml)

`memberlist` requires a secret; otherwise, the speaker pods will get stuck in the `CreateContainerConfigError` state:
`kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"`

Specify the address pool to use: `kubectl apply -f metallb-ipaddresspool.yaml`
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.10-10.0.0.250

Advertise that pool: `kubectl apply -f metallb-bgpads.yaml`
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: metallb-bgpads
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb-pool

And, finally, specify the BGP peer (the router): `kubectl apply -f metallb-bgppeer.yaml`
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: metallb-bgppeer
  namespace: metallb-system
spec:
  myASN: 64000 # the cluster's ASN; must match pfsense's `Remote AS`
  peerASN: 64500 # must match pfsense's `Local AS`
  peerAddress: 192.168.2.1 # the router's IP
  nodeSelectors: []


Admin User

Follow this GitHub gist to create a separate admin user for the cluster: https://gist.github.com/rraboy/4444da44fdbbfdb66510106dda20e989#file-k8-new-cert-sh
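
For reference, a minimal sketch of the client-certificate flow such a script automates (the user name and group are illustrative; `system:masters` grants full cluster-admin):

```shell
# Generate a key and a CSR for user "admin" in group "system:masters"
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr
# Next steps (against a live cluster): wrap admin.csr in a
# CertificateSigningRequest, `kubectl certificate approve` it, and build a
# kubeconfig from the signed certificate.
```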


Reset Everything

For each node, starting from the last joined node, do:
kubeadm reset
rm -rf /etc/cni/net.d
rm -rf /etc/kubernetes/
iptables --flush
reboot


Errors

  • "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/kubelet.service\""

        edit `sudo vim /etc/sysconfig/kubelet`

        add `KUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice`, then restart:

        `sudo systemctl restart kubelet`

  • "rpc error: code = Unknown desc = failed to create pod network sandbox: failed to delegate add: failed to set bridge addr: cni0 already has an IP address different from 10.X.X.X"

        sudo su
        ip link set cni0 down && ip link set flannel.1 down
        ip link delete cni0 && ip link delete flannel.1
        systemctl restart crio && systemctl restart kubelet

        (the linked answer uses `systemctl restart docker`; this cluster runs CRI-O)

        https://stackoverflow.com/a/71981013
