Set up a Single-Node Kubernetes Cluster using kubeadm

Set up a simple Kubernetes cluster for test purposes.

Add Kubernetes APT Repo

These steps assume you are still running Ubuntu 16.04 LTS (xenial).
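
One common way to add the repository at the time looked roughly like the commands below (this is what the sudo xxx placeholder further down stands in for; the key URL and the kubernetes-xenial suite are taken from the upstream install docs of that era, so verify them against the current guide before use):

# install prerequisites, then add the package signing key and the xenial package source
sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list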

sudo xxx
sudo apt-get update
sudo apt-get install kubeadm kubelet kubectl
sudo docker version
Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.6.2
 Git commit:   092cba3
 Built:        Thu Nov  2 20:40:23 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.6.2
 Git commit:   092cba3
 Built:        Thu Nov  2 20:40:23 2017
 OS/Arch:      linux/amd64
 Experimental: false
sudo kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:14:41Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

Run Kubeadm

These first commands need to be run with sudo:

sudo kubeadm config images pull
sudo kubeadm init
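
Depending on the pod network you install, kubeadm init may need extra flags; the Calico kubeadm guide of that era generally suggested passing a pod CIDR matching the manifest's default pool, for example:

# only needed if your network plugin expects a cluster-wide pod CIDR
sudo kubeadm init --pod-network-cidr=192.168.0.0/16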

Let's copy the admin kubeconfig into the current user's .kube directory:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get all --all-namespaces

From that point on, you don't really need sudo anymore.

Add Calico

At this point the node will stay in the "NotReady" state until a pod network is installed:

kubectl get nodes

NAME       STATUS     ROLES     AGE       VERSION
allinone   NotReady   master    1m        v1.11.0

To install Calico

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

configmap/calico-config created
daemonset.extensions/calico-etcd created
service/calico-etcd created
daemonset.extensions/calico-node created
deployment.extensions/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
serviceaccount/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check again

kubectl get nodes

NAME       STATUS    ROLES     AGE       VERSION
allinone   Ready     master    2m        v1.11.0

Allow workloads on the master node

kubectl get all --all-namespaces

NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE
kube-system   pod/calico-etcd-d6gn7                          1/1       Running   0          1m
kube-system   pod/calico-kube-controllers-84fd4db7cd-ctctz   1/1       Running   0          1m
kube-system   pod/calico-node-bg4w8                          2/2       Running   0          1m
kube-system   pod/coredns-78fcdf6894-7d5mf                   1/1       Running   0          2m
kube-system   pod/coredns-78fcdf6894-xv9lm                   1/1       Running   0          2m
kube-system   pod/etcd-allinone                              1/1       Running   0          1m
kube-system   pod/kube-apiserver-allinone                    1/1       Running   0          1m
kube-system   pod/kube-controller-manager-allinone           1/1       Running   0          1m
kube-system   pod/kube-proxy-zz9t9                           1/1       Running   0          2m
kube-system   pod/kube-scheduler-allinone                    1/1       Running   0          1m

NAMESPACE     NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP         2m
kube-system   service/calico-etcd   ClusterIP   10.96.232.136   <none>        6666/TCP        1m
kube-system   service/kube-dns      ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   2m

NAMESPACE     NAME                         DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-system   daemonset.apps/calico-etcd   1         1         1         1            1           node-role.kubernetes.io/master=   1m
kube-system   daemonset.apps/calico-node   1         1         1         1            1           <none>                            1m
kube-system   daemonset.apps/kube-proxy    1         1         1         1            1           beta.kubernetes.io/arch=amd64     2m

NAMESPACE     NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1         1         1            1           1m
kube-system   deployment.apps/coredns                   2         2         2            2           2m

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY     AGE
kube-system   replicaset.apps/calico-kube-controllers-84fd4db7cd   1         1         1         1m
kube-system   replicaset.apps/coredns-78fcdf6894                   2         2         2         2m

By default the master node carries a NoSchedule taint, so regular workloads will not be scheduled on it. To remove the taint:

kubectl taint node allinone node-role.kubernetes.io/master:NoSchedule-
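
To confirm it worked, the node description should no longer list the NoSchedule taint:

kubectl describe node allinone | grep Taints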

Install Helm

Create the service account

kubectl create serviceaccount tiller --namespace kube-system
serviceaccount/tiller created
kubectl replace -f rbac-config.yaml
serviceaccount/tiller replaced
clusterrolebinding.rbac.authorization.k8s.io/tiller replaced
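
The rbac-config.yaml referenced above typically contains the Tiller service account plus a cluster-admin binding, along the lines of the Helm v2 RBAC docs (the exact contents here are assumed; cluster-admin is fine for a throwaway test cluster but too broad for anything else):

# rbac-config.yaml: service account for Tiller plus a cluster-admin binding
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system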

Init Tiller

helm init --service-account tiller

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster
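
Tiller takes a few seconds to come up; one way to wait for the deployment to become ready:

kubectl -n kube-system rollout status deployment/tiller-deploy
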
kubectl get all -n kube-system

NAME                                           READY     STATUS    RESTARTS   AGE
pod/calico-etcd-d6gn7                          1/1       Running   0          4m
pod/calico-kube-controllers-84fd4db7cd-ctctz   1/1       Running   0          4m
pod/calico-node-bg4w8                          2/2       Running   0          4m
pod/coredns-78fcdf6894-7d5mf                   1/1       Running   0          5m
pod/coredns-78fcdf6894-xv9lm                   1/1       Running   0          5m
pod/etcd-allinone                              1/1       Running   0          5m
pod/kube-apiserver-allinone                    1/1       Running   0          4m
pod/kube-controller-manager-allinone           1/1       Running   0          5m
pod/kube-proxy-zz9t9                           1/1       Running   0          5m
pod/kube-scheduler-allinone                    1/1       Running   0          4m
pod/tiller-deploy-759cb9df9-wrznl              1/1       Running   0          46s

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/calico-etcd     ClusterIP   10.96.232.136   <none>        6666/TCP        4m
service/kube-dns        ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   6m
service/tiller-deploy   ClusterIP   10.103.151.9    <none>        44134/TCP       46s

NAME                         DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
daemonset.apps/calico-etcd   1         1         1         1            1           node-role.kubernetes.io/master=   4m
daemonset.apps/calico-node   1         1         1         1            1           <none>                            4m
daemonset.apps/kube-proxy    1         1         1         1            1           beta.kubernetes.io/arch=amd64     6m

NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers   1         1         1            1           4m
deployment.apps/coredns                   2         2         2            2           6m
deployment.apps/tiller-deploy             1         1         1            1           46s

NAME                                                 DESIRED   CURRENT   READY     AGE
replicaset.apps/calico-kube-controllers-84fd4db7cd   1         1         1         4m
replicaset.apps/coredns-78fcdf6894                   2         2         2         5m
replicaset.apps/tiller-deploy-759cb9df9              1         1         1         46s

Add the stable chart repo

helm repo add stable https://kubernetes-charts.storage.googleapis.com

"stable" has been added to your repositories

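If the chart index is stale, refreshing it before installing does not hurt (standard Helm 2 command):

helm repo update
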
Install and remove nginx-ingress

helm install stable/nginx-ingress

NAME:   messy-snake
LAST DEPLOYED: Sat Jun 30 18:05:58 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/ClusterRoleBinding
NAME                       AGE
messy-snake-nginx-ingress  1s

==> v1beta1/Role
NAME                       AGE
messy-snake-nginx-ingress  1s

==> v1/ServiceAccount
NAME                       SECRETS  AGE
messy-snake-nginx-ingress  1        1s

==> v1beta1/ClusterRole
NAME                       AGE
messy-snake-nginx-ingress  1s

==> v1/Service
NAME                                       TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
messy-snake-nginx-ingress-controller       LoadBalancer  10.98.107.113  <pending>    80:31098/TCP,443:32158/TCP  1s
messy-snake-nginx-ingress-default-backend  ClusterIP     10.105.239.60  <none>       80/TCP                      1s

==> v1beta1/Deployment
NAME                                       DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
messy-snake-nginx-ingress-controller       1        1        1           0          1s
messy-snake-nginx-ingress-default-backend  1        1        1           0          1s

==> v1beta1/PodDisruptionBudget
NAME                                       MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
messy-snake-nginx-ingress-controller       1              N/A              0                    1s
messy-snake-nginx-ingress-default-backend  1              N/A              0                    1s

==> v1/Pod(related)
NAME                                                        READY  STATUS             RESTARTS  AGE
messy-snake-nginx-ingress-controller-5db5f96774-ql6nf       0/1    ContainerCreating  0         1s
messy-snake-nginx-ingress-default-backend-7f877996b6-zzp82  0/1    ContainerCreating  0         1s

==> v1/ConfigMap
NAME                                  DATA  AGE
messy-snake-nginx-ingress-controller  1     1s

==> v1beta1/RoleBinding
NAME                       AGE
messy-snake-nginx-ingress  1s


NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w messy-snake-nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
helm ls
NAME            REVISION        UPDATED                         STATUS          CHART                   NAMESPACE
messy-snake     1               Sat Jun 30 18:05:58 2018        DEPLOYED        nginx-ingress-0.22.0    default
kubectl get all

NAME                                                             READY     STATUS    RESTARTS   AGE
pod/messy-snake-nginx-ingress-controller-5db5f96774-ql6nf        1/1       Running   0          1m
pod/messy-snake-nginx-ingress-default-backend-7f877996b6-zzp82   1/1       Running   0          1m

NAME                                                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes                                  ClusterIP      10.96.0.1       <none>        443/TCP                      9m
service/messy-snake-nginx-ingress-controller        LoadBalancer   10.98.107.113   <pending>     80:31098/TCP,443:32158/TCP   1m
service/messy-snake-nginx-ingress-default-backend   ClusterIP      10.105.239.60   <none>        80/TCP                       1m

NAME                                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/messy-snake-nginx-ingress-controller        1         1         1            1           1m
deployment.apps/messy-snake-nginx-ingress-default-backend   1         1         1            1           1m

NAME                                                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/messy-snake-nginx-ingress-controller-5db5f96774        1         1         1         1m
replicaset.apps/messy-snake-nginx-ingress-default-backend-7f877996b6   1         1         1         1m

Remove the test release

helm delete messy-snake
release "messy-snake" deleted
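
In Helm 2 a deleted release keeps its record around unless it is purged; to drop it completely and free the name for reuse, something like this should work:

helm delete --purge messy-snake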

Use kubeadm to remove everything

sudo kubeadm reset
sudo docker rm $(sudo docker ps -qa)
sudo docker image rm -f $(sudo docker image list -qa)
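
kubeadm reset does not clean up everything; leftover CNI configuration and the kubeconfig copied earlier can be removed by hand (the paths below are the usual defaults, adjust if yours differ):

sudo rm -rf /etc/cni/net.d
rm -rf $HOME/.kube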

References

  • TBD