Build a Kubernetes cluster with the kubeadm command!
Enable name resolution for the control node and worker nodes!
Make the control node and the worker nodes resolvable by hostname, either via /etc/hosts or via DNS.
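If you go the /etc/hosts route, the file on every node might contain entries like the following. The addresses for vmsk8s81 and vmsk8s91 appear in the command output later in this article; the address for vmsk8s92 is an assumed example.

```
# /etc/hosts (same entries on every node)
10.1.97.81  vmsk8s81   # control node
10.1.97.91  vmsk8s91   # worker node
10.1.97.92  vmsk8s92   # worker node (address assumed for illustration)
```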
Initialize the Kubernetes environment!
$ sudo kubeadm init --pod-network-cidr=10.200.0.0/16
[sudo] password for usradmin:
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "vmsk8s81" could not be reached
        [WARNING Hostname]: hostname "vmsk8s81": lookup vmsk8s81 on 10.1.20.1:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vmsk8s81] and IPs [10.96.0.1 10.1.97.81]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vmsk8s81] and IPs [10.1.97.81 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vmsk8s81] and IPs [10.1.97.81 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.503284 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vmsk8s81 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vmsk8s81 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 2gtmrt.uliw8ddw1mot49xw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.1.97.81:6443 --token 2gtmrt.uliw8ddw1mot49xw \
        --discovery-token-ca-cert-hash sha256:8e7676f789f71f63e5d659da3d239cedc9e0a0523449a48d114bfbb8bc9fa3b7
As the output message instructs, set up the kubeconfig for the kubeadm environment.
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Register the worker nodes!
Register the worker nodes. This time, I registered from the machines "vmsk8s91" and "vmsk8s92".
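Note that the token used in the join command below differs from the one printed by kubeadm init. Bootstrap tokens expire (24 hours by default), so if the original token no longer works, a fresh join command can be generated on the control node; a minimal sketch:

```shell
# Print a new, ready-to-paste 'kubeadm join' command,
# including a freshly created bootstrap token and the CA cert hash.
sudo kubeadm token create --print-join-command
```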
$ sudo kubeadm join 10.1.97.81:6443 --token 64yr1e.zl3ykzqa1m9bss2z --discovery-token-ca-cert-hash sha256:8e7676f789f71f63e5d659da3d239cedc9e0a0523449a48d114bfbb8bc9fa3b7
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
On the control plane, confirm that the nodes have been added.
$ kubectl get node
NAME       STATUS     ROLES           AGE     VERSION
vmsk8s81   NotReady   control-plane   4m48s   v1.26.0
vmsk8s91   NotReady   <none>          2m42s   v1.26.0
vmsk8s92   NotReady   <none>          2m      v1.26.0
Install Calico!
Install Calico as the pod network add-on.
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml
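While waiting for the nodes to become Ready, you can watch the Calico pods start up. With the operator-based install above, the Calico components run in the calico-system namespace; a sketch:

```shell
# Watch the Calico pods until they all reach Running;
# nodes switch to Ready once the CNI plugin is up on each of them.
watch kubectl get pods -n calico-system
```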
Check the node status!
Confirm that all nodes are now in the Ready state.
$ kubectl get node
NAME       STATUS   ROLES           AGE   VERSION
vmsk8s81   Ready    control-plane   64m   v1.26.0
vmsk8s91   Ready    <none>          62m   v1.26.0
vmsk8s92   Ready    <none>          61m   v1.26.0
Run Nginx!
Let's try running Nginx.
Prepare a YAML file!
Create the following YAML file.
$ cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:latest
    name: nginx
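If you prefer to create the manifest straight from the shell, a here-document works too (same content as the file above):

```shell
# Write the Pod manifest to nginx.yaml via a here-document.
# The quoted 'EOF' prevents any shell expansion inside the manifest.
cat <<'EOF' > nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:latest
    name: nginx
EOF
```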
Apply it to the Kubernetes environment!
Apply the YAML file to the Kubernetes environment.
$ kubectl apply -f nginx.yaml
pod/nginx created
Confirm that the Nginx Pod is running.
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          60s
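If you only need the pod's IP address and the node it was scheduled on, the wide output format is a lighter alternative to the full describe output shown next:

```shell
# -o wide adds IP and NODE columns to the pod listing.
kubectl get pods -o wide
```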
Check the Pod's status!
Check the status of the Nginx Pod.
$ kubectl describe pod nginx
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             vmsk8s91/10.1.97.91
Start Time:       Mon, 09 Jan 2023 08:50:25 +0900
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.88.0.5
IPs:
  IP:  10.88.0.5
  IP:  2001:4860:4860::5
Containers:
  nginx:
    Container ID:   containerd://2a3e827647d101632026fb91758011691cb0bbdd002ba29bcb377aeb3a61840a
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 09 Jan 2023 08:50:28 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86rwh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-86rwh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2m2s  default-scheduler  Successfully assigned default/nginx to vmsk8s91
  Normal  Pulling    2m1s  kubelet            Pulling image "nginx:latest"
  Normal  Pulled     2m    kubelet            Successfully pulled image "nginx:latest" in 1.589679623s (1.58968523s including waiting)
  Normal  Created    2m    kubelet            Created container nginx
  Normal  Started    119s  kubelet            Started container nginx
You can see that Nginx is running on "vmsk8s91" with the IP address 10.88.0.5.
Access Nginx!
Try accessing it with the curl command on "vmsk8s91".
$ curl 10.88.0.5
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
We were able to reach the Nginx welcome page.
Delete the Nginx Pod!
Finally, delete the Nginx Pod from the control node.
$ kubectl delete pod nginx
pod "nginx" deleted
$ kubectl get pods
No resources found in default namespace.
If things don't work...
If things don't work, checking the service logs of containerd and the kubelet for error messages may help you track down the cause.
$ journalctl -xeu containerd
$ journalctl -xeu kubelet
It can also be useful to follow the latest logs with the following command while reproducing the problem.
$ journalctl -xef
Wrapping up
Using the kubeadm command, we built a Kubernetes environment with one control node and two worker nodes, and ran an Nginx Pod to confirm that it works.