Preparing the nodes

The nodes run as VMs on an HP Gen8 server at home running ESXi 6.5.

Each node is configured as follows:

  • CPU: 2 vCPUs
  • RAM: 2 GB
  • OS: Ubuntu 16.04

    • Swap disabled (comment out the swap entry in /etc/fstab; see the sketch below)
    • Unique MAC address (check with ip link)
    • Unique product_uuid (check /sys/class/dmi/id/product_uuid)
    • Docker as the container runtime
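
A minimal sketch of the per-node preparation, assuming a stock Ubuntu 16.04 VM (the sed pattern is only illustrative; check /etc/fstab by hand if in doubt):

# Disable swap now and keep it off across reboots (kubelet refuses to run with swap enabled)
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Verify MAC address and product_uuid are unique on every (cloned) VM
ip link show
sudo cat /sys/class/dmi/id/product_uuid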

Common configuration

Install kubectl, kubeadm, and kubelet

Follow the official documentation.

kubeadm is run as root in the steps below.
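
For reference, the install commands from the official guide at the time were roughly the following (repository URL and key location are the upstream defaults and may have changed since):

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl   # prevent unintended upgrades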

master

kubeadm init

root@k8m:~# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8m kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.150]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8m localhost] and IPs [10.0.0.150 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8m localhost] and IPs [10.0.0.150 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.503802 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8m as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8m as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: csmubd.2ohe2gxmohxqf8sq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.150:6443 --token csmubd.2ohe2gxmohxqf8sq \
    --discovery-token-ca-cert-hash sha256:xxxxxxx

In my case Docker had been installed earlier without the parameters recommended by Kubernetes, so the then-latest Kubernetes (2019-03-28) warned during init that Docker should use systemd as its cgroup driver.

I therefore went back to the Docker setup steps given in the Kubernetes documentation, modified Docker's configuration file, and restarted it. See that documentation for details.
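
The relevant change, per the Kubernetes container-runtime docs of that era, is roughly the following; note that this overwrites /etc/docker/daemon.json, so merge it with any existing settings on your machine:

# /etc/docker/daemon.json - switch Docker's cgroup driver to systemd
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker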

Configuring kubectl

Follow the official documentation so that the ubuntu user can run kubectl.
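
These are the same commands kubeadm prints at the end of init, run here as the ubuntu user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config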

Installing the pod network

Again, follow the relevant instructions on the official documentation page from the previous step.

flannel is used as the pod network here. This choice determines the pod-network argument passed to kubeadm init on the master (for flannel, --pod-network-cidr=10.244.0.0/16, as used above).
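
Applying flannel is a single command on the master; the manifest URL below is the one documented upstream at the time and may have moved since (flannel later moved to the flannel-io organization):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml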

slave

After kubeadm init finishes on the master, the last lines of its output give the command for joining the cluster; run it as root on each slave.
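
The join command has the shape shown at the end of the init output above (the exact token and hash come from your own init output), and the result can be checked from the master:

# on each slave, as root
kubeadm join 10.0.0.150:6443 --token csmubd.2ohe2gxmohxqf8sq \
    --discovery-token-ca-cert-hash sha256:xxxxxxx

# on the master: the new node should appear and become Ready once flannel is up
kubectl get nodes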

Further configuration

With the master set up and the slaves joined, run a quick kubectl test to confirm the cluster works. Once it does, install and configure a few extra components to make the cluster easier to manage.
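
A quick smoke test might look like this (the nginx deployment is just an example and is removed afterwards):

kubectl get nodes -o wide
kubectl get pods --all-namespaces        # coredns, kube-proxy and flannel pods should be Running
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx                    # note the NodePort, then curl <node-ip>:<node-port>
kubectl delete svc,deployment nginx      # clean up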

kubernetes-dashboard

Note: the dashboard listens on port 443 by default.

  1. Import your SSL certificate and install the dashboard.
  2. Change the dashboard Service type to NodePort and set a nodePort, exposing the port so it can be reached from outside the cluster.
  3. Add an RBAC user and generate a token; the example user in the docs can be renamed, as long as the names stay consistent (see the sketch after this list).
  4. Log in to the dashboard with the token from the previous step.
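
A sketch of steps 1-4, assuming the dashboard of that era (v1.x) deployed into kube-system; the certificate file names and the admin-user name are placeholders taken from the dashboard wiki example and can be changed:

# 1. create the TLS secret from your own certificate, then install the dashboard
kubectl -n kube-system create secret generic kubernetes-dashboard-certs \
    --from-file=dashboard.crt --from-file=dashboard.key
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# 2. change the Service type from ClusterIP to NodePort and set a nodePort under ports
kubectl -n kube-system edit service kubernetes-dashboard

# 3. create an admin ServiceAccount and bind it to cluster-admin
kubectl -n kube-system create serviceaccount admin-user
kubectl create clusterrolebinding admin-user \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:admin-user

# 4. print the token used to log in
kubectl -n kube-system describe secret \
    $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')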

The EFK stack

Deployment

  1. Download the six yaml files from the kubernetes repository.
  2. Comment out the SERVER_BASEPATH environment variable in kibana-deployment.yaml; otherwise Kibana cannot be reached through the NodePort.
  3. Apply them (see the sketch below).
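
The manifests in question live under cluster/addons/fluentd-elasticsearch in the kubernetes/kubernetes repository (path from memory; it may have changed). A rough sequence:

# fetch the addon manifests (es-service, es-statefulset, fluentd-es-configmap,
# fluentd-es-ds, kibana-deployment, kibana-service)
git clone --depth 1 https://github.com/kubernetes/kubernetes.git
cd kubernetes/cluster/addons/fluentd-elasticsearch

# comment out the SERVER_BASEPATH env var in kibana-deployment.yaml, then:
kubectl apply -f .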

Usage

So far I have found that working in Kibana frequently crashes the node it runs on…

Monitoring stack

Weave Scope is used here; it has been stable and works very well.
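
For reference, Weave Scope's documented one-line install at the time looked like this (URL, namespace, and pod selector are from the upstream docs and may have changed since):

kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# expose the UI locally, then open http://localhost:4040
kubectl port-forward -n weave \
    "$(kubectl get -n weave pod --selector=weave-scope-component=app -o jsonpath='{.items..metadata.name}')" 4040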