Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
At least 2 CPUs and 2 GB of memory are recommended, but this is not a hard requirement: a cluster can also be brought up with 1 CPU and 1 GB. However, with a single CPU, initializing the master will report [WARNING NumCPU]: the number of available CPUs 1 is less than the required 2, and deploying add-ons or pods may produce the warning FailedScheduling: Insufficient cpu, Insufficient memory. If you see these messages, your VM has only one CPU core assigned to it, and you need to increase the CPU count of the master node's VM.
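If you are unsure how much CPU and memory the kubelet actually reports for each node, you can query the API server rather than guessing from the hypervisor settings. The following is a minimal sketch using client-go (not part of the original text); it assumes the admin kubeconfig sits at the default ~/.kube/config path and simply prints each node's allocatable CPU and memory.

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        // Assumption: the admin kubeconfig is at the default location.
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Allocatable is what the scheduler can actually hand out; if cpu shows
        // "1" here, the FailedScheduling warnings described above are expected.
        for _, n := range nodes.Items {
            fmt.Printf("%s\tcpu=%s\tmemory=%s\n",
                n.Name, n.Status.Allocatable.Cpu().String(), n.Status.Allocatable.Memory().String())
        }
    }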
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
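The same check can be done programmatically. The sketch below (an illustration, not from the original text) mirrors "kubectl get nodes" with client-go: it lists every node and reports whether its Ready condition is True, which is also the signal the eviction discussion below hinges on. It assumes the default ~/.kube/config on the control-plane.

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    // isReady returns true when the node's Ready condition is True.
    func isReady(node corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            status := "NotReady"
            if isReady(n) {
                status = "Ready"
            }
            fmt.Printf("%s\t%s\n", n.Name, status)
        }
    }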
If the cluster contains 1 master and 1 worker, and the nodeName of your master node ends with "master" (e.g. XXXmaster), the pod will not be evicted from the worker node.
This is because the controller-manager does not treat a node whose nodeName ends with "master" as a worker node. See kubernetes/pkg/controller/nodelifecycle/node_lifecycle_controller.go, lines 1004 to 1016 at commit 2adc8d7:
    func legacyIsMasterNode(nodeName string) bool {
        // We are trying to capture "master(-...)?$" regexp.
        // However, using regexp.MatchString() results even in more than 35%
        // of all space allocations in ControllerManager spent in this function.
        // That's why we are trying to be a bit smarter.
        if strings.HasSuffix(nodeName, "master") {
            return true
        }
        if len(nodeName) >= 10 {
            return strings.HasSuffix(nodeName[:len(nodeName)-3], "master-")
        }
        return false
    }
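To see concretely which names trigger this behaviour, the function can be copied into a small standalone program and run against a few candidate node names. The names below are hypothetical, picked only for illustration.

    package main

    import (
        "fmt"
        "strings"
    )

    // Local copy of the controller-manager helper shown above, so its
    // behaviour can be checked against your own node names.
    func legacyIsMasterNode(nodeName string) bool {
        if strings.HasSuffix(nodeName, "master") {
            return true
        }
        if len(nodeName) >= 10 {
            return strings.HasSuffix(nodeName[:len(nodeName)-3], "master-")
        }
        return false
    }

    func main() {
        // Hypothetical node names, chosen only to illustrate which suffixes match:
        // the first three are treated as master nodes, the last one is not.
        for _, name := range []string{"k8smaster", "node1-master", "master-abc", "k8s-worker1"} {
            fmt.Printf("%-12s -> %v\n", name, legacyIsMasterNode(name))
        }
    }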
If all worker nodes are NotReady, the controller-manager will enter master disruption mode. Disruption mode: #42733 (comment)
Around August last year we introduced a protection against master machine network isolation, which prevents any evictions from happening if the master can't see any healthy Node. It then assumes that it's not a problem with the Nodes, but with itself, and just doesn't do anything.

I stopped the kubelet on the worker node and the controller-manager log shows:

    I0126 15:19:20.901414       1 node_lifecycle_controller.go:1230] Controller detected that all Nodes are not-Ready. Entering master disruption mode.

If you want your pod to be evicted from the worker node in a 1 master, 1 worker cluster, you can either:

Set the LegacyNodeRoleBehavior feature gate on the controller-manager to false, for example by editing the controller-manager manifest, which is located at /etc/kubernetes/manifests/kube-controller-manager.yaml on your master node (see the verification sketch below):

    ...
    spec:
      containers:
      - command:
        - kube-controller-manager
        - --allocate-node-cidrs=true
        - --feature-gates=LegacyNodeRoleBehavior=false   <-- add this line
        - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    ...

Or rename the nodeName of your master node so that it does not end with "master".
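After the static pod manifest is edited, the kubelet recreates the kube-controller-manager pod, and you can confirm that the flag was picked up by inspecting the pod's command line. The sketch below does this with client-go; it assumes the default ~/.kube/config and the usual kubeadm label component=kube-controller-manager on the static pod, and is only meant as a quick check, not part of the original instructions.

    package main

    import (
        "context"
        "fmt"
        "path/filepath"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Static pods created from /etc/kubernetes/manifests appear in kube-system;
        // kubeadm labels the controller-manager pod with component=kube-controller-manager.
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "component=kube-controller-manager"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, arg := range p.Spec.Containers[0].Command {
                if strings.HasPrefix(arg, "--feature-gates=") {
                    fmt.Printf("%s: %s\n", p.Name, arg)
                }
            }
        }
    }

If nothing is printed, the running pod has no --feature-gates flag at all, which means the manifest edit has not taken effect yet.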