Kubernetes-Demo

1 Environment

VirtualBox + CentOS-7-x86_64-Minimal-2009.iso

Configuration (identical for master and worker):

  1. System: 2 CPUs, 4 GB RAM
  2. Network: NAT network + Host-Only adapter

2 Setup Steps

2.1 Step 1: Preparation

This step must be run on every machine, master and worker nodes alike.

# Install docker
# --step 1: install some required system utilities
yum install -y yum-utils device-mapper-persistent-data lvm2
# --step 2: add the repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# --step 3: point the repository at the Aliyun mirror
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# --step 4: refresh the cache and install docker-ce; list installable versions
#           with: yum list docker-ce --showduplicates | sort -r
yum makecache fast
yum -y install docker-ce-20.10.5-3.el7
# --step 5: enable and start the docker service
systemctl enable docker
systemctl start docker

# Configure kernel parameters
# (the net.bridge.* keys require the br_netfilter module, so load it first)
modprobe br_netfilter
cat >> /etc/sysctl.conf << 'EOF'
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.ipv6.conf.all.forwarding=1
fs.inotify.max_user_watches=524288
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.conf

# Disable swap
sed -ri '/ swap /s|^([^#].*)$|#\1|' /etc/fstab

# Stop and disable the firewall
systemctl disable firewalld
systemctl stop firewalld

# Disable selinux
sed -r -i '/SELINUX=/s/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Set the hostname; it must be unique on every machine
hostnamectl set-hostname <xxx>

# Reboot
reboot
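
A few optional sanity checks after the reboot (these only confirm what the commands above configured):

getenforce                   # expect: Disabled
free -m | grep -i swap       # swap totals should all be 0
systemctl is-active docker   # expect: active
sysctl net.ipv4.ip_forward   # expect: net.ipv4.ip_forward = 1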

2.2 Step 2: Install kubeadm, kubectl, and related tools

This step must be run on every machine, master and worker nodes alike.

The installation steps follow the official guide, Installing kubeadm.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet
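
At this point kubelet restarts in a crash loop every few seconds; that is expected, since it is waiting for kubeadm init or kubeadm join to tell it what to do. A quick optional version check:

kubeadm version -o short
kubectl version --client --short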

2.3 Step 3: Initialize the master

kubeadm init --pod-network-cidr=10.169.0.0/16 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
#-------------------------↓↓↓↓↓↓-------------------------
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.25:6443 --token yokoe3.huvu1vzcllt96zt7 \
--discovery-token-ca-cert-hash sha256:fd27e0a6257017ad8c2b25386d02db27ebaaca1916a97d2e8811ce4945bf430b
#-------------------------↑↑↑↑↑↑-------------------------
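
A note for this VirtualBox setup: with two adapters (NAT + Host-Only), kubeadm advertises the interface that owns the default route. If it picks the wrong one, the address can be pinned explicitly; this variant is a sketch assuming 10.0.2.25 is the master's address, as in the output above:

kubeadm init --apiserver-advertise-address=10.0.2.25 \
    --pod-network-cidr=10.169.0.0/16 \
    --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers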

Next, install a CNI plugin; either of the following will do:

  • flannel: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • calico: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Here, I chose flannel.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#-------------------------↓↓↓↓↓↓-------------------------
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
#-------------------------↑↑↑↑↑↑-------------------------
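
One caveat worth checking: the stock kube-flannel.yml hardcodes "Network": "10.244.0.0/16" in its net-conf ConfigMap, while this cluster was initialized with --pod-network-cidr=10.169.0.0/16. If the two disagree (a plausible cause of the SNAT problem below), align the manifest with the cluster CIDR before applying it:

# Download the manifest, rewrite flannel's Network to the cluster pod CIDR, then apply
curl -sLO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i 's#10.244.0.0/16#10.169.0.0/16#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml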

Check the node status:

kubectl get nodes
#-------------------------↓↓↓↓↓↓-------------------------
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   6m35s   v1.20.5
#-------------------------↑↑↑↑↑↑-------------------------

For reasons I have not pinned down, Kubernetes did not set up an SNAT rule for the container network, so containers could not reach the outside world; I added the rule by hand. Here 10.169.0.0/16 is the demo cluster's pod CIDR.

iptables -t nat -A POSTROUTING -s 10.169.0.0/16 -j MASQUERADE
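
Note that this rule does not survive a reboot. A simple way to persist it (assuming nothing else manages iptables on these hosts, since firewalld is disabled) is to re-add it at boot:

# Re-add the SNAT rule at boot; on CentOS 7, rc.local must be executable to run
echo 'iptables -t nat -A POSTROUTING -s 10.169.0.0/16 -j MASQUERADE' >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local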

2.3.1 Print the join command for new nodes

kubeadm token create --print-join-command

2.3.2 Regenerate the token needed to join the cluster

# Run the following commands on the master node

# 1 create a token
kubeadm token create

# 2 compute the CA certificate hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Run the join command on the worker node

kubeadm join <master ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

2.4 Step 4: Initialize the workers

Join each worker with the command printed in Step 3. Forgot the command? Run kubeadm token create --print-join-command on the master to print it again.

kubeadm join 10.0.2.25:6443 --token yokoe3.huvu1vzcllt96zt7 \
--discovery-token-ca-cert-hash sha256:fd27e0a6257017ad8c2b25386d02db27ebaaca1916a97d2e8811ce4945bf430b

Check the node status:

kubectl get nodes
#-------------------------↓↓↓↓↓↓-------------------------
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   24m     v1.20.5
k8s-node-1   Ready    <none>                 8m24s   v1.20.5
#-------------------------↑↑↑↑↑↑-------------------------

2.5 Step 5: Deploy a demo application

For simplicity, we use nginx as the demo application and deploy it to the cluster we just created.

First, write the Deployment manifest for the application. The nginx-deployment.yml file looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      mylabel: label_nginx
  template:
    metadata:
      labels:
        mylabel: label_nginx
    spec:
      containers:
      - name: "nginx"
        image: "nginx:1.19"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Next, deploy the application:

kubectl apply -f nginx-deployment.yml

Check the deployment and pod status:

kubectl get deployment
#-------------------------↓↓↓↓↓↓-------------------------
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           32m
#-------------------------↑↑↑↑↑↑-------------------------

kubectl get pods
#-------------------------↓↓↓↓↓↓-------------------------
NAME                     READY   STATUS    RESTARTS   AGE
nginx-59578f7988-chlch   1/1     Running   0          31m
#-------------------------↑↑↑↑↑↑-------------------------

The nginx application listens on port 80, but nginx-deployment.yml only declares the container port, so the service still cannot be reached from outside the cluster. We also need an nginx Service; the nginx-service.yml file looks like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 4000
    targetPort: 80
    nodePort: 30001
  selector:
    mylabel: label_nginx
  type: NodePort

Create the service (port is the Service's cluster port, targetPort the container port, and nodePort the port exposed on every node):

kubectl create -f nginx-service.yml

Inspect the service:

kubectl describe svc nginx
#-------------------------↓↓↓↓↓↓-------------------------
Name: nginx
Namespace: default
Labels: <none>
Annotations: <none>
Selector: mylabel=label_nginx
Type: NodePort
IP Families: <none>
IP: 10.107.164.249
IPs: 10.107.164.249
Port: <unset> 4000/TCP
TargetPort: 80/TCP
NodePort: <unset> 30001/TCP
Endpoints: 10.169.1.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
#-------------------------↑↑↑↑↑↑-------------------------

Visit http://<node_ip>:30001. Success!
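
A quick check from the command line (the grep pattern assumes the default nginx welcome page):

curl -s http://<node_ip>:30001 | grep '<title>'
# <title>Welcome to nginx!</title>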

3 Helm

Steps:

  1. Download the binary package (link: helm release)
  2. Extract it: tar -zxvf <file>
  3. mv linux-amd64/helm /usr/local/bin/helm
  4. helm init (Helm 2 only; Helm 3 removed this command and needs no init step) — then verify with the commands below
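
To verify the installation and add a chart repository (the URL below is the relocated official stable chart repo):

helm version
helm repo add stable https://charts.helm.sh/stable
helm repo update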

4 Tips

4.1 kubectl

# List all resource types
kubectl api-resources
kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=false

# Node summary
kubectl get nodes
kubectl get node <node-name>

# Node details
kubectl describe nodes
kubectl describe node <node-name>

# List namespaces
kubectl get namespace

# Pod summary
kubectl get pod -n <namespace>
kubectl get pod -n <namespace> <pod-name>
kubectl get pod -n <namespace> <pod-name> -o wide
kubectl get pod --all-namespaces

# Pod details
kubectl describe pod -n <namespace>
kubectl describe pod -n <namespace> <pod-name>

# Delete a pod
kubectl delete pod <pod-name> -n <namespace>
kubectl delete pod <pod-name> --force --grace-period=0 -n <namespace>

# Service summary
kubectl get svc <service-name>

# Service details
kubectl describe svc <service-name>

# View an object
kubectl get -f <filename|url> -o yaml
kubectl get pod -n <namespace> <pod-name> -o yaml

# Add a label
kubectl label pod -n <namespace> <pod-name> <label_name>=<label_value>

4.2 How to change kube-proxy's mode

kube-proxy's configuration lives in a ConfigMap of the same name in the kube-system namespace; editing that ConfigMap switches the proxy mode (a sketch follows the list). The possible values are:

  1. iptables: the default when the field is left empty
  2. ipvs
  3. userspace
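
A minimal sketch, assuming a kubeadm-built cluster where kube-proxy runs as a DaemonSet labeled k8s-app=kube-proxy:

# 1. Edit the ConfigMap and set the mode field, e.g.  mode: "ipvs"
kubectl -n kube-system edit configmap kube-proxy
# 2. Recreate the kube-proxy pods so they pick up the new config
kubectl -n kube-system delete pod -l k8s-app=kube-proxy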

4.3 How to renew certificates

Kubernetes keeps its certificates under /etc/kubernetes/pki.

First, back up the existing certificates:

cd /etc/kubernetes/pki
mkdir backup
cp -vrf *.crt *.key backup/

Renew the certificates:

kubeadm certs renew all

#-------------------------↓↓↓↓↓↓-------------------------
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
#-------------------------↑↑↑↑↑↑-------------------------

Check the certificates' expiry dates:

kubeadm certs check-expiration

#-------------------------↓↓↓↓↓↓-------------------------
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Apr 06, 2022 07:20 UTC   364d                                    no
apiserver                  Apr 06, 2022 07:20 UTC   364d            ca                      no
apiserver-etcd-client      Apr 06, 2022 07:20 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Apr 06, 2022 07:20 UTC   364d            ca                      no
controller-manager.conf    Apr 06, 2022 07:20 UTC   364d                                    no
etcd-healthcheck-client    Apr 06, 2022 07:20 UTC   364d            etcd-ca                 no
etcd-peer                  Apr 06, 2022 07:20 UTC   364d            etcd-ca                 no
etcd-server                Apr 06, 2022 07:20 UTC   364d            etcd-ca                 no
front-proxy-client         Apr 06, 2022 07:21 UTC   364d            front-proxy-ca          no
scheduler.conf             Apr 06, 2022 07:21 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 21, 2031 03:26 UTC   9y              no
etcd-ca                 Mar 21, 2031 03:26 UTC   9y              no
front-proxy-ca          Mar 21, 2031 03:26 UTC   9y              no
#-------------------------↑↑↑↑↑↑-------------------------

4.4 Add extra SANs to the api-server certificate

First, back up the existing certificates:

cd /etc/kubernetes/pki
mkdir backup
cp -vrf apiserver.* backup/

Sign the additional addresses:

# Remove the api-server's existing certificate
rm /etc/kubernetes/pki/apiserver.*

# Regenerate the api-server certificate
# 1. --apiserver-advertise-address takes the api-server's existing address
# 2. --apiserver-cert-extra-sans takes the address(es) to add, comma-separated
kubeadm init phase certs apiserver --apiserver-advertise-address ${existing-apiserver-address} --apiserver-cert-extra-sans ${new-addresses}

# Refresh the admin kubeconfig certificate
kubeadm certs renew admin.conf

The change takes effect after the api-server is restarted.
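
Since the api-server runs as a static pod on a kubeadm cluster, one common way to restart it (a sketch, assuming the default manifest path) is to move its manifest out of the watched directory and back:

# kubelet stops the pod when the manifest disappears and recreates it when it returns
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/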

5 Common Issues

5.1 Incompatible Kubernetes dependencies

The versions of the three modules k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go must be kept in step with one another; otherwise you may run into compatibility problems.
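
For example, with a cluster on Kubernetes 1.20.5 the three modules can be pinned to the matching v0.20.x line (a sketch; adjust the patch version to your cluster):

go get k8s.io/api@v0.20.5 k8s.io/apimachinery@v0.20.5 k8s.io/client-go@v0.20.5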
