Kubernetes 1.9.6 Deployment (Part 1)

1. Generate certificates and keys

I. Install CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

export PATH=/usr/local/bin:$PATH
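To confirm the three binaries are installed and reachable on the PATH, you can check their locations and print the version (the version string below corresponds to the R1.2 download and is shown for illustration):

# which cfssl cfssljson cfssl-certinfo
/usr/local/bin/cfssl
/usr/local/bin/cfssljson
/usr/local/bin/cfssl-certinfo
# cfssl version
Version: 1.2.0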

II. Create the CA

Note: Certificates only need to be created on the Master node; once created, distribute them to the remaining Node machines.

The full set of certificate and key files required is:

  • ca-key.pem
  • ca.pem
  • kubernetes-key.pem
  • kubernetes.pem
  • kube-proxy.pem
  • kube-proxy-key.pem
  • admin.pem
  • admin-key.pem

Each component uses the following certificates:

  • etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem
  • kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem
  • kubelet: uses ca.pem
  • kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem
  • kubectl: uses ca.pem, admin-key.pem, admin.pem
  • kube-controller-manager: uses ca-key.pem, ca.pem

1) Create the CA configuration file

All certificates below are created in the /root/ssl directory.

mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

The certificate expiry here is set to 87600h (10 years).

2) Create the CA certificate signing request

# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}

3) Generate the CA certificate and private key

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
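Before signing anything with the new CA, you can double-check its subject and validity period with the cfssl-certinfo tool installed earlier; the not_after field in the JSON output should be roughly ten years out, matching the 87600h expiry:

# cfssl-certinfo -cert ca.pem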

III. Create the kubernetes certificate

# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.0.3.6",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  • The hosts field lists the IPs and domain names authorized to use this certificate. The certificate is used by both the etcd cluster and the kubernetes master, so it includes the etcd and kubernetes-master host IPs as well as the kubernetes service IP (normally the first IP of the service-cluster-ip-range passed to kube-apiserver)

Generate the kubernetes certificate and private key:

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# ls kubernetes*
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
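It is worth confirming that every entry of the hosts field actually made it into the certificate as a SAN, since a missing IP here is the usual cause of TLS failures later. A quick check with openssl (output abridged):

# openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'
X509v3 Subject Alternative Name:
DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.default.svc.cluster.local, IP Address:127.0.0.1, IP Address:10.0.3.6, IP Address:10.254.0.1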

IV. Create the admin certificate

# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

Generate the admin certificate and private key:

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem

V. Create the kube-proxy certificate

# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  • The CN sets the certificate's User to system:kube-proxy
  • The predefined ClusterRoleBinding system:node-proxier in kube-apiserver binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs.

Generate the kube-proxy client certificate and private key:

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

# ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem

VI. Distribute the certificates

Distribute the certificate and key files (the .pem files) from the master node to the /etc/kubernetes/ssl directory on every machine.
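A minimal sketch of this step, assuming root SSH access and using the node 10.0.3.60 configured later in this guide (substitute your own node list):

# mkdir -p /etc/kubernetes/ssl
# cp /root/ssl/*.pem /etc/kubernetes/ssl/
# for node in 10.0.3.60; do ssh root@$node "mkdir -p /etc/kubernetes/ssl"; scp /root/ssl/*.pem root@$node:/etc/kubernetes/ssl/; done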

2. Create the kubeconfig files

The kubelet and kube-proxy processes on the Node machines must authenticate and be authorized when talking to the kube-apiserver process on the Master. The following operations are likewise performed only on the master; the generated *.kubeconfig files can then be copied straight into the /etc/kubernetes/ directory on the nodes.

I. Create the TLS bootstrapping token

The token is an arbitrary string containing 128 bits of entropy and can be generated with a secure random number generator.

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Note: Before proceeding, inspect the token.csv file and confirm that the ${BOOTSTRAP_TOKEN} variable was replaced with its actual value.
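For example (the token value shown here is only a sample; yours will differ):

# cat token.csv
41f7e4ba8b7be874fcff18bf5cf41a7c,kubelet-bootstrap,10001,"system:kubelet-bootstrap"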

BOOTSTRAP_TOKEN is written into both the token.csv file used by kube-apiserver and the bootstrap.kubeconfig file used by kubelet. If you later regenerate BOOTSTRAP_TOKEN, you must:

  1. Update token.csv and distribute it to the /etc/kubernetes/ directory on all machines
  2. Regenerate the bootstrap kubeconfig file and distribute it to the /etc/kubernetes directory on all node machines
  3. Restart the kube-apiserver and kubelet processes
  4. Re-approve the kubelet CSR requests

II. Create the kubelet bootstrapping kubeconfig file

# cd /etc/kubernetes
# export KUBE_APISERVER="https://10.0.3.6:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
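A quick way to confirm the file was assembled correctly is to view it back; the cluster entry should point at ${KUBE_APISERVER}, the embedded CA data is elided in the output, and the kubelet-bootstrap user carries the token:

# kubectl config view --kubeconfig=bootstrap.kubeconfig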

III. Create the kube-proxy kubeconfig file

export KUBE_APISERVER="https://10.0.3.6:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • The CN in the kube-proxy.pem certificate is system:kube-proxy; the predefined ClusterRoleBinding system:node-proxier in kube-apiserver binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs.

IV. Distribute the kubeconfig files

Distribute the kubeconfig files generated above to the /etc/kubernetes/ directory on all Node machines.
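For example, to the node used later in this guide:

# scp /etc/kubernetes/bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig root@10.0.3.60:/etc/kubernetes/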

3. Set up etcd

Only a single etcd instance is deployed here.

I. Download etcd

First, download the desired release tarball from https://github.com/coreos/etcd/releases.

# wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
# tar xvf etcd-v3.1.5-linux-amd64.tar.gz
# mv etcd-v3.1.5-linux-amd64/etcd* /usr/bin/

II. Create the etcd systemd unit file

# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd \
--name ${ETCD_NAME} \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster etcd1=https://10.0.3.6:2380 \
--initial-cluster-state new \
--data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • etcd's working directory is /var/lib/etcd; create it before starting the service (see the command below), otherwise startup fails with: "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory"
  • The hosts field of the kubernetes-csr.json used to create kubernetes.pem must include the IPs of all etcd nodes, otherwise certificate validation will fail
  • When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
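Per the first bullet above, create the data directory before starting the service:

# mkdir -p /var/lib/etcd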

Environment file /etc/etcd/etcd.conf (it must define every ETCD_* variable referenced by the unit file, including the two LISTEN URLs):

# [member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.3.6:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.3.6:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.3.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.3.6:2379"

III. Start the etcd service

# systemctl daemon-reload
# systemctl enable etcd
# systemctl start etcd
# systemctl status etcd

Verify with the following command:

[root@bigdata-06 ~]#  etcdctl \
> --ca-file=/etc/kubernetes/ssl/ca.pem \
> --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
> --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
> cluster-health
2018-04-28 19:36:25.987756 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-04-28 19:36:25.988777 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member 2b57cce6b67c6241 is healthy: got healthy result from https://10.0.3.6:2379
cluster is healthy
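As a further smoke test, you can write and read back a key over TLS (a sketch; the key name is arbitrary, and this etcdctl speaks the v2 API by default):

# etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem set /smoke-test ok
ok
# etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem get /smoke-test
ok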

4. Deploy the master node

# wget https://dl.k8s.io/v1.9.6/kubernetes-server-linux-amd64.tar.gz

After unpacking, copy the corresponding binaries into /usr/bin/.
The Master node needs: kube-apiserver, kube-scheduler, kube-controller-manager
The Node machines need: kubelet, kube-proxy

I. Configure kube-apiserver

[root@bigdata-06 ~]# vim /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The contents of /etc/kubernetes/config:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://127.0.0.1:8080"

This configuration file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.

The contents of /etc/kubernetes/apiserver:

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=sz-pg-oam-docker-test-001.tendcloud.com"
KUBE_API_ADDRESS="--advertise-address=10.0.3.6"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://10.0.3.6:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h --allow-privileged=true"
  • If you change --service-cluster-ip-range midway, you must delete the kubernetes service in the default namespace with kubectl delete service kubernetes; the system then recreates the service with the new IP. Otherwise the apiserver reports: the cluster IP x.x.x.x for service kubernetes/default is not within the service CIDR x.x.x.x/16; please recreate
  • kube-scheduler and kube-controller-manager are normally deployed on the same machine as kube-apiserver and talk to it over the insecure port;
  • kubelet, kube-proxy, and kubectl are deployed on the other Node machines; when they access kube-apiserver over the secure port, they must first pass TLS certificate authentication and then RBAC authorization;

Start kube-apiserver:

# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl start kube-apiserver
# systemctl status kube-apiserver
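Before moving on, a quick probe of the healthz endpoint on the local insecure port (8080, which the KUBE_MASTER setting above relies on) should return ok:

# curl http://127.0.0.1:8080/healthz
ok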

II. Configure kube-controller-manager

[root@bigdata-06 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/controller-manager:

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem "

Start kube-controller-manager:

# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl start kube-controller-manager
# systemctl status kube-controller-manager

III. Configure kube-scheduler

[root@bigdata-06 ~]# vim /usr/lib/systemd/system/kube-scheduler.service 

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/scheduler:

###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS=" --address=127.0.0.1"

Start kube-scheduler:

# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl start kube-scheduler
# systemctl status kube-scheduler

IV. Verify the master components

[root@bigdata-06 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

5. Deploy the node

Note: Docker must already be installed on the node.

Before configuring anything, note that for k8s v1.9.6 the system swap must be disabled, otherwise kubelet will fail to start. After commenting out the swap entry in /etc/fstab, reboot the machine for the change to take effect.

[root@bigdata-60 ~]# vim /etc/fstab

UUID=2d42d1f6-c995-4bee-ad04-12eb9b7e93e1 / ext4 defaults 1 1
UUID=6c7bb3a3-abbd-42fe-83e4-6f768de1f8dc /boot ext4 defaults 1 2
UUID=B966-A8F7 /boot/efi vfat umask=0077,shortname=winnt 0 0
UUID=443217c8-0b55-4b74-a7e0-25133da4136c /home ext4 defaults 1 2
#UUID=93f06c73-bd2e-45ca-9604-2d42601790a9 swap swap defaults 0 0

[root@bigdata-60 ~]# reboot
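Rebooting guarantees a clean state, but if an immediate reboot is not possible, swap can also be disabled at runtime; kubelet only checks that no swap is active:

# swapoff -a
# free -h | grep -i swap
Swap:            0B         0B         0B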

First, check that the node has the following files:

[root@bigdata-60 ~]# ls /etc/kubernetes/ssl/
admin-key.pem ca-key.pem kube-proxy-key.pem kubernetes-key.pem
admin.pem ca.pem kube-proxy.pem kubernetes.pem
[root@bigdata-60 ~]# ls /etc/kubernetes/
bootstrap.kubeconfig config kubelet kube-proxy.kubeconfig proxy ssl token.csv

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. You must first bind the kubelet-bootstrap user from the bootstrap token file to the system:node-bootstrapper cluster role, so that kubelet has permission to create certificate signing requests. Run the following on the master node:

[root@bigdata-06 ~]# cd /etc/kubernetes
[root@bigdata-06 ~]# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
  • --user=kubelet-bootstrap is the user name specified in the /etc/kubernetes/token.csv file, and it is also written into the /etc/kubernetes/bootstrap.kubeconfig file

I. Deploy kubelet

[root@bigdata-60 ~]# vim /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/config:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
#KUBE_MASTER="--master=http://10.0.3.6:8080"

Configuration file /etc/kubernetes/kubelet:

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=10.0.3.60"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=10.0.3.60"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
#KUBELET_API_SERVER="--api-servers=http://172.20.0.113:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.3.6:5000/library/kube-pause-amd64:3.0 "
#
## Add your own!
KUBELET_ARGS=" --cluster-dns=10.254.0.100 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --network-plugin=cni"

Note: Create the /var/lib/kubelet directory manually before starting kubelet.
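That is, on each node, matching the WorkingDirectory in the unit file above:

# mkdir -p /var/lib/kubelet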

Start kubelet:

[root@bigdata-60 ~]# systemctl daemon-reload
[root@bigdata-60 ~]# systemctl enable kubelet
[root@bigdata-60 ~]# systemctl start kubelet
[root@bigdata-60 ~]# systemctl status kubelet

II. Deploy kube-proxy

[root@bigdata-60 ~]# vim /usr/lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/proxy:

# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=10.0.3.60 --hostname-override=10.0.3.60 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

Start kube-proxy:

[root@bigdata-60 ~]# systemctl daemon-reload
[root@bigdata-60 ~]# systemctl enable kube-proxy
[root@bigdata-60 ~]# systemctl start kube-proxy
[root@bigdata-60 ~]# systemctl status kube-proxy
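Once kube-proxy is running, a rough sanity check is that it has started programming iptables; the KUBE-SERVICES chain it maintains should exist in the nat table (chain layout varies by version, so treat this as a sketch):

# iptables -t nat -nL KUBE-SERVICES | head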

III. Approve the kubelet TLS certificate requests

When kubelet first starts, it sends a certificate signing request to kube-apiserver; the Node is only added to the cluster once the request is approved.
Check the pending CSR requests:

[root@bigdata-06 ~]# kubectl get csr

At this point the certificate's CONDITION shows Pending.

Approve the CSR:

[root@bigdata-06 ~]# kubectl certificate approve <csr-name>

The certificate status then changes to Approved,Issued, and the node subsequently shows as Ready.
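An illustrative check on the master (CSR names, node ages, and column values will differ):

# kubectl get csr
NAME              AGE       REQUESTOR           CONDITION
node-csr-<hash>   1m        kubelet-bootstrap   Approved,Issued
# kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
10.0.3.60   Ready     <none>    1m        v1.9.6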
If you renew the kubernetes certificates without changing token.csv, the node rejoins the cluster automatically once kubelet restarts; it does not send a new CertificateSigningRequest, and no further kubectl certificate approve on the master is needed. This only holds as long as you do not delete /etc/kubernetes/ssl/kubelet* and /etc/kubernetes/kubelet.kubeconfig on the node; otherwise kubelet fails to start because it cannot find its certificates.

This covers the configuration of the individual master and node components. Some essential pieces, such as networking and DNS, are still missing and will be covered in a follow-up post.

References:

[1]. https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-on-centos.html

[2]. https://mritd.me/