JustDoIt · 112 posts written · 4 comments received
Search results: 18 matching posts found.
2023-11-22
k8s 1.28 HA Setup: kube-scheduler Cluster (07)
1 Create the kube-scheduler certificate request file

```shell
cat > kube-scheduler-csr.json << "EOF"
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.31.34",
    "192.168.31.35",
    "192.168.31.36"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
EOF
```

2 Generate the kube-scheduler certificate

```shell
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
# ls
# kube-scheduler.csr  kube-scheduler-csr.json  kube-scheduler-key.pem  kube-scheduler.pem
```

3 Create the kube-scheduler kubeconfig

```shell
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.31.100:6443 --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
```

4 Create the service configuration file

```shell
cat > kube-scheduler.conf << "EOF"
KUBE_SCHEDULER_OPTS="--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--v=2"
EOF
```

5 Create the systemd service file

```shell
cat > kube-scheduler.service << "EOF"
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

6 Sync the files to the cluster master nodes

```shell
cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/
scp kube-scheduler*.pem k8s-master02:/etc/kubernetes/ssl/
scp kube-scheduler*.pem k8s-master03:/etc/kubernetes/ssl/
scp kube-scheduler.kubeconfig kube-scheduler.conf k8s-master02:/etc/kubernetes/
scp kube-scheduler.kubeconfig kube-scheduler.conf k8s-master03:/etc/kubernetes/
scp kube-scheduler.service k8s-master02:/usr/lib/systemd/system/
scp kube-scheduler.service k8s-master03:/usr/lib/systemd/system/
```

7 Start the service

```shell
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
```
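The copy-then-scp sequence in the sync step repeats almost verbatim for every control-plane component. A dry-run sketch (the `distribute` helper, the `MASTERS` list, and the echo-only behavior are illustrative additions, not from the original) that prints the scp commands it would run:

```shell
# Dry-run sketch: print, rather than execute, the scp commands that push a
# component's files out to the other masters. Hostnames mirror the guide.
MASTERS="k8s-master02 k8s-master03"

distribute() {  # distribute <file> <remote-dir>: one scp per master
  for host in $MASTERS; do
    echo "scp $1 $host:$2"
  done
}

distribute kube-scheduler.kubeconfig /etc/kubernetes/
distribute kube-scheduler.service /usr/lib/systemd/system/
```

Swapping the echo for a real scp turns the dry run into the actual distribution step.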
November 22, 2023 · 3 reads · 0 comments · 0 likes
2023-11-22
k8s 1.28 HA Setup: kube-controller-manager Cluster (06)
1 Deploy kube-controller-manager

1.1 Create the kube-controller-manager certificate request file

```shell
cat > kube-controller-manager-csr.json << "EOF"
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.31.34",
    "192.168.31.35",
    "192.168.31.36"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
EOF
```

Notes: the hosts list contains the IPs of all kube-controller-manager nodes; CN is system:kube-controller-manager; O is system:kube-controller-manager, so Kubernetes' built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

1.2 Generate the kube-controller-manager certificate

```shell
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
# ls
# kube-controller-manager.csr  kube-controller-manager-csr.json  kube-controller-manager-key.pem  kube-controller-manager.pem
```

1.3 Create kube-controller-manager.kubeconfig

```shell
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.31.100:6443 --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
```

1.4 Create the kube-controller-manager configuration file

```shell
cat > kube-controller-manager.conf << "EOF"
KUBE_CONTROLLER_MANAGER_OPTS="--secure-port=10257 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.96.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--v=2"
EOF
```

1.5 Create the systemd service file

```shell
cat > kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

1.6 Sync the files to the cluster master nodes

```shell
cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/
scp kube-controller-manager*.pem k8s-master02:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem k8s-master03:/etc/kubernetes/ssl/
scp kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master02:/etc/kubernetes/
scp kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master03:/etc/kubernetes/
scp kube-controller-manager.service k8s-master02:/usr/lib/systemd/system/
scp kube-controller-manager.service k8s-master03:/usr/lib/systemd/system/
# Inspect the certificate
openssl x509 -in /etc/kubernetes/ssl/kube-controller-manager.pem -noout -text
```

1.7 Start the service

```shell
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
kubectl get componentstatuses
```
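The conf file stores every flag in one quoted KUBE_CONTROLLER_MANAGER_OPTS variable using trailing-backslash continuations, and systemd's EnvironmentFile= accepts the same continuations, which is why the unit file can reference a single $KUBE_CONTROLLER_MANAGER_OPTS. A small sketch (flag list trimmed, file name illustrative) showing that a plain shell source collapses the continuation lines the same way:

```shell
# Write a trimmed version of the conf file, then source it to confirm the
# backslash-newline continuations collapse into one flag string.
cat > demo-kcm.conf << "EOF"
KUBE_CONTROLLER_MANAGER_OPTS="--bind-address=127.0.0.1 \
--leader-elect=true \
--v=2"
EOF
. ./demo-kcm.conf
# The variable now holds all flags on one line, space-separated.
echo "$KUBE_CONTROLLER_MANAGER_OPTS"
```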
November 22, 2023 · 4 reads · 0 comments · 0 likes
2023-11-22
k8s 1.28 HA Setup: kubectl (05)
1 Deploy kubectl

1.1 Create the kubectl certificate request file

```shell
cat > admin-csr.json << "EOF"
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
EOF
```

Notes: kube-apiserver later uses RBAC to authorize requests from clients (such as kubelet, kube-proxy, and Pods). kube-apiserver predefines some RoleBindings for RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API. O sets this certificate's Group to system:masters: when a client presents this certificate to kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, the client is granted access to all APIs.

Note: this admin certificate is used later to generate the administrator's kubeconfig file. RBAC is now the recommended way to control roles and permissions in Kubernetes; Kubernetes takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, otherwise the kubectl create clusterrolebinding step below fails.

1.2 Generate the certificate

```shell
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
```

1.3 Copy the files to the target directory

```shell
cp admin*.pem /etc/kubernetes/ssl/
```

1.4 Generate the kubeconfig file

kube.config is kubectl's configuration file and contains everything needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

```shell
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.31.100:6443 --kubeconfig=kube.config
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
kubectl config use-context kubernetes --kubeconfig=kube.config
```

1.5 Install the kubectl config file and bind roles

```shell
mkdir ~/.kube
cp kube.config ~/.kube/config
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config
```

1.6 Check cluster status

```shell
export KUBECONFIG=$HOME/.kube/config
# Cluster info
kubectl cluster-info
# Component status
kubectl get componentstatuses
# Resources in all namespaces
kubectl get all --all-namespaces
```

1.7 Sync the kubectl config file to the other master nodes

```shell
# On k8s-master02:
mkdir /root/.kube
# On k8s-master03:
mkdir /root/.kube
# On k8s-master01:
scp /root/.kube/config k8s-master02:/root/.kube/config
scp /root/.kube/config k8s-master03:/root/.kube/config
```

1.8 Configure kubectl command completion (optional)

```shell
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile
```
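Since Kubernetes maps the certificate's CN to the User and O to the Group, the subject fields carry the identity. A minimal sketch (throwaway self-signed certificate, illustrative file names, not the cluster's real admin cert) showing how those fields land in the certificate subject:

```shell
# Generate a throwaway self-signed cert with the same subject fields that
# admin-csr.json requests, then read the subject back with openssl.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-admin-key.pem -out demo-admin.pem \
  -subj "/CN=admin/O=system:masters" 2>/dev/null
# The subject shows CN (the User) and O (the Group) together.
openssl x509 -in demo-admin.pem -noout -subject
```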
November 22, 2023 · 11 reads · 0 comments · 0 likes
2023-11-22
k8s 1.28 HA Setup: apiserver Cluster (04)
1 Kubernetes cluster deployment

1.1 Download the Kubernetes packages

```shell
wget --no-check-certificate https://dl.k8s.io/v1.28.4/kubernetes-server-linux-amd64.tar.gz
```

If the download fails, join the group for help.

1.2 Install the Kubernetes binaries

```shell
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
[root@k8s-master01 bin]# ll
total 1147568
-rwxr-xr-x 1 root root  61353984 Nov 16 01:16 apiextensions-apiserver
-rwxr-xr-x 1 root root  49102848 Nov 16 01:16 kubeadm
-rwxr-xr-x 1 root root  58933248 Nov 16 01:16 kube-aggregator
-rwxr-xr-x 1 root root 121745408 Nov 16 01:16 kube-apiserver
-rw-r--r-- 1 root root         8 Nov 16 01:16 kube-apiserver.docker_tag
-rw------- 1 root root 127259136 Nov 16 01:16 kube-apiserver.tar
-rwxr-xr-x 1 root root 117780480 Nov 16 01:16 kube-controller-manager
-rw-r--r-- 1 root root         8 Nov 16 01:16 kube-controller-manager.docker_tag
-rw------- 1 root root 123293696 Nov 16 01:16 kube-controller-manager.tar
-rwxr-xr-x 1 root root  49885184 Nov 16 01:16 kubectl
-rwxr-xr-x 1 root root  48828416 Nov 16 01:16 kubectl-convert
-rw-r--r-- 1 root root         8 Nov 16 01:16 kubectl.docker_tag
-rw------- 1 root root  55398400 Nov 16 01:16 kubectl.tar
-rwxr-xr-x 1 root root 110850048 Nov 16 01:16 kubelet
-rwxr-xr-x 1 root root   1605632 Nov 16 01:16 kube-log-runner
-rwxr-xr-x 1 root root  55107584 Nov 16 01:16 kube-proxy
-rw-r--r-- 1 root root         8 Nov 16 01:16 kube-proxy.docker_tag
-rw------- 1 root root  74757120 Nov 16 01:16 kube-proxy.tar
-rwxr-xr-x 1 root root  56070144 Nov 16 01:16 kube-scheduler
-rw-r--r-- 1 root root         8 Nov 16 01:16 kube-scheduler.docker_tag
-rw------- 1 root root  61583360 Nov 16 01:16 kube-scheduler.tar
-rwxr-xr-x 1 root root   1527808 Nov 16 01:16 mounter
[root@k8s-master01 bin]# pwd
/data/k8s-work/kubernetes/server/bin
cp -p kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
```

1.3 Distribute the Kubernetes binaries

```shell
scp -rp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master02:/usr/local/bin/
scp -rp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master03:/usr/local/bin/
scp -rp kubelet kube-proxy k8s-node01:/usr/local/bin
scp -rp kubelet kube-proxy k8s-node02:/usr/local/bin
```

1.4 Create directories on the cluster nodes (all nodes)

```shell
mkdir -p /etc/kubernetes/
mkdir -p /etc/kubernetes/ssl
mkdir -p /var/log/kubernetes
```

2 Deploy api-server

2.1 Create the apiserver certificate request file

```shell
cd /data/k8s-work/
cat > kube-apiserver-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.31.32",
    "192.168.31.33",
    "192.168.31.34",
    "192.168.31.35",
    "192.168.31.36",
    "192.168.31.37",
    "192.168.31.38",
    "192.168.31.39",
    "192.168.31.40",
    "192.168.31.100",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}
EOF
```

Note: if the hosts field is non-empty, it must list the IPs (including the VIP) or domain names authorized to use this certificate. Since this certificate is used across the cluster, include every node IP; to make later scaling easier, add a few spare IPs. Also include the first IP of the service network (usually the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.96.0.1).

2.2 Generate the apiserver certificate and token file

```shell
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```

Notes: this creates the TOKEN needed by the TLS bootstrapping mechanism. TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on each node must use valid CA-signed certificates to talk to kube-apiserver. With many nodes, issuing those client certificates is a lot of work and complicates cluster scaling. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the kubelet's certificate is signed dynamically by the apiserver. This approach is strongly recommended on nodes and is currently used mainly for the kubelet; kube-proxy still gets a single centrally issued certificate.

2.3 Create the apiserver service configuration file (k8s-master01)

```shell
cat > /etc/kubernetes/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.31.34 \
--secure-port=6443 \
--advertise-address=192.168.31.34 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.31.34:2379,https://192.168.31.35:2379,https://192.168.31.36:2379 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--v=4"
EOF
```

2.4 Create the systemd service file

```shell
cat > /etc/systemd/system/kube-apiserver.service << "EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

2.5 Sync the files to the cluster master nodes

```shell
cp ca*.pem /etc/kubernetes/ssl/
cp kube-apiserver*.pem /etc/kubernetes/ssl/
cp token.csv /etc/kubernetes/
scp /etc/kubernetes/token.csv k8s-master02:/etc/kubernetes
scp /etc/kubernetes/token.csv k8s-master03:/etc/kubernetes
scp /etc/kubernetes/ssl/kube-apiserver*.pem k8s-master02:/etc/kubernetes/ssl
scp /etc/kubernetes/ssl/kube-apiserver*.pem k8s-master03:/etc/kubernetes/ssl
scp /etc/kubernetes/ssl/ca*.pem k8s-master02:/etc/kubernetes/ssl
scp /etc/kubernetes/ssl/ca*.pem k8s-master03:/etc/kubernetes/ssl
```

On k8s-master02:

```shell
cat > /etc/kubernetes/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.31.35 \
--secure-port=6443 \
--advertise-address=192.168.31.35 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.31.34:2379,https://192.168.31.35:2379,https://192.168.31.36:2379 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--v=4"
EOF
```

On k8s-master03:

```shell
cat > /etc/kubernetes/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.31.36 \
--secure-port=6443 \
--advertise-address=192.168.31.36 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.31.34:2379,https://192.168.31.35:2379,https://192.168.31.36:2379 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--v=4"
EOF
```

On k8s-master01:

```shell
scp /etc/systemd/system/kube-apiserver.service k8s-master02:/etc/systemd/system/kube-apiserver.service
scp /etc/systemd/system/kube-apiserver.service k8s-master03:/etc/systemd/system/kube-apiserver.service
```

2.6 Start the apiserver service

```shell
systemctl daemon-reload
systemctl enable --now kube-apiserver
systemctl status kube-apiserver
# Test
curl --insecure https://192.168.31.34:6443/
curl --insecure https://192.168.31.35:6443/
curl --insecure https://192.168.31.36:6443/
curl --insecure https://192.168.31.100:6443/
```
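The three kube-apiserver.conf files differ only in --bind-address and --advertise-address. A hedged sketch (node list and output directory are illustrative, and the flag set is trimmed for brevity) that templates the per-node file instead of maintaining three near-identical copies:

```shell
# Generate one kube-apiserver.conf per master, varying only the node IP.
# Writes to ./generated/ here; on a real cluster each file would be copied
# to /etc/kubernetes/ on its node. Flag list trimmed for the sketch.
mkdir -p generated
for node_ip in 192.168.31.34 192.168.31.35 192.168.31.36; do
  cat > "generated/kube-apiserver.conf.$node_ip" << EOF
KUBE_APISERVER_OPTS="--anonymous-auth=false \\
--bind-address=$node_ip \\
--advertise-address=$node_ip \\
--secure-port=6443 \\
--service-cluster-ip-range=10.96.0.0/16 \\
--etcd-servers=https://192.168.31.34:2379,https://192.168.31.35:2379,https://192.168.31.36:2379 \\
--v=4"
EOF
done
```

Each generated file can then be pushed to its node with scp, replacing the hand-edited per-master blocks.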
November 22, 2023 · 3 reads · 0 comments · 0 likes
2023-11-22
k8s 1.28 HA Setup: etcd Cluster (03)
Operate on k8s-master01.

1 Create the working directory

```shell
mkdir -p /data/k8s-work
```

2 Get the cfssl tools

```shell
cd /data/k8s-work
wget --no-check-certificate https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget --no-check-certificate https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget --no-check-certificate https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
```

If the download fails, join the group for help.

Notes: cfssl is a PKI/TLS toolkit written in Go and open-sourced by CloudFlare. Its main programs are:
- cfssl, the CFSSL command-line tool
- cfssljson, which takes JSON output from cfssl and writes the certificates, keys, CSRs, and bundles to files

```shell
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
# cfssl version
# Version: 1.2.0
# Revision: dev
# Runtime: go1.6
```

3 Create the CA certificate

3.1 Configure the CA certificate request file

```shell
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
```

3.2 Create the CA certificate

```shell
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```

3.3 Configure the CA signing policy

```shell
cfssl print-defaults config > ca-config.json
cat > ca-config.json <<"EOF"
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
```

server auth means a client can use this CA to verify certificates presented by servers; client auth means a server can use this CA to verify certificates presented by clients.

4 Create the etcd certificate

4.1 Configure the etcd certificate request file

```shell
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.31.34",
    "192.168.31.35",
    "192.168.31.36"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Beijing",
    "L": "Beijing",
    "O": "kubemsb",
    "OU": "CN"
  }]
}
EOF
```

4.2 Generate the etcd certificate

```shell
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
# ls
# ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
```

5 Deploy the etcd cluster

5.1 Download the etcd package

```shell
wget https://github.com/etcd-io/etcd/releases/download/v3.5.10/etcd-v3.5.10-linux-amd64.tar.gz
```

If the download fails, join the group for help.

5.2 Install etcd

```shell
tar -xvf etcd-v3.5.10-linux-amd64.tar.gz
cp -p etcd-v3.5.10-linux-amd64/etcd* /usr/local/bin/
```

5.3 Distribute etcd

```shell
scp etcd-v3.5.10-linux-amd64/etcd* k8s-master02:/usr/local/bin/
scp etcd-v3.5.10-linux-amd64/etcd* k8s-master03:/usr/local/bin/
```

5.4 Create the configuration file

```shell
mkdir /etc/etcd
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.34:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.34:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.34:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.34:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.31.34:2380,etcd2=https://192.168.31.35:2380,etcd3=https://192.168.31.36:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

Notes:
- ETCD_NAME: node name, unique within the cluster
- ETCD_DATA_DIR: data directory
- ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
- ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
- ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
- ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
- ETCD_INITIAL_CLUSTER: cluster member addresses
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token
- ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing when joining an existing one

5.5 Create the service file

```shell
mkdir -p /etc/etcd/ssl
mkdir -p /var/lib/etcd/default.etcd
cd /data/k8s-work
cp ca*.pem /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl
cat > /etc/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

5.6 Sync the etcd configuration to the other master nodes

Create the directories:

```shell
mkdir -p /etc/etcd
mkdir -p /etc/etcd/ssl
mkdir -p /var/lib/etcd/default.etcd
```

Service configuration file (the etcd node name and IP addresses must be changed per node):

```shell
for i in k8s-master02 k8s-master03
do
  scp /etc/etcd/etcd.conf $i:/etc/etcd/
done
```

On k8s-master02:

```shell
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.35:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.35:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.35:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.35:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.31.34:2380,etcd2=https://192.168.31.35:2380,etcd3=https://192.168.31.36:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

On k8s-master03:

```shell
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.36:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.36:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.36:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.36:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.31.34:2380,etcd2=https://192.168.31.35:2380,etcd3=https://192.168.31.36:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

Certificate files:

```shell
for i in k8s-master02 k8s-master03
do
  scp /etc/etcd/ssl/* $i:/etc/etcd/ssl
done
```

Service unit file:

```shell
for i in k8s-master02 k8s-master03
do
  scp /etc/systemd/system/etcd.service $i:/etc/systemd/system/
done
```

5.7 Start the etcd cluster

```shell
systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd
```

5.8 Verify the cluster status

```shell
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.31.34:2379,https://192.168.31.35:2379,https://192.168.31.36:2379 endpoint health
```

```
+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.31.34:2379 |   true | 10.393062ms |       |
| https://192.168.31.35:2379 |   true |  15.70437ms |       |
| https://192.168.31.36:2379 |   true | 15.871684ms |       |
+----------------------------+--------+-------------+-------+
```

Check etcd database performance:

```shell
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.31.34:2379,https://192.168.31.35:2379,https://192.168.31.36:2379 check perf
```

```
59 / 60 Booooooooooooooooooooooooooooooooooooooooooooooooooom ! 98.33%
PASS: Throughput is 151 writes/s
PASS: Slowest request took 0.011820s
PASS: Stddev is 0.000712s
PASS
```

```shell
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.31.34:2379,https://192.168.31.35:2379,https://192.168.31.36:2379 member list
```

```
+------------------+---------+-------+----------------------------+----------------------------+------------+
|        ID        | STATUS  | NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
|  571a14daac64a5f | started | etcd3 | https://192.168.31.36:2380 | https://192.168.31.36:2379 |      false |
| c1975c3c20f6f75b | started | etcd1 | https://192.168.31.34:2380 | https://192.168.31.34:2379 |      false |
| fed2d7ddda540f99 | started | etcd2 | https://192.168.31.35:2380 | https://192.168.31.35:2379 |      false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
```

```shell
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.31.34:2379,https://192.168.31.35:2379,https://192.168.31.36:2379 endpoint status
```

```
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.31.34:2379 | c1975c3c20f6f75b |  3.5.10 |   22 MB |      true |      false |         2 |       9010 |               9010 |        |
| https://192.168.31.35:2379 | fed2d7ddda540f99 |  3.5.10 |   22 MB |     false |      false |         2 |       9010 |               9010 |        |
| https://192.168.31.36:2379 |  571a14daac64a5f |  3.5.10 |   22 MB |     false |      false |         2 |       9010 |               9010 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
```
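The three etcd.conf files differ only in ETCD_NAME and the node IP. A sketch (output directory and name:ip pairing are illustrative; the real files go to /etc/etcd/ on each node) that generates all three from one template instead of hand-editing each copy:

```shell
# Generate etcd.conf for each member from a single template.
# The name:ip pairs mirror the cluster layout above.
CLUSTER="etcd1=https://192.168.31.34:2380,etcd2=https://192.168.31.35:2380,etcd3=https://192.168.31.36:2380"
mkdir -p generated-etcd
for member in etcd1:192.168.31.34 etcd2:192.168.31.35 etcd3:192.168.31.36; do
  name=${member%%:*}   # part before the colon: the member name
  ip=${member#*:}      # part after the colon: the member IP
  cat > "generated-etcd/etcd.conf.$name" << EOF
#[Member]
ETCD_NAME="$name"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ip:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ip:2379"
ETCD_INITIAL_CLUSTER="$CLUSTER"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
done
```

Each generated file can then be pushed to its node with the same scp loop used above.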
November 22, 2023 · 5 reads · 0 comments · 0 likes