Preface
The kind cluster I had been using for single-node k8s testing recently broke: the VM would crash a few minutes after starting and I never found the root cause. On top of that, kind makes it awkward to customize a k8s deployment, so I decided to rebuild a single-node k8s from binaries instead.
Since this cluster is only for development and testing, the control plane is not made highly available: etcd, apiserver, controller-manager and scheduler each run as a single instance.
Environment:
- Host: Debian 12.7 with 4 CPU cores, 4 GB RAM and 30 GB storage (2C2G is enough if you only want to deploy k8s)
- Container runtime: containerd v1.7.22
- etcd: v3.4.34
- kubernetes: v1.30.5
- CNI: Calico v3.25.0
Most of the configuration files used in this article have been uploaded to gitee - k8s-note, under the directory "安装k8s/二进制单机部署k8s-v1.30.5"; clone the repo if you need them.
Preparation
Most of the commands in this section require root privileges. If a command fails with a permission error, switch to the root user or use sudo.
Adjust host parameters
- Set the hostname. Kubernetes requires every node to have a distinct hostname.
hostnamectl set-hostname k8s-node1
- Edit the /etc/hosts file. Skip this step if your internal network has its own DNS.
192.168.0.31 k8s-node1
- Install a time synchronization service. With multiple hosts, make sure their clocks stay in sync. If there is a time server on the internal network, edit chrony's configuration to point at it.
```
sudo apt install -y chrony
sudo systemctl start chrony
```
- Turn off swap. By default, k8s refuses to run on a host with swap enabled. The command below only disables swap temporarily; to persist the change, edit /etc/fstab and delete or comment out the swap entries (see the sketch at the end of this list).
sudo swapoff -a
- Load the required kernel modules. If you skip this step, configuring the kernel parameters in the next step will fail.
```
# 1. Add the config
cat <<EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
# 2. Load the modules immediately
modprobe overlay
modprobe br_netfilter
# 3. Verify. No output means the module was not loaded.
lsmod | grep br_netfilter
```
- Configure kernel parameters. The important ones are net.bridge.bridge-nf-call-ip6tables, net.bridge.bridge-nf-call-iptables and net.ipv4.ip_forward; adjust the rest to your needs.
```
# 1. Add the config file
cat << EOF > /etc/sysctl.d/k8s-sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
vm.swappiness = 0
EOF
# 2. Apply the settings
sysctl -p /etc/sysctl.d/k8s-sysctl.conf
```
- Enable IPVS. Write a modules-load.d file so the modules are loaded automatically at boot. Use IPVS whenever you can; it improves the cluster's load-balancing performance. See: https://kubernetes.io/zh-cn/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/
```
# 1. Install dependencies
apt install -y ipset ipvsadm
# 2. Load the modules immediately
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
# 3. Persist them in a config file
cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# 4. Check that the modules are loaded
lsmod | grep ip_vs
```
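To persist the swap change mentioned in the list above, one option is to comment out the swap entries in /etc/fstab. A minimal sketch, assuming a standard fstab layout (back the file up first):

```
cp /etc/fstab /etc/fstab.bak
# Comment out every uncommented line that mounts a swap area
sed -ri 's/^([^#].*\sswap\s.*)$/# \1/' /etc/fstab
# Confirm that no swap devices remain active
swapoff -a
swapon --show
```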
Install containerd
Since v1.24, k8s no longer supports docker directly as a container runtime, so this article uses containerd. The binary package can be downloaded from GitHub - containerd; be sure to pick the cri-containerd-cni variant.
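For reference, downloading the release used here might look like the sketch below; the asset name is my assumption of containerd's release naming convention, so verify the exact link on the releases page:

```
# Download the cri-containerd-cni bundle for v1.7.22 (adjust the version as needed)
VERSION=1.7.22
wget https://github.com/containerd/containerd/releases/download/v${VERSION}/cri-containerd-cni-${VERSION}-linux-amd64.tar.gz
```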
- Extract the archive to the root directory. The files inside are laid out relative to /, so it must be extracted directly to the root directory.
tar xf cri-containerd-cni-1.7.22-linux-amd64.tar.gz -C /
- Create the configuration directory and generate a default config file
```
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
```
- Edit the configuration file /etc/containerd/config.toml and change the following:
```
# For distributions that use systemd as the init system, the official recommendation
# is to use systemd as the container cgroup driver: change false to true
SystemdCgroup = true
# Change the pause image to the copy I uploaded to Alibaba Cloud.
# In an internal network, point this at your internal registry instead.
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/rainux/pause:3.9"
```
- Start containerd
```
systemctl start containerd
systemctl enable containerd
```
- Run a quick command to check that containerd works. If it does not report an error, containerd is usually fine.
crictl images
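If crictl complains about an unset runtime endpoint, you can point it at the containerd socket explicitly; a small sketch, assuming the default socket path:

```
cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
# Should print containerd's runtime status without errors
crictl info
```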
Generate the CA certificate
Both the k8s components and etcd will use the CA certificate later. If your organization runs a central CA, simply use the certificates it issues; if not, a self-signed CA certificate is enough to complete the security configuration. Here we generate our own CA.
```
# Generate the private key ca.key
openssl genrsa -out ca.key 2048
# Generate the root certificate ca.crt from the private key
# /CN is the master's hostname or IP address
# days is the certificate's validity period
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-node1" -days 36500 -out ca.crt
# Copy the CA certificate and key to /etc/kubernetes/pki
mkdir -p /etc/kubernetes/pki
cp ca.crt ca.key /etc/kubernetes/pki/
```
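A quick sanity check of the generated CA, for example:

```
# Print the subject and validity period of the new CA certificate
openssl x509 -in ca.crt -noout -subject -dates
```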
Install etcd
The etcd package can be downloaded from the official site. After extracting it, put the etcd and etcdctl binaries in a directory that is on your PATH.
- Create the file etcd_ssl.cnf. The IP address is that of the etcd node.
```
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
IP.1 = 192.168.0.31
```
- Create the etcd server certificate
```
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt
```
- Create the etcd client certificate
```
openssl genrsa -out etcd_client.key 2048
openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr
openssl x509 -req -in etcd_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
```
- Edit the etcd configuration file. Adjust directories, file paths, IP addresses and ports to your environment.
```
ETCD_NAME=etcd1
ETCD_DATA_DIR=/home/rainux/apps/etcd/data
ETCD_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.0.31:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.0.31:2379
ETCD_PEER_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_PEER_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.0.31:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.0.31:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.0.31:2380"
ETCD_INITIAL_CLUSTER_STATE=new
```
- Create /etc/systemd/system/etcd.service; adjust the paths of the config file and the etcd binary to match your setup.
```
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
User=rainux
EnvironmentFile=/home/rainux/apps/etcd/conf/etcd.conf
ExecStart=/home/rainux/apps/etcd/etcd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
- Start etcd
```
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
# Check the service status
systemctl status etcd
```
- Verify etcd's health with the etcd client
```
etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379 endpoint health
# A healthy endpoint prints something like:
# https://192.168.0.31:2379 is healthy: successfully committed proposal: took = 13.705325ms
```
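Besides the health check, a simple write/read round trip also confirms that the client certificate works; a small sketch (foo/bar is just a throwaway key):

```
ETCDCTL="etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379"
$ETCDCTL put foo bar
$ETCDCTL get foo
$ETCDCTL del foo
```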
Install the control plane
The Kubernetes binary packages can be downloaded from GitHub: https://github.com/kubernetes/kubernetes/releases
Find the binary download links in the changelog and grab the server binary package; it contains the binaries for both the master and the nodes.
After extracting it, move the binaries into the /usr/local/bin directory.
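A sketch of what the download and install might look like; the dl.k8s.io URL below is my assumption, so prefer the link listed in the changelog if it differs:

```
# Download and unpack the server binary package for v1.30.5
wget https://dl.k8s.io/v1.30.5/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
# Copy the binaries used in this article onto the PATH
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,kubectl} /usr/local/bin/
```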
Install kube-apiserver
The apiserver's core job is to expose HTTP REST APIs for creating, reading, updating, deleting and watching every kind of k8s resource object. It is the hub through which all the other components exchange data and communicate, effectively the cluster's data bus and data center. It is also the entry point for cluster management and for resource quota control, and it provides the cluster's security mechanisms.
- Create master_ssl.cnf. DNS.5 is the host's name, which should resolve via /etc/hosts. IP.1 is the Cluster IP of the kubernetes Master Service, and IP.2 is the apiserver host's IP.
```
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-node1
IP.1 = 169.169.0.1
IP.2 = 192.168.0.31
```
- Generate the certificate files
```
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=k8s-node1" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
```
- Use cfssl to create sa.pub and sa-key.pem. cfssl and cfssljson can be downloaded from GitHub - cfssl
```
cat <<EOF > sa-csr.json
{
  "CN": "sa",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca sa-csr.json | cfssljson -bare sa -
openssl x509 -in sa.pem -pubkey -noout > sa.pub
```
- Edit the kube-apiserver configuration file; adjust the file paths and the etcd address to your environment.
```
KUBE_API_ARGS="--secure-port=6443 --tls-cert-file=/home/rainux/apps/certs/apiserver.crt --tls-private-key-file=/home/rainux/apps/certs/apiserver.key --client-ca-file=/home/rainux/apps/certs/ca.crt --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/home/rainux/apps/certs/sa.pub --service-account-signing-key-file=/home/rainux/apps/certs/sa-key.pem --apiserver-count=1 --endpoint-reconciler-type=master-count --etcd-servers=https://192.168.0.31:2379 --etcd-cafile=/home/rainux/apps/certs/ca.crt --etcd-certfile=/home/rainux/apps/certs/etcd_client.crt --etcd-keyfile=/home/rainux/apps/certs/etcd_client.key --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=30000-32767 --allow-privileged=true --audit-log-maxsize=100 --audit-log-maxage=15 --audit-log-path=/home/rainux/apps/kubernetes/logs/apiserver.log --v=2"
```
- Create the service file /etc/systemd/system/kube-apiserver.service
```
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
- Start kube-apiserver
```
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
# Check the service status
systemctl status kube-apiserver
```
- Generate the client certificate
```
openssl genrsa -out client.key 2048
# /CN identifies the name of the client user that connects to the apiserver
openssl req -new -key client.key -subj "/CN=admin" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 36500
```
- Create the kubeconfig file that clients need to connect to the apiserver. Here server is the apiserver's address (in an HA setup this would be the address of the load balancer, such as nginx, in front of the apiservers); adjust it to your environment. kubectl can use this kubeconfig as well, so in a development environment you can simply save it as $HOME/.kube/config.
```
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://192.168.0.31:6443
    certificate-authority: /home/rainux/apps/certs/ca.crt
users:
- name: admin
  user:
    client-certificate: /home/rainux/apps/certs/client.crt
    client-key: /home/rainux/apps/certs/client.key
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
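```

With the kubeconfig saved as $HOME/.kube/config, a quick check that the apiserver responds:

```
kubectl version
# The readyz endpoint reports the health of the apiserver's subsystems
kubectl get --raw='/readyz?verbose'
```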
Install kube-controller-manager
controller-manager uses the apiserver's APIs to watch the state of specific resources in the cluster in real time; whenever a resource object deviates from its desired state, controller-manager tries to reconcile it back to the desired state.
- Create the configuration file /home/rainux/apps/kubernetes/conf/kube-controller-manager.conf
```
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/home/rainux/.kube/config --leader-elect=true --service-cluster-ip-range=169.169.0.0/16 --service-account-private-key-file=/home/rainux/apps/certs/apiserver.key --root-ca-file=/home/rainux/apps/certs/ca.crt --v=0"
```
- Create the service file /etc/systemd/system/kube-controller-manager.service
```
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
- Start kube-controller-manager
```
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
```
Install kube-scheduler
- Create the configuration file /home/rainux/apps/kubernetes/conf/kube-scheduler.conf
```
KUBE_SCHEDULER_ARGS="--kubeconfig=/home/rainux/.kube/config --leader-elect=true --v=0"
```
- Create the service file /etc/systemd/system/kube-scheduler.service
```
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
- Start kube-scheduler
```
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
```
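All control-plane components are now running, so a quick health check is possible (kubectl get componentstatuses is deprecated, but it is still a convenient summary on a small dev cluster):

```
kubectl get componentstatuses
# scheduler, controller-manager and etcd-0 should all be reported as Healthy
```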
Install the worker node
Install kubelet
- Create the file /home/rainux/apps/kubernetes/conf/kubelet.conf. Adjust hostname-override and kubeconfig to your environment.
```
KUBELET_ARGS="--kubeconfig=/home/rainux/.kube/config --config=/home/rainux/apps/kubernetes/conf/kubelet.config --hostname-override=k8s-node1 --v=0 --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```
- Create the /home/rainux/apps/kubernetes/conf/kubelet.config file.
```
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0                 # address the kubelet listens on
port: 10250                      # port the kubelet listens on
cgroupDriver: systemd            # cgroup driver; the default is cgroupfs, systemd is recommended
clusterDNS: ["169.169.0.100"]    # cluster DNS address
clusterDomain: cluster.local     # DNS domain suffix for services
authentication:                  # whether anonymous access / webhook authentication is allowed
  anonymous:
    enabled: true
```
- Create the service file /etc/systemd/system/kubelet.service
```
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kubelet.conf
ExecStart=/usr/local/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
- Start kubelet
```
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
```
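Once kubelet has registered, the node should appear in the cluster (it will stay NotReady until the CNI plugin is installed further below):

```
kubectl get nodes -o wide
```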
Install kube-proxy
- Create the configuration file /home/rainux/apps/kubernetes/conf/kube-proxy.conf. The proxy-mode parameter defaults to iptables; if IPVS is installed, ipvs is recommended.
```
KUBE_PROXY_ARGS="--kubeconfig=/home/rainux/.kube/config --hostname-override=k8s-node1 --proxy-mode=ipvs --v=0"
```
- Create the service file /etc/systemd/system/kube-proxy.service
```
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=kubelet.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
- Start kube-proxy
```
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
```
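If proxy-mode is set to ipvs, you can confirm that kube-proxy is actually programming IPVS rules:

```
# A virtual server for the kubernetes service (169.169.0.1:443) should be listed
ipvsadm -Ln
```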
Install Calico
- Download the Calico manifest
wget https://docs.projectcalico.org/manifests/calico.yaml
- If you can reach Docker Hub normally, apply the manifest as-is to create the Calico resources; otherwise, change the image addresses in it first (see the sed sketch at the end of this section). If you are also using Calico v3.25.0, you can use the images I uploaded to Alibaba Cloud.
```
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:cni-v3.25.0
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:node-v3.25.0
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:kube-controllers-v3.25.0
```
- Install
kubectl create -f calico.yaml
- Check whether the Calico pods are running. If everything is fine they should all be Running; if not, describe the pods to find out what went wrong.
kubectl get pods -A
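The image replacement mentioned above can be scripted. A sketch, assuming the manifest references the upstream calico/cni, calico/node and calico/kube-controllers images at v3.25.0:

```
sed -i 's#image: .*calico/cni:v3.25.0#image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:cni-v3.25.0#' calico.yaml
sed -i 's#image: .*calico/node:v3.25.0#image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:node-v3.25.0#' calico.yaml
sed -i 's#image: .*calico/kube-controllers:v3.25.0#image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:kube-controllers-v3.25.0#' calico.yaml
# Review the result before applying
grep 'image:' calico.yaml
```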
Install CoreDNS
- Create the deployment file coredns.yaml. Note that the Service pins a clusterIP, and the image has been changed to the copy I uploaded to Alibaba Cloud.
```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    cluster.local {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local 169.169.0.0/16 {
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    . {
        cache 30
        loadbalance
        forward . /etc/resolv.conf
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/rainux/coredns:1.11.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.0.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
```
- Create the CoreDNS resources
kubectl create -f coredns.yaml
- The repo contains a test-dns.yaml for checking that DNS works. After creating the test objects, install nslookup in the debian pod and check that svc-nginx can be resolved.
```
# Create the DNS test pod
kubectl create -f test-dns.yaml
# Install nslookup and curl inside the debian pod
apt update -y
apt install -y dnsutils curl
# Use nslookup and curl to check that the nginx service is reachable by name
nslookup svc-nginx
curl http://svc-nginx
```
Install metrics-server
In recent k8s versions, both resource metrics collection and HPA depend on metrics-server.
- Create the deployment file. Note the image address.
```
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls   # added so that the kubelet's self-signed certificate is accepted
        image: registry.cn-hangzhou.aliyuncs.com/rainux/metrics-server:v0.7.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 10250
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
```
- Create the resources
kubectl create -f metrics-server.yaml
- Run these commands to check that it works
```
kubectl top node
kubectl top pod
```
Summary
Once the steps above are done, you have a single-node k8s cluster suitable for development and testing. Adding more nodes later is straightforward, and the binary deployment also makes it easy to tweak cluster parameters.