Preface

1. Create the PV and PVC

Here I use a statically provisioned PV (the native K8S PersistentVolume resource) backed by NFS rather than dynamic storage, and all of the cluster's data lives under a single shared directory. If your requirements differ, adjust accordingly.

--- 1. Install the NFS server
[root@k8s-master ~]# yum install -y rpcbind nfs-utils
## start rpcbind
[root@k8s-master ~]# systemctl enable --now rpcbind
## start nfs
[root@k8s-master ~]# systemctl enable --now nfs
## configuration
[root@k8s-master ~]# cat /etc/exports
/data/nfs/zookeeper/ 192.168.0.0/24(rw) ## share /data/nfs/zookeeper/ with the 192.168.0.0/24 network, read-write
## directories and permissions
[root@k8s-master ~]# chown -R nfsnobody.nfsnobody /data/
## the ZooKeeper container runs as UID/GID 1000 (see the StatefulSet securityContext), so hand the exported directory to that user
[root@k8s-master ~]# chown -R 1000:1000 /data/nfs/
[root@k8s-master ~]# chmod -R 755 /data/nfs/
[root@k8s-master ~]# systemctl reload nfs
## mount test (only /data/nfs/zookeeper/ is exported, so mount that path)
mount -t nfs 192.168.0.160:/data/nfs/zookeeper/ /mnt/
## install the NFS client on every K8S node
[root@k8s-master ~]# yum install -y nfs-utils
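
Optionally, before moving on, confirm the export is visible from any node (assuming the server address 192.168.0.160 used above):

## list the exports published by the NFS server
showmount -e 192.168.0.160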

--- 2. Create the PV
[root@k8s-master /manifests/pv]# cat zk-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  ## PersistentVolume is cluster-scoped, so no namespace is set here
  name: zookeeper-pv
spec:
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.0.160
    path: /data/nfs/zookeeper
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi

--- 3. Create the PVC
[root@k8s-master /manifests/pv]# cat zk-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
  namespace: zookeeper
spec:
  ## access mode requested by the claim
  accessModes:
  - ReadWriteMany
  ## amount of storage requested
  resources:
    requests:
      storage: 2Gi
  ## bind directly to the PV defined above
  volumeName: zookeeper-pv

2. Write the Kubernetes resource manifests

2.1 ConfigMap for mounting the configuration file

## This mainly mounts the zoo.cfg configuration file; add more settings if you need them
[root@k8s-master /manifests]# cat configmap/zoo.cfg 
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-config
  namespace: zookeeper
data:
  zoo.cfg: |
    tickTime=2000
    dataDir=/data
    clientPort=2181
    initLimit=10
    syncLimit=5
    server.1=zk-0.zk-headless:2888:3888
    server.2=zk-1.zk-headless:2888:3888
    server.3=zk-2.zk-headless:2888:3888
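
Once the StatefulSet from section 2.3 is running (deployment steps in 2.4), a quick way to confirm the file is rendered and mounted where ZooKeeper expects it:

[root@k8s-master /manifests]# kubectl describe configmap zk-config -n zookeeper
[root@k8s-master /manifests]# kubectl exec -it zk-0 -n zookeeper -- cat /conf/zoo.cfg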

2.2 Headless Service for communication between nodes

## A headless Service is used, as officially recommended
[root@k8s-master /manifests]# cat service/zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  namespace: zookeeper
  labels:
    app: zookeeper
spec:
  ## clusterIP: None is what actually makes this Service headless
  clusterIP: None
  selector:
    app: zookeeper
  ports:
  - port: 2181
    name: client
  - port: 2888
    name: follower
  - port: 3888
    name: election
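
Because the Service is headless, each pod gets its own DNS record (zk-0.zk-headless.zookeeper.svc.<cluster-domain>, and so on), which is exactly what the server.N entries in zoo.cfg point at. After deployment, the per-pod addresses behind the Service can be listed with:

[root@k8s-master /manifests]# kubectl get endpoints zk-headless -n zookeeper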

2.3 StatefulSet for deploying the stateful ZooKeeper nodes

[root@k8s-master /manifests]# cat StatefulSet/zookeeper.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: zookeeper
spec:
  serviceName: zk-headless
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      initContainers:
        - name: set-myid
          image: busybox:1.28
          command:
            - sh
            - -c
            - |
              idx=${HOSTNAME##*-}
              echo $((idx + 1)) > /data/myid
          volumeMounts:
            - name: datadir
              mountPath: /data
      containers:
      - name: zookeeper
        image: 192.168.0.77:32237/k8s/zookeeper:latest 
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: follower
        - containerPort: 3888
          name: election
        ## myid is written by the set-myid init container above, so the image's
        ## ZOO_MY_ID variable is not needed (it only accepts a plain integer anyway)
        volumeMounts:
        - name: datadir
          mountPath: /data
        - name: config
          mountPath: /conf/zoo.cfg
          subPath: zoo.cfg
      volumes:
      - name: config
        configMap:
          name: zk-config
      - name: datadir
        ## bind the PVC declared in section 1
        persistentVolumeClaim:
          claimName: zookeeper-pvc
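
The set-myid init container above uses shell parameter expansion to turn the pod's ordinal into a ZooKeeper server id (zk-0 → 1, zk-1 → 2, zk-2 → 3), matching the server.N entries in zoo.cfg. The expansion can be tried on its own:

## ${HOSTNAME##*-} strips everything up to the last "-", leaving the pod ordinal
[root@k8s-master ~]# sh -c 'HOSTNAME=zk-2; idx=${HOSTNAME##*-}; echo $((idx + 1))'
3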

2.4 Deploy the ZooKeeper cluster

--- 1. Create the namespace
[root@k8s-master /manifests]# kubectl create namespace zookeeper
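
The PV and PVC written in section 1 are not applied by any of the later steps, so deploy them now that the namespace exists (the paths below assume the manifests sit under /manifests/pv as shown earlier):

--- 2. Deploy the PV and PVC
[root@k8s-master /manifests]# kubectl apply -f pv/zk-pv.yaml
[root@k8s-master /manifests]# kubectl apply -f pv/zk-pvc.yaml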

--- 3. Deploy the ConfigMap
[root@k8s-master /manifests]# kubectl apply -f configmap/zoo.cfg

--- 4. Deploy the Service
[root@k8s-master /manifests]# kubectl apply -f service/zookeeper.yaml

--- 5. Deploy the StatefulSet
[root@k8s-master /manifests]# kubectl apply -f StatefulSet/zookeeper.yaml

--- 6. Check the status
[root@k8s-master /manifests]# kubectl get pod -n zookeeper
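
It is also worth confirming that the claim bound to the PV and watching the pods until all three replicas are Running:

[root@k8s-master /manifests]# kubectl get pv,pvc -n zookeeper
[root@k8s-master /manifests]# kubectl get pod -n zookeeper -w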

2.5 Exec into a Pod and test the client port

## open the client
[root@k8s-master /manifests]# kubectl exec -it zk-0 -n zookeeper -- zkCli.sh -server 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 127.0.0.1:2181(CONNECTED) 1] create /test "hello"
Created /test
[zk: 127.0.0.1:2181(CONNECTED) 2] get /test 
hello
[zk: 127.0.0.1:2181(CONNECTED) 3] delete /test

## check the cluster status
[root@k8s-master /manifests]# kubectl exec -it zk-0 -n zookeeper -- zkServer.sh status 
[root@k8s-master /manifests]# kubectl exec -it zk-1 -n zookeeper -- zkServer.sh status 
[root@k8s-master /manifests]# kubectl exec -it zk-2 -n zookeeper -- zkServer.sh status 
Mode: leader
Mode: follower
Mode: follower

## A Leader has been elected and the other nodes are Followers, so the cluster is working

## use nslookup to verify DNS
[root@k8s-master ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -n zookeeper -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup zk-0.zk-headless
Server:    10.200.0.10
Address 1: 10.200.0.10 kube-dns.kube-system.svc.xingzhibang.top
Name:      zk-0.zk-headless
Address 1: 10.100.135.151 zk-0.zk-headless.zookeeper.svc.xingzhibang.top

3. Using ZooKeeper

3.1 Basic concepts

ZooKeeper is an open-source distributed coordination service, used in distributed systems mainly for configuration management, state synchronization, naming, leader election, and similar problems. Its core value lies in consistency, reliability, and high availability.

Unlike a database, its main job is not to store data; instead it offers a lightweight data structure (a tree that resembles a file system) together with distributed locks and a notification (watch) mechanism, so that multiple nodes can coordinate their work easily.
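
As a small illustration of the tree plus watch/notification model described above (zkCli syntax for ZooKeeper 3.5+; /test is just a throwaway node):

## session A: create a node and register a watch on it
[zk: 127.0.0.1:2181(CONNECTED) 0] create /test "hello"
[zk: 127.0.0.1:2181(CONNECTED) 1] get -w /test

## session B: update the node; session A is immediately notified with a
## WatchedEvent of type NodeDataChanged for /test
[zk: 127.0.0.1:2181(CONNECTED) 0] set /test "world"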

3.2 Fixing the unauthenticated-access vulnerability and setting a ZooKeeper password

--- 1. Modify the configuration file
vim zoo.cfg
## add the parameter (in this deployment zoo.cfg comes from the zk-config ConfigMap, so edit it there);
## skipACL=yes temporarily disables ACL checks so the ACLs below can be (re)set
skipACL=yes

--- 2. Restart zk so the setting takes effect

--- 3. Connect with the client and set the account and password
zkCli.sh
## here zookeeper is the username and Cosmic@2514 is the password
addauth digest zookeeper:Cosmic@2514
setAcl -R / auth:zookeeper:Cosmic@2514:cdrwa
getAcl /

--- 4. Remove skipACL from zoo.cfg and restart zk again, otherwise the ACLs are not enforced
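
To confirm the ACLs are actually enforced (after step 4), open a fresh zkCli session: unauthenticated reads should now be rejected, and work again once the digest credentials are added.

zkCli.sh
## without authenticating, this should fail with an authentication/NoAuth error
[zk: localhost:2181(CONNECTED) 0] ls /
## authenticate with the digest credentials set above, then access works again
[zk: localhost:2181(CONNECTED) 1] addauth digest zookeeper:Cosmic@2514
[zk: localhost:2181(CONNECTED) 2] ls /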