Recovering Keycloak PVs After Accidental Deletion

**Background**

The local-path-provisioner add-on for K8s creates PVs dynamically. With local-path there is no need to build a PV by hand: you only create a PVC that specifies `storageClassName: local-path`, and the add-on automatically allocates a PV and a storage directory for it. The advantages of this add-on are:

- Users never configure PVs manually; creating a PVC and letting it bind is enough.
- When no shared filesystem is involved, users also need not care which node the service Pod is scheduled to, because the add-on allocates storage on whatever node the Pod lands on.

I have now accidentally deleted all of the PVs. As a result, running `kubectl get pv` shows PVs in the Terminating state, meaning they are in the middle of being deleted. Yet running `kubectl get pvc -n authen` shows that the PVCs bound to those PVs are still in a normal Bound state.

**Cause analysis**

The reason a deleted PV shows Terminating instead of vanishing outright is that the PV carries a **finalizers** field. Its job is to keep the cluster from removing the PV immediately on deletion: the PV is marked instead and displayed as Terminating, so that important cleanup work can finish before the resource is physically removed. During this window the PVC remains usable.

I currently have two PVs in the Terminating state, named:

- pvc-539fd3b7-12ed-488b-9c86-59e07de28b05
- pvc-63a03d24-f078-43af-ae6c-0ce2328ba54e

Inspecting one of them:

```
master@master:~/authen/pvc$ kubectl get pv pvc-539fd3b7-12ed-488b-9c86-59e07de28b05 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    local.path.provisioner/selected-node: master
    pv.kubernetes.io/provisioned-by: rancher.io/local-path
  creationTimestamp: "2025-10-10T07:46:40Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2025-10-22T06:34:12Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-539fd3b7-12ed-488b-9c86-59e07de28b05
  resourceVersion: "30449839"
  uid: bea50dbb-7f29-4e21-bb22-d9e61ce13cec
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: ldap-config-pvc
    namespace: authen
    resourceVersion: "27503744"
    uid: 539fd3b7-12ed-488b-9c86-59e07de28b05
  hostPath:
    path: /opt/local-path-provisioner/pvc-539fd3b7-12ed-488b-9c86-59e07de28b05_authen_ldap-config-pvc
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - master
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  volumeMode: Filesystem
status:
  phase: Bound
```

Sure enough, `metadata` does contain a `finalizers` field, and it is exactly what keeps this PV pinned in the Terminating state.
...
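The post is truncated here, so the actual remediation is not shown. For completeness, one common recovery pattern for this exact situation is worth sketching: back up the PV manifest (and, to be safe, the data directory), let the pending deletion complete by clearing the finalizers, then recreate the PV from the backup so the still-Bound PVC rebinds to it. This is a sketch of my own, under the assumption that the hostPath data survives because the PVC was never deleted; it is not necessarily the fix the original post arrives at.

```bash
# 1. Back up the PV manifest before touching anything
kubectl get pv pvc-539fd3b7-12ed-488b-9c86-59e07de28b05 -o yaml > pv-backup.yaml

# 2. (Safer) also copy the underlying data directory on the node
sudo cp -a /opt/local-path-provisioner/pvc-539fd3b7-12ed-488b-9c86-59e07de28b05_authen_ldap-config-pvc \
  /root/ldap-config-backup

# 3. Clear the finalizers; because deletionTimestamp is already set,
#    the PV object is removed as soon as the finalizer is gone
kubectl patch pv pvc-539fd3b7-12ed-488b-9c86-59e07de28b05 \
  --type=merge -p '{"metadata":{"finalizers":null}}'

# 4. In pv-backup.yaml, delete deletionTimestamp, deletionGracePeriodSeconds,
#    resourceVersion and uid under metadata, then recreate the PV
kubectl apply -f pv-backup.yaml

# 5. Check that the PVC has rebound to the recreated PV
kubectl get pvc -n authen
```

Because the PVC object was never deleted, the `claimRef` in the backed-up manifest (including the PVC's uid) is still valid, which is what lets the recreated PV bind straight back to `ldap-config-pvc`.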

October 23, 2025

kubeadm init Wait Timeout

**Problem**

While setting up the master node, running `kubeadm init --config kubeadm-config.yaml --v=5` fails once it reaches the step that initializes the control-plane Pods:

```
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
......
```

After checking, the hosts configuration is correct, and the cluster IP address in the config matches this machine's own.
...
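The post breaks off here without recording the eventual fix. Since the kubeadm error itself points at the kubelet and at cgroups, a reasonable next step (my sketch, not the author's documented solution) is to read the kubelet logs and check for the classic cgroup-driver mismatch between the container runtime and the kubelet:

```bash
# The error message blames the kubelet, so start with its status and logs
systemctl status kubelet
journalctl -xeu kubelet | tail -n 50

# A classic cause of this timeout is a cgroup-driver mismatch:
# kubelet commonly expects "systemd" while Docker defaults to "cgroupfs"
docker info | grep -i "cgroup driver"

# If they disagree, align Docker by adding to /etc/docker/daemon.json:
#   { "exec-opts": ["native.cgroupdriver=systemd"] }
sudo systemctl restart docker

# Clean up the failed attempt and retry
sudo kubeadm reset -f
sudo kubeadm init --config kubeadm-config.yaml --v=5
```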

January 15, 2025

Docker Still Failing After Configuring the Aliyun Mirror

Kubernetes uses docker as its container runtime. After installing docker on a Linux machine, pulling the hello-world image fails.

The installation itself went smoothly:

```bash
sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
```

But `docker pull hello-world` errors out:

```
Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```

**Solution**

The first step is to switch docker's registry source. Here I use the Aliyun mirror accelerator together with another mirror I collected, configured in `/etc/docker/daemon.json` as follows:

```json
{
  "registry-mirrors": [
    "https://ok5mwqnl.mirror.aliyuncs.com",
    "https://docker.1ms.run"
  ]
}
```

Then run `sudo systemctl restart docker` to restart the docker service.

After the restart, `docker pull hello-world` still fails with the same error as before.

That points to DNS. Edit `/etc/resolv.conf` and add two nameservers above the existing entries:

```
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
# the original nameserver lines follow
# ...
```

With this in place, `docker pull hello-world` succeeds. Run `sudo docker images` to list the local images:
...
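Two quick checks (my addition, not part of the original post) confirm that both fixes actually took effect:

```bash
# Verify the daemon picked up the mirror configuration
docker info | grep -A 3 "Registry Mirrors"

# Verify the registry hostname now resolves through the new nameservers
nslookup registry-1.docker.io

# Note: on distributions using systemd-resolved, /etc/resolv.conf can be
# regenerated and hand edits lost; setting "DNS=8.8.8.8 8.8.4.4" in
# /etc/systemd/resolved.conf and restarting systemd-resolved is more durable
```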

January 13, 2025