Remedies for Accidentally Deleted PVs in K8s

Background

The local-path-provisioner plugin for K8s can create PVs dynamically. With local-path PVs there is no need to build a PV by hand: you only create a PVC that specifies storageClassName: local-path, and the plugin automatically allocates a PV and a storage directory for it. The advantages of this plugin are that users never configure PVs manually, only create PVCs and let them bind, and that in non-shared-filesystem scenarios users also need not care which node the service Pod is scheduled to, because the plugin allocates storage on whichever node the Pod lands on.

I have now accidentally deleted all PVs. Running kubectl get pv shows PVs in the Terminating state, meaning they are in the process of being deleted. Yet running kubectl get pvc -n authen shows that the PVCs bound to those PVs are still in a normal Bound state.

Cause analysis

The reason a deleted PV shows Terminating instead of vanishing immediately is that the PV carries a **finalizer field**. Its purpose is to keep the cluster from removing the PV right away: the PV is marked and left in the Terminating state so that important cleanup actions can complete before the resource is physically deleted. While this lasts, the PVC remains usable.

I currently have two PVs stuck in Terminating, named:

pvc-539fd3b7-12ed-488b-9c86-59e07de28b05
pvc-63a03d24-f078-43af-ae6c-0ce2328ba54e

Inspecting one of them:

```
master@master:~/authen/pvc$ kubectl get pv pvc-539fd3b7-12ed-488b-9c86-59e07de28b05 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    local.path.provisioner/selected-node: master
    pv.kubernetes.io/provisioned-by: rancher.io/local-path
  creationTimestamp: "2025-10-10T07:46:40Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2025-10-22T06:34:12Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-539fd3b7-12ed-488b-9c86-59e07de28b05
  resourceVersion: "30449839"
  uid: bea50dbb-7f29-4e21-bb22-d9e61ce13cec
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: ldap-config-pvc
    namespace: authen
    resourceVersion: "27503744"
    uid: 539fd3b7-12ed-488b-9c86-59e07de28b05
  hostPath:
    path: /opt/local-path-provisioner/pvc-539fd3b7-12ed-488b-9c86-59e07de28b05_authen_ldap-config-pvc
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - master
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  volumeMode: Filesystem
status:
  phase: Bound
```

As the output shows, a finalizers field does indeed exist under metadata, and it is precisely what keeps the PV in the Terminating state. ...
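The excerpt stops before the fix itself. The standard way to unstick a Terminating PV (a sketch, assuming these are exactly the two PVs above and that the on-disk data should be preserved) is to save each PV's manifest and then clear its finalizers so the API server can finish the deletion; the hostPath directories under /opt/local-path-provisioner survive on the node, so a PV can later be recreated from the saved manifest (with uid, resourceVersion, and deletionTimestamp stripped) if the PVC needs to rebind:

```bash
# Back up the manifests first, then drop the finalizers.
# Note: this completes the deletion of the PV *objects*; the data
# directories on the node are not touched.
for pv in pvc-539fd3b7-12ed-488b-9c86-59e07de28b05 \
          pvc-63a03d24-f078-43af-ae6c-0ce2328ba54e; do
  kubectl get pv "$pv" -o yaml > "$pv.yaml"
  kubectl patch pv "$pv" -p '{"metadata":{"finalizers":null}}'
done
```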

October 23, 2025

Deploying OpenLDAP in K8s

Installation

First create a namespace: kubectl create namespace authen

Set up the PVCs:

```yaml
# vi openldap-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ldap-data-pvc
  namespace: authen
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ldap-config-pvc
  namespace: authen
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
```

Run kubectl apply -f openldap-pvc.yaml to create the PVCs.

Set up the entries to be inserted at initialization (note the blank line that LDIF requires between the two entries):

```
# vi ldap-init.ldif
dn: ou=People,dc=example,dc=com
ou: People
objectClass: organizationalUnit

dn: ou=Group,dc=example,dc=com
ou: Group
objectClass: organizationalUnit
```

Run kubectl create configmap openldap-init --from-file=init.ldif=./ldap-init.ldif -n authen to generate the ConfigMap that LDAP reads during initialization (the key is set to init.ldif explicitly so that the subPath mount in the Deployment below can find it).

Create the deployment file:

```yaml
# vi openldap-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: openldap
  namespace: authen
  labels:
    app: openldap
  annotations:
    app.kubernetes.io/alias-name: LDAP
    app.kubernetes.io/description: Authentication center
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openldap
  template:
    metadata:
      labels:
        app: openldap
    spec:
      containers:
      - name: go-ldap-admin-openldap
        args:
        - --copy-service
        image: 'osixia/openldap:1.5.0'
        ports:
        - name: tcp-389
          containerPort: 389
          protocol: TCP
        - name: tcp-636
          containerPort: 636
          protocol: TCP
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: LDAP_ORGANISATION
          value: "orgldap"
        - name: LDAP_DOMAIN
          value: "example.com"
        - name: LDAP_ADMIN_PASSWORD
          value: "123456"
        - name: LDAP_BACKEND
          value: mdb
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: ldap-config-pvc
          mountPath: /etc/ldap/slapd.d
        - name: ldap-data-pvc
          mountPath: /var/lib/ldap
        - name: openldap-init
          mountPath: /container/service/slapd/assets/config/bootstrap/ldif/custom/init.ldif
          subPath: init.ldif
      volumes:
      - name: ldap-config-pvc
        persistentVolumeClaim:
          claimName: ldap-config-pvc
      - name: ldap-data-pvc
        persistentVolumeClaim:
          claimName: ldap-data-pvc
      - name: openldap-init
        configMap:
          name: openldap-init
---
apiVersion: v1
kind: Service
metadata:
  name: openldap-svc
  namespace: authen
  labels:
    app: openldap-svc
spec:
  ports:
  - name: tcp-389
    port: 389
    protocol: TCP
    targetPort: 389
  - name: tcp-636
    port: 636
    protocol: TCP
    targetPort: 636
  selector:
    app: openldap
```

Run kubectl apply -f openldap-deployment.yaml to deploy LDAP as a Pod. ...
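To confirm the bootstrap LDIF was actually loaded, a smoke test along these lines can be run (a sketch: it assumes ldap-utils is installed on the workstation, and local port 3890 is an arbitrary choice; the admin DN and password come from LDAP_DOMAIN and LDAP_ADMIN_PASSWORD above):

```bash
# Forward the service port locally, then bind as the admin DN.
kubectl port-forward svc/openldap-svc 3890:389 -n authen &

ldapsearch -x -H ldap://127.0.0.1:3890 \
  -D "cn=admin,dc=example,dc=com" -w 123456 \
  -b "dc=example,dc=com" "(objectClass=organizationalUnit)"
# Expect the ou=People and ou=Group entries from ldap-init.ldif.
```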

October 10, 2025

Installing a Custom Keycloak Plugin and Code Walkthrough

Following the official development examples, I wrote a personal-security-question authentication module for Keycloak in Java. Once the code was finished, the project was packaged into a jar with Maven and copied to the cluster master node for deployment. Note that when packaging you must add a resources/META-INF/services directory and register the Factory classes in it.

Importing the plugin

First create the initial ConfigMap:

```bash
kubectl create configmap secret-question-plugin --from-file=SecretQuestion.jar=./kc-plugins/SecretQuestion.jar -n keycloak
```

Then run kubectl edit statefulset keycloak -n keycloak and modify Keycloak's StatefulSet:

```yaml
spec:
  template:
    spec:
      volumes:
      - name: plugins-volume
        configMap:
          name: secret-question-plugin
      containers:
      - name: keycloak
        # other configuration...
        volumeMounts:
        - name: plugins-volume
          mountPath: /opt/keycloak/providers/SecretQuestion.jar
          subPath: SecretQuestion.jar
```

Updating the plugin

On every later iteration of the plugin, delete the old ConfigMap, create a new one, and restart the Pod:

```bash
# Delete the old ConfigMap
kubectl delete configmap secret-question-plugin -n keycloak
# Create the new ConfigMap
kubectl create configmap secret-question-plugin --from-file=SecretQuestion.jar=./kc-plugins/SecretQuestion.jar -n keycloak
# Restart the Keycloak Pod
kubectl delete pod keycloak-0 -n keycloak
```

Plugin code walkthrough

After the plugin is imported into Keycloak, it must first be enabled under Authentication → Required Actions, and the custom authenticator execution must then be inserted into the authentication flow; only then does the new authentication feature take effect. Here the new authenticator is placed after the username/password validation step and set as Required. When a user first reaches the login page (the username/password page), only the username/password authenticator is in play. ...
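For reference, the service registration mentioned above usually looks like the following (a sketch: the com.example class names are hypothetical stand-ins for the factories in SecretQuestion.jar, while the file names are the standard Keycloak SPI interfaces discovered via java.util.ServiceLoader):

```bash
# Hypothetical factory class names; the service-file names are the
# Keycloak SPI interfaces the jar implements.
mkdir -p src/main/resources/META-INF/services

echo "com.example.SecretQuestionAuthenticatorFactory" \
  > src/main/resources/META-INF/services/org.keycloak.authentication.AuthenticatorFactory

echo "com.example.SecretQuestionRequiredActionFactory" \
  > src/main/resources/META-INF/services/org.keycloak.authentication.RequiredActionProviderFactory
```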

July 13, 2025

Configuring Keycloak-Based Authentication for a K8s Cluster

Configuring certificates

Since Keycloak will later be wired into k8s, the service Keycloak exposes must also be HTTPS, which requires generating crt and key files first. However, HTTPS was already configured for Dex earlier, so dex.crt and dex.key can simply be reused. Here the two files are turned into a kubectl secret resource:

```bash
kubectl create secret generic keycloak-tls-secret --from-file=tls.crt=./dex.crt --from-file=tls.key=./dex.key -n keycloak
```

Configured this way, whenever the secret name keycloak-tls-secret is referenced, tls.crt and tls.key can be mounted into a Pod directory at any time for use.

Installing and running Keycloak

First fetch Keycloak's deployment manifest:

```bash
wget https://gh-proxy.com/raw.githubusercontent.com/keycloak/keycloak-quickstarts/refs/heads/main/kubernetes/keycloak.yaml
```

Then modify the file as follows.

Mount keycloak-tls-secret into the container at /etc/x509/https using volumes and volumeMounts:

```yaml
volumes:
- name: keycloak-tls
  secret:
    secretName: keycloak-tls-secret
...
volumeMounts:
- name: keycloak-tls
  mountPath: /etc/x509/https
  readOnly: true
```

Set environment variables to enable HTTPS on port 8443, referencing the mounted certificate and key files:

```yaml
env:
- name: KC_HTTPS_CERTIFICATE_FILE
  value: /etc/x509/https/tls.crt
- name: KC_HTTPS_CERTIFICATE_KEY_FILE
  value: /etc/x509/https/tls.key
- name: KC_HTTPS_PORT
  value: "8443"
- name: KC_HTTP_ENABLED
  value: "false"
...
ports:
- name: https
  containerPort: 8443
```

Expose the service with a NodePort, mapping 8443 -> 30443:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - protocol: TCP
    port: 8443
    targetPort: 8443
    name: https
    nodePort: 30443
  selector:
    app: keycloak
  type: NodePort
```

The final keycloak.yaml file: ...
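Once the Pod is up, a quick way to confirm the HTTPS endpoint works is to fetch the OIDC discovery document (a sketch: 192.168.92.128 is the node IP used elsewhere in this series, the master realm is Keycloak's default, and -k is needed because the certificate is self-signed):

```bash
# Should return a JSON document whose issuer points at this host.
curl -k https://192.168.92.128:30443/realms/master/.well-known/openid-configuration
```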

June 10, 2025

Deploying Dex Authentication in a K8s Cluster

Dex is an open-source third-party identity authentication system that simplifies developing authentication against existing identity providers or other auth services. Dex hides the identity-checking process from project developers, so they only need to focus on and control the subject of their authentication flow, without managing the details of identity verification themselves.

Environment

OS: Debian-12.10.0-amd64
Kubernetes: v1.28.0
kubectl: v1.28.2
Helm: v3.17.3
Git is installed on the master machine, and an LDAP server is already deployed.
The developer has no registered domain, so DNS A-record resolution is not possible and only self-signed certificates can be used.

Getting dex-k8s-authenticator

Dex itself can be used directly, but to deploy Dex into a K8s cluster with a friendly visual page you also need dex-k8s-authenticator (DKA below). Run git clone https://github.com/mintel/dex-k8s-authenticator.git to clone the repository; its charts/ directory already contains the Chart files for both Dex and DKA, so there is no need to clone Dex separately.

```
ficn@master:~$ git clone https://github.com/mintel/dex-k8s-authenticator.git
ficn@master:~$ cd dex-k8s-authenticator/
ficn@master:~/dex-k8s-authenticator$ ls charts/
dex  dex-k8s-authenticator  README.md
```

Running Dex and DKA

Run:

```bash
helm inspect values charts/dex > dex-values.yaml
helm inspect values charts/dex-k8s-authenticator > dka-values.yaml
```

These two commands generate fresh values files from the original Dex and DKA charts, to be used later for overriding the default configuration. Next, edit the two values files.

Dex

```yaml
# sudo vi dex-values.yaml
# Default values for dex
# Deploy environment label, e.g. dev, test, prod
global:
  deployEnv: dev

replicaCount: 1

image:
  repository: dexidp/dex
  tag: v2.37.0
  pullPolicy: IfNotPresent

env:
- name: KUBERNETES_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace

service:
  type: NodePort
  port: 5556
  nodePort: 30000
# For nodeport, specify the following:
#   type: NodePort
#   nodePort: <port-number>

tls:
  create: false

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
  - dex.example.com
  tls: []
  #  - secretName: dex.example.com
  #    hosts:
  #      - dex.example.com

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

# Configuration file for Dex
# Certain secret fields can use environment variables
#
config: |-
  issuer: http://192.168.92.128:30000

  storage:
    type: kubernetes
    config:
      inCluster: true

  web:
    http: 0.0.0.0:5556
    # If enabled, be sure to configure tls settings above, or use a tool
    # such as let-encrypt to manage the certs.
    # Currently this chart does not support both http and https, and the port
    # is fixed to 5556
    #
    # https: 0.0.0.0:5556
    # tlsCert: /etc/dex/tls/tls.crt
    # tlsKey: /etc/dex/tls/tls.key

  frontend:
    theme: "coreos"
    issuer: "Example Co"
    issuerUrl: "https://example.com"
    logoUrl: https://example.com/images/logo-250x25.png

  expiry:
    signingKeys: "6h"
    idTokens: "24h"

  logger:
    level: debug
    format: json

  oauth2:
    responseTypes: ["code", "token", "id_token"]
    skipApprovalScreen: true

  # Remember you can have multiple connectors of the same 'type' (with different 'id's)
  # If you need e.g. logins with groups for two different Microsoft 'tenants'

  connectors:
  # These may not match the schema used by your LDAP server
  # https://github.com/coreos/dex/blob/master/Documentation/connectors/ldap.md
  - type: ldap
    id: ldap
    name: LDAP
    config:
      host: 192.168.92.128:389
      insecureNoSSL: true
      startTLS: false
      bindDN: cn=admin,dc=example,dc=com
      bindPW: "647252"
      userSearch:
        # Query should be "(&(objectClass=inetorgperson)(cn=<username>))"
        baseDN: ou=People,dc=example,dc=com
        filter: "(objectClass=inetorgperson)"
        username: cn
        # DN must be in capitals
        idAttr: DN
        emailAttr: mail
        nameAttr: cn
        preferredUsernameAttr: cn
      groupSearch:
        # Query should be "(&(objectClass=groupOfUniqueNames)(uniqueMember=<userAttr>))"
        baseDN: ou=Group,dc=example,dc=com
        filter: ""
        # DN must be in capitals
        userAttr: DN
        groupAttr: member
        nameAttr: cn

  # The 'name' must match the k8s API server's 'oidc-client-id'
  staticClients:
  - id: my-cluster
    name: "my-cluster"
    secret: "pUBnBOY80SnXgjibTYM9ZWNzY2xreNGQok"
    redirectURIs:
    - http://192.168.92.128:30001/callback

  enablePasswordDB: True
  staticPasswords:
  - email: "admin@example.com"
    # bcrypt hash of the string "password"
    hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"
    username: "admin"
    userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"

# You should not enter your secrets here if this file will be stored in source control
# Instead create a separate file to hold or override these values
# You need only list the environment variables you used in the 'config' above
# You can add any additional ones you need, or remove ones you don't need
#
envSecrets:
  # GitHub
  GITHUB_CLIENT_ID: "override-me"
  GITHUB_CLIENT_SECRET: "override-me"
  # Google (oidc)
  GOOGLE_CLIENT_ID: "override-me"
  GOOGLE_CLIENT_SECRET: "override-me"
  # Microsoft
  MICROSOFT_APPLICATION_ID: "override-me"
  MICROSOFT_CLIENT_SECRET: "override-me"
  # LDAP
  LDAP_BINDPW: "123456"
```

As shown above: ...
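The excerpt cuts off here; the edited values files are then applied by installing the two charts, along these lines (a sketch: the release names and the dex namespace are assumptions, not from the post):

```bash
# Install Dex and DKA from the local charts with the overridden values.
helm install dex charts/dex -f dex-values.yaml \
  --namespace dex --create-namespace

helm install dka charts/dex-k8s-authenticator -f dka-values.yaml \
  --namespace dex
```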

May 23, 2025