Recovering from Accidentally Deleted PVs in K8s

**Background** The local-path-provisioner plugin for K8s creates PVs dynamically. With local-path PVs there is no need to build a PV by hand: you only create a PVC specifying `storageClassName: local-path`, and the plugin automatically allocates a PV and a storage directory for it. The benefits of this plugin are:

- Users never configure PVs manually; creating a PVC to bind is enough.
- In non-shared-storage scenarios, users also need not care which node the service Pod is scheduled to, because the plugin allocates storage on whatever node the Pod lands on.

I have now deleted all of the PVs by mistake. Running `kubectl get pv` shows PVs in the `Terminating` state, meaning they are in the process of being deleted. Yet running `kubectl get pvc -n authen` shows that the PVCs bound to those PVs are still in a normal `Bound` state.

**Root cause** A deleted PV shows up as `Terminating` rather than vanishing immediately because the PV carries a **finalizers field**. When the cluster deletes the PV, this field prevents it from being cleared right away; instead the PV is marked and displayed as `Terminating`, so that important cleanup actions can complete before the resource is physically deleted. In the meantime, the PVC remains usable.

I currently have two PVs stuck in `Terminating`, named:

- pvc-539fd3b7-12ed-488b-9c86-59e07de28b05
- pvc-63a03d24-f078-43af-ae6c-0ce2328ba54e

Inspecting one of them:

```
master@master:~/authen/pvc$ kubectl get pv pvc-539fd3b7-12ed-488b-9c86-59e07de28b05 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    local.path.provisioner/selected-node: master
    pv.kubernetes.io/provisioned-by: rancher.io/local-path
  creationTimestamp: "2025-10-10T07:46:40Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2025-10-22T06:34:12Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-539fd3b7-12ed-488b-9c86-59e07de28b05
  resourceVersion: "30449839"
  uid: bea50dbb-7f29-4e21-bb22-d9e61ce13cec
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: ldap-config-pvc
    namespace: authen
    resourceVersion: "27503744"
    uid: 539fd3b7-12ed-488b-9c86-59e07de28b05
  hostPath:
    path: /opt/local-path-provisioner/pvc-539fd3b7-12ed-488b-9c86-59e07de28b05_authen_ldap-config-pvc
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - master
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  volumeMode: Filesystem
status:
  phase: Bound
```

As the output shows, a `finalizers` field does exist under `metadata`, and it is exactly what keeps the PV in the `Terminating` state. ...
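One common way to unstick a PV in `Terminating` is to strip its finalizers so the API server can finish the deletion. Note that this completes the deletion rather than undoing it; once `deletionTimestamp` is set, the PV cannot be un-deleted, so rescuing the data means backing up the hostPath directory first and recreating a PV afterwards. A minimal sketch, using the PV name from the example above:

```shell
# Removing the finalizer lets the pending deletion complete -- back up the
# hostPath directory under /opt/local-path-provisioner on the node first
# if the data matters.
kubectl patch pv pvc-539fd3b7-12ed-488b-9c86-59e07de28b05 \
  --type=merge -p '{"metadata":{"finalizers":null}}'

# The PV should now disappear from the list:
kubectl get pv
```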

October 23, 2025

Deploying OpenLDAP in K8s

**Installation** First create a namespace: `kubectl create namespace authen`.

Set up the PVCs:

```yaml
# vi openldap-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ldap-data-pvc
  namespace: authen
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ldap-config-pvc
  namespace: authen
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
```

Run `kubectl apply -f openldap-pvc.yaml` to create the PVCs.

Set up the entries to insert at initialization time:

```
# vi ldap-init.ldif
dn: ou=People,dc=example,dc=com
ou: People
objectClass: organizationalUnit

dn: ou=Group,dc=example,dc=com
ou: Group
objectClass: organizationalUnit
```

Run `kubectl create configmap openldap-init --from-file=init.ldif=./ldap-init.ldif -n authen` to generate the ConfigMap that LDAP will read during initialization (the `init.ldif=` prefix names the key inside the ConfigMap so that it matches the `subPath: init.ldif` mount below).

Create the deployment file:

```yaml
# vi openldap-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: openldap
  namespace: authen
  labels:
    app: openldap
  annotations:
    app.kubernetes.io/alias-name: LDAP
    app.kubernetes.io/description: Authentication center
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openldap
  template:
    metadata:
      labels:
        app: openldap
    spec:
      containers:
      - name: go-ldap-admin-openldap
        args:
        - --copy-service
        image: 'osixia/openldap:1.5.0'
        ports:
        - name: tcp-389
          containerPort: 389
          protocol: TCP
        - name: tcp-636
          containerPort: 636
          protocol: TCP
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: LDAP_ORGANISATION
          value: "orgldap"
        - name: LDAP_DOMAIN
          value: "example.com"
        - name: LDAP_ADMIN_PASSWORD
          value: "123456"
        - name: LDAP_BACKEND
          value: mdb
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: ldap-config-pvc
          mountPath: /etc/ldap/slapd.d
        - name: ldap-data-pvc
          mountPath: /var/lib/ldap
        - name: openldap-init
          mountPath: /container/service/slapd/assets/config/bootstrap/ldif/custom/init.ldif
          subPath: init.ldif
      volumes:
      - name: ldap-config-pvc
        persistentVolumeClaim:
          claimName: ldap-config-pvc
      - name: ldap-data-pvc
        persistentVolumeClaim:
          claimName: ldap-data-pvc
      - name: openldap-init
        configMap:
          name: openldap-init
---
apiVersion: v1
kind: Service
metadata:
  name: openldap-svc
  namespace: authen
  labels:
    app: openldap-svc
spec:
  ports:
  - name: tcp-389
    port: 389
    protocol: TCP
    targetPort: 389
  - name: tcp-636
    port: 636
    protocol: TCP
    targetPort: 636
  selector:
    app: openldap
```

Run `kubectl apply -f openldap-deployment.yaml` to deploy LDAP as a Pod. ...
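Once the manifests above are applied, a quick smoke test can confirm the directory came up with the bootstrap entries. This is a sketch assuming the names and credentials from the manifests in this post, and that the `osixia/openldap` image ships `ldapsearch` (it is Debian-based and includes the LDAP client tools):

```shell
# Check the Pod is running:
kubectl get pods -n authen -l app=openldap

# Query the directory from inside the container; should list the
# People and Group organizational units created by init.ldif.
kubectl exec -n authen deploy/openldap -- \
  ldapsearch -x -H ldap://localhost:389 \
  -D "cn=admin,dc=example,dc=com" -w 123456 \
  -b "dc=example,dc=com" "(objectClass=organizationalUnit)"
```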

October 10, 2025

Remote-Debugging Custom Keycloak SPIs from IDEA

When developing and debugging a custom SPI for Keycloak, getting it to run normally requires you to:

- manually package the project as a jar file,
- place it in Keycloak's /providers directory, and
- restart the Keycloak service from the command line.

This is quite inconvenient when you need to observe runtime state, let alone set breakpoints. Consider JVM remote debugging plus HotSwap to debug Keycloak live.

**JVM configuration** First, the JVM must listen for debug connections on port 5005, so set the parameter in a terminal (Windows as the example):

```
set JAVA_OPTS_APPEND=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005
```

Then load the jars and start the service in the same terminal:

```
bin\kc.bat start-dev
```

Note: an environment variable set with `set` only applies to the current terminal window, so the two commands above must run in the same terminal. Alternatively, configure the variable in the system settings and run Keycloak directly from a new terminal.

After the service starts, the terminal output looks roughly like this: (screenshot omitted)

**IDEA configuration** Open IDEA and find the run/debug drop-down at the top right (the one normally used to run and debug code). In the Edit Configurations page, click the + at the top left to add a new configuration and pick "Remote JVM Debug" from the list on the left. Then fill in the configuration's parameters; the name is arbitrary, but make sure the host and port are filled in correctly. (screenshot omitted)

When done, click the debug button. If IDEA's debug pane at the bottom reports "Connected to the target VM, address: 'localhost:5005', transport: 'socket'", the environment is configured successfully.

**HotSwap** While connected, modifying the code makes a "Code changed" prompt and button appear at the top right of the editor frame; clicking it hot-swaps the class in the running JVM. From then on, every time you change code and want to debug it, there is no need for the tedious repackage-and-reimport cycle: one click recompiles the project and updates the class files running in the JVM.

**Limitations** HotSwap can only hot-update compiled class files, so the method in this article cannot live-debug the frontend FTL templates; whenever an FTL file is modified, it still has to be repackaged and imported into Keycloak. HotSwap also does not support hot updates that rename classes, add or remove classes, or add or remove methods.
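For reference, the same setup on Linux or macOS would look like the following; this is a sketch assuming the standard Keycloak distribution layout, where `bin/kc.sh` sits alongside `kc.bat`:

```shell
# Same JDWP agent settings as the Windows example above; `export`, like `set`,
# only affects the current shell session.
export JAVA_OPTS_APPEND="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"
bin/kc.sh start-dev
```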

August 21, 2025

Resolving Sync Conflicts in a Forked Project with VS Code

I forked the iptv-api project on GitHub and set up an Action that fetches data automatically, giving me daily updates of the IPTV sources. Occasionally, though, the upstream repository updates or fixes project functionality, and then the fork has to be synced with upstream, which can produce conflicts. GitHub cannot resolve conflicts online, so VS Code is used here.

First open VS Code in the forked project's directory and make sure VS Code has recognized the local repository and the upstream remote has been added. Then open a new terminal and run:

```
git fetch upstream
```

Next, run `git branch` to confirm you are on the branch that needs syncing; in this example there is only one branch, master. If there are several, switch with `git checkout <branch>`.

Perform the merge:

```
git merge upstream/master
```

Afterwards, the VS Code sidebar flags the files with conflicts; you can click through them and choose with the mouse whether to keep the previous content. Once done, save, commit, and sync; the flow is the same as any normal Git operation in VS Code.
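Put together, and including the one-time step of adding the upstream remote in case it is missing, the sequence looks like this. The URL is a placeholder, not from the post; substitute the repository the fork came from:

```shell
# One-time: register the original repository as the "upstream" remote.
git remote add upstream https://github.com/<original-owner>/iptv-api.git

git fetch upstream            # bring in upstream's new commits
git checkout master           # make sure we are on the branch being synced
git merge upstream/master     # conflicts, if any, now show up in VS Code
```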

August 15, 2025

Deploying Dex Authentication in a K8s Cluster

Dex is an open-source third-party identity authentication system. It simplifies developing authentication against existing identity providers or other auth services: Dex hides the identity-checking process from the project developer, who then only needs to focus on and control what the authentication gates, without managing the details of identity verification personally.

**Environment**

- OS: Debian-12.10.0-amd64
- Kubernetes: v1.28.0
- kubectl: v1.28.2
- Helm: v3.17.3
- Git is installed on the master machine, and an LDAP server is already deployed
- The developer has no registered domain, so DNS A records are not an option; only self-signed certificates can be used

**Getting dex-k8s-authenticator** Dex itself can be used directly, but to deploy Dex into a K8s cluster with a friendly visual page you also need dex-k8s-authenticator (DKA below). Run `git clone https://github.com/mintel/dex-k8s-authenticator.git` to clone the repository; its charts/ directory already contains the Chart files for both Dex and DKA, so there is no need to clone Dex separately.

```
ficn@master:~$ git clone https://github.com/mintel/dex-k8s-authenticator.git
ficn@master:~$ cd dex-k8s-authenticator/
ficn@master:~/dex-k8s-authenticator$ ls charts/
dex  dex-k8s-authenticator  README.md
```

**Running Dex and DKA** Execute:

```
helm inspect values charts/dex > dex-values.yaml
helm inspect values charts/dex-k8s-authenticator > dka-values.yaml
```

These two commands generate fresh values files from the original Dex and DKA charts, to be used later to override the default configuration. Next, edit these two values files.

**Dex**

```yaml
# sudo vi dex-values.yaml

# Default values for dex
# Deploy environment label, e.g. dev, test, prod
global:
  deployEnv: dev

replicaCount: 1

image:
  repository: dexidp/dex
  tag: v2.37.0
  pullPolicy: IfNotPresent

env:
- name: KUBERNETES_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace

service:
  type: NodePort
  port: 5556
  nodePort: 30000
  # For nodeport, specify the following:
  #   type: NodePort
  #   nodePort: <port-number>

tls:
  create: false

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
  - dex.example.com
  tls: []
  #  - secretName: dex.example.com
  #    hosts:
  #      - dex.example.com

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

# Configuration file for Dex
# Certainly secret fields can use environment variables
#
config: |-
  issuer: http://192.168.92.128:30000

  storage:
    type: kubernetes
    config:
      inCluster: true

  web:
    http: 0.0.0.0:5556
    # If enabled, be sure to configure tls settings above, or use a tool
    # such as let-encrypt to manage the certs.
    # Currently this chart does not support both http and https, and the port
    # is fixed to 5556
    #
    # https: 0.0.0.0:5556
    # tlsCert: /etc/dex/tls/tls.crt
    # tlsKey: /etc/dex/tls/tls.key

  frontend:
    theme: "coreos"
    issuer: "Example Co"
    issuerUrl: "https://example.com"
    logoUrl: https://example.com/images/logo-250x25.png

  expiry:
    signingKeys: "6h"
    idTokens: "24h"

  logger:
    level: debug
    format: json

  oauth2:
    responseTypes: ["code", "token", "id_token"]
    skipApprovalScreen: true

  # Remember you can have multiple connectors of the same 'type' (with different 'id's)
  # If you need e.g. logins with groups for two different Microsoft 'tenants'
  connectors:
  # These may not match the schema used by your LDAP server
  # https://github.com/coreos/dex/blob/master/Documentation/connectors/ldap.md
  - type: ldap
    id: ldap
    name: LDAP
    config:
      host: 192.168.92.128:389
      insecureNoSSL: true
      startTLS: false
      bindDN: cn=admin,dc=example,dc=com
      bindPW: "647252"
      userSearch:
        # Query should be "(&(objectClass=inetorgperson)(cn=<username>))"
        baseDN: ou=People,dc=example,dc=com
        filter: "(objectClass=inetorgperson)"
        username: cn
        # DN must be in capitals
        idAttr: DN
        emailAttr: mail
        nameAttr: cn
        preferredUsernameAttr: cn
      groupSearch:
        # Query should be "(&(objectClass=groupOfUniqueNames)(uniqueMember=<userAttr>))"
        baseDN: ou=Group,dc=example,dc=com
        filter: ""
        # DN must be in capitals
        userAttr: DN
        groupAttr: member
        nameAttr: cn

  # The 'name' must match the k8s API server's 'oidc-client-id'
  staticClients:
  - id: my-cluster
    name: "my-cluster"
    secret: "pUBnBOY80SnXgjibTYM9ZWNzY2xreNGQok"
    redirectURIs:
    - http://192.168.92.128:30001/callback

  enablePasswordDB: True
  staticPasswords:
  - email: "admin@example.com"
    # bcrypt hash of the string "password"
    hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"
    username: "admin"
    userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"

# You should not enter your secrets here if this file will be stored in source control
# Instead create a separate file to hold or override these values
# You need only list the environment variables you used in the 'config' above
# You can add any additional ones you need, or remove ones you don't need
#
envSecrets:
  # GitHub
  GITHUB_CLIENT_ID: "override-me"
  GITHUB_CLIENT_SECRET: "override-me"
  # Google (oidc)
  GOOGLE_CLIENT_ID: "override-me"
  GOOGLE_CLIENT_SECRET: "override-me"
  # Microsoft
  MICROSOFT_APPLICATION_ID: "override-me"
  MICROSOFT_CLIENT_SECRET: "override-me"
  # LDAP
  LDAP_BINDPW: "123456"
```

As shown above: ...
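With both values files edited, applying them would typically look something like the following; the release names and namespace here are my own assumptions for illustration, not taken from the post:

```shell
# Install (or upgrade) Dex and dex-k8s-authenticator from the local charts,
# overriding the chart defaults with the customized values files.
helm upgrade --install dex ./charts/dex \
  --namespace authen -f dex-values.yaml
helm upgrade --install dka ./charts/dex-k8s-authenticator \
  --namespace authen -f dka-values.yaml
```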

May 23, 2025