Configuring Keycloak-based Authentication for a K8s Cluster

Configuring certificates

Because Keycloak will later be wired into k8s, the service it exposes must also be served over HTTPS, so a crt and a key file have to be generated first. However, HTTPS was already set up for Dex earlier, so dex.crt and dex.key can simply be reused. Here the two files are stored as a Kubernetes Secret with kubectl:

kubectl create secret generic keycloak-tls-secret --from-file=tls.crt=./dex.crt --from-file=tls.key=./dex.key -n keycloak

With this in place, any Pod can mount tls.crt and tls.key into a directory for use simply by referencing the Secret name keycloak-tls-secret.

Installing and running Keycloak

First fetch Keycloak's deployment manifest:

wget https://gh-proxy.com/raw.githubusercontent.com/keycloak/keycloak-quickstarts/refs/heads/main/kubernetes/keycloak.yaml

Then modify the file as follows.

Use volumes and volumeMounts to mount keycloak-tls-secret at the container path /etc/x509/https:

volumes:
  - name: keycloak-tls
    secret:
      secretName: keycloak-tls-secret
...
volumeMounts:
  - name: keycloak-tls
    mountPath: /etc/x509/https
    readOnly: true

Set environment variables to enable HTTPS on port 8443, referencing the mounted certificate and key files:

env:
  - name: KC_HTTPS_CERTIFICATE_FILE
    value: /etc/x509/https/tls.crt
  - name: KC_HTTPS_CERTIFICATE_KEY_FILE
    value: /etc/x509/https/tls.key
  - name: KC_HTTPS_PORT
    value: "8443"
  - name: KC_HTTP_ENABLED
    value: "false"
...
ports:
  - name: https
    containerPort: 8443

Expose the service with a NodePort, mapping 8443 -> 30443:

apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - protocol: TCP
      port: 8443
      targetPort: 8443
      name: https
      nodePort: 30443
  selector:
    app: keycloak
  type: NodePort

The final keycloak.yaml file: ...
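Once the edited manifest is applied, the HTTPS NodePort can be sanity-checked roughly as shown below. This is a minimal sketch under a few assumptions not stated above: the Pods are assumed to carry the app=keycloak label (as the Service selector suggests), the namespace keycloak is the one used for the Secret, and the node IP 192.168.92.128 is borrowed from the Dex post further down. -k is needed because the certificate is self-signed.

# apply the edited manifest and wait for the Keycloak Pod(s) to become Ready
kubectl apply -f keycloak.yaml -n keycloak
kubectl wait --for=condition=Ready pod -l app=keycloak -n keycloak --timeout=300s

# probe the NodePort over HTTPS; -k skips verification of the self-signed certificate
curl -k https://192.168.92.128:30443/realms/master/.well-known/openid-configuration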

June 10, 2025

Deploying Dex Authentication in a K8s Cluster

Dex is an open-source third-party identity authentication system that simplifies the work of authenticating against existing identity providers or other authentication services. Dex hides the identity-verification process from project developers, so they only need to focus on and control what the authentication flow applies to, without managing the details of identity authentication themselves.

Environment

- OS: Debian-12.10.0-amd64
- Kubernetes: v1.28.0
- kubectl: v1.28.2
- Helm: v3.17.3
- Git is installed on the master node, and an LDAP server is already deployed
- There is no registered domain available, so DNS A records cannot be used and only self-signed certificates are an option

Getting dex-k8s-authenticator

Dex itself can be used directly, but to deploy Dex into a K8s cluster with a friendly web UI, dex-k8s-authenticator (DKA below) also needs to be deployed. Run git clone https://github.com/mintel/dex-k8s-authenticator.git to clone the repository; its charts/ directory already contains the Chart files for both Dex and DKA, so there is no need to fetch Dex separately.

ficn@master:~$ git clone https://github.com/mintel/dex-k8s-authenticator.git
ficn@master:~$ cd dex-k8s-authenticator/
ficn@master:~/dex-k8s-authenticator$ ls charts/
dex  dex-k8s-authenticator  README.md

Running Dex and DKA

Run:

helm inspect values charts/dex > dex-values.yaml
helm inspect values charts/dex-k8s-authenticator > dka-values.yaml

These two commands generate a fresh values file from each of the original Dex and DKA Charts, to be used later to override the default configuration. The next step is to edit these two values files.

Dex

# sudo vi dex-values.yaml

# Default values for dex
# Deploy environment label, e.g. dev, test, prod
global:
  deployEnv: dev

replicaCount: 1

image:
  repository: dexidp/dex
  tag: v2.37.0
  pullPolicy: IfNotPresent

env:
  - name: KUBERNETES_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

service:
  type: NodePort
  port: 5556
  nodePort: 30000

  # For nodeport, specify the following:
  #   type: NodePort
  #   nodePort: <port-number>

tls:
  create: false

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - dex.example.com
  tls: []
  #  - secretName: dex.example.com
  #    hosts:
  #      - dex.example.com

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

# Configuration file for Dex
# Certainly secret fields can use environment variables
#
config: |-
  issuer: http://192.168.92.128:30000

  storage:
    type: kubernetes
    config:
      inCluster: true

  web:
    http: 0.0.0.0:5556
    # If enabled, be sure to configure tls settings above, or use a tool
    # such as let-encrypt to manage the certs.
    # Currently this chart does not support both http and https, and the port
    # is fixed to 5556
    #
    # https: 0.0.0.0:5556
    # tlsCert: /etc/dex/tls/tls.crt
    # tlsKey: /etc/dex/tls/tls.key

  frontend:
    theme: "coreos"
    issuer: "Example Co"
    issuerUrl: "https://example.com"
    logoUrl: https://example.com/images/logo-250x25.png

  expiry:
    signingKeys: "6h"
    idTokens: "24h"

  logger:
    level: debug
    format: json

  oauth2:
    responseTypes: ["code", "token", "id_token"]
    skipApprovalScreen: true

  # Remember you can have multiple connectors of the same 'type' (with different 'id's)
  # If you need e.g. logins with groups for two different Microsoft 'tenants'
  connectors:
  # These may not match the schema used by your LDAP server
  # https://github.com/coreos/dex/blob/master/Documentation/connectors/ldap.md
  - type: ldap
    id: ldap
    name: LDAP
    config:
      host: 192.168.92.128:389
      insecureNoSSL: true
      startTLS: false
      bindDN: cn=admin,dc=example,dc=com
      bindPW: "647252"
      userSearch:
        # Query should be "(&(objectClass=inetorgperson)(cn=<username>))"
        baseDN: ou=People,dc=example,dc=com
        filter: "(objectClass=inetorgperson)"
        username: cn
        # DN must be in capitals
        idAttr: DN
        emailAttr: mail
        nameAttr: cn
        preferredUsernameAttr: cn
      groupSearch:
        # Query should be "(&(objectClass=groupOfUniqueNames)(uniqueMember=<userAttr>))"
        baseDN: ou=Group,dc=example,dc=com
        filter: ""
        # DN must be in capitals
        userAttr: DN
        groupAttr: member
        nameAttr: cn

  # The 'name' must match the k8s API server's 'oidc-client-id'
  staticClients:
  - id: my-cluster
    name: "my-cluster"
    secret: "pUBnBOY80SnXgjibTYM9ZWNzY2xreNGQok"
    redirectURIs:
    - http://192.168.92.128:30001/callback

  enablePasswordDB: True
  staticPasswords:
  - email: "[email protected]"
    # bcrypt hash of the string "password"
    hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"
    username: "admin"
    userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"

# You should not enter your secrets here if this file will be stored in source control
# Instead create a separate file to hold or override these values
# You need only list the environment variables you used in the 'config' above
# You can add any additional ones you need, or remove ones you don't need
#
envSecrets:
  # GitHub
  GITHUB_CLIENT_ID: "override-me"
  GITHUB_CLIENT_SECRET: "override-me"

  # Google (oidc)
  GOOGLE_CLIENT_ID: "override-me"
  GOOGLE_CLIENT_SECRET: "override-me"

  # Microsoft
  MICROSOFT_APPLICATION_ID: "override-me"
  MICROSOFT_CLIENT_SECRET: "override-me"

  # LDAP
  LDAP_BINDPW: "123456"

As shown above: ...
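With both values files edited, installing the two charts could look roughly like the sketch below. The release names dex and dka and the namespace kube-authen (borrowed from the connectivity-testing post further down) are assumptions, and the commands are run from the repository root.

# create a namespace and install both charts, overriding defaults with the edited values files
kubectl create namespace kube-authen
helm install dex charts/dex -f dex-values.yaml -n kube-authen
helm install dka charts/dex-k8s-authenticator -f dka-values.yaml -n kube-authen

# confirm the releases and their Pods/Services came up
helm list -n kube-authen
kubectl get pods,svc -n kube-authen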

May 23, 2025

Testing Internal Connectivity in a K8s Cluster

When setting up a cluster, it is sometimes necessary to test connectivity between Pods inside the cluster, or to check whether the DNS service is working. The kubectl run command can create a Pod with a specified name, image, and namespace. For example, to create a busybox Pod in order to test whether Pods in the kube-authen namespace can be pinged and whether DNS resolution works:

kubectl run ping-test --image=busybox -n kube-authen -it -- sh

This command creates a Pod named ping-test from the busybox image in the kube-authen namespace and opens an interactive shell right after the Pod starts. Inside that shell you can run ping or nslookup tests (a short sketch follows below).

Sometimes only a one-off test is needed and the Pod should be removed as soon as it finishes; in that case run:

kubectl run ping-test --image=busybox -n kube-authen -it --rm -- ping google.com

The --rm flag means "delete the Pod automatically when the session ends".

Note

If the busybox Pod is created without a command, e.g. kubectl run ping-test --image=busybox -n kube-authen, the Pod will restart endlessly. The reason: with no command specified, busybox falls back to its default entrypoint, which starts a shell; since there is nothing for that shell to execute, it exits immediately. The container process therefore exits, K8s treats the container as failed and restarts it, and the cycle repeats, producing a CrashLoopBackOff error.
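Inside the busybox shell, the tests themselves are ordinary ping and nslookup calls. A minimal sketch, where the Service name dex and the Pod IP 10.244.1.15 are made-up examples (substitute a real Service and a Pod IP taken from kubectl get pods -o wide):

# DNS check: the cluster DNS (CoreDNS) should resolve built-in and namespaced Services
nslookup kubernetes.default.svc.cluster.local
nslookup dex.kube-authen.svc.cluster.local

# connectivity check against another Pod's IP
ping -c 3 10.244.1.15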

April 16, 2025

kubeadm init Wait Timeout Issue

Problem description

While configuring the master node, running kubeadm init --config kubeadm-config.yaml --v=5 failed at the step that initializes the control-plane Pods, with the following error:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
......

After checking, the hosts configuration was correct and the cluster IP address matched the local machine's. ...
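For context, the checks suggested by the error output itself usually come first. A generic checklist (not the specific fix described in the original post) might look like this:

# is the kubelet running at all, and what do its recent logs say?
systemctl status kubelet
journalctl -xeu kubelet --no-pager | tail -n 50

# a frequent culprit is a cgroup-driver mismatch between the kubelet and the container
# runtime; with containerd, check whether SystemdCgroup is enabled in its config
grep SystemdCgroup /etc/containerd/config.toml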

January 15, 2025