Deploying a MinIO Cluster with Helm
Add the Helm repository
```shell
# Create a minio directory; all subsequent steps are run inside it
[root@node1 ~]# mkdir minio
[root@node1 ~]# cd minio/

# Add the repository
[root@node1 minio]# helm repo add minio https://helm.min.io/
"minio" has been added to your repositories
[root@node1 minio]#
```
Search for and download the MinIO chart
```shell
[root@node1 minio]# helm search repo minio
NAME            CHART VERSION   APP VERSION     DESCRIPTION
bitnami/minio   12.8.6          2023.9.16       MinIO(R) is an object storage server, compatibl...
minio/minio     8.0.10          master          High Performance, Kubernetes Native Object Storage
stable/minio    0.5.5                           Distributed object storage server built for clo...

# Download the MinIO chart (helm fetch is an alias of helm pull in Helm 3)
[root@node1 minio]# helm fetch minio/minio
[root@node1 minio]# ls
minio-8.0.10.tgz

# Unpack the downloaded archive
[root@node1 minio]# tar -xf minio-8.0.10.tgz
[root@node1 minio]# ls
minio  minio-8.0.10.tgz
[root@node1 minio]#
```
Inspect the chart layout
```shell
# Enter the unpacked chart directory
[root@node1 minio]# cd minio/
# It contains the following files
[root@node1 minio]# ls
Chart.yaml  ci  README.md  templates  values.yaml
```
Create a namespace
The MinIO cluster will be deployed into this namespace.
```shell
[root@node1 ~]# kubectl create ns minio
namespace/minio created
```
Create a Secret
```shell
# Create a Secret named tls-ssl-minio in the minio namespace
[root@node1 cert]# kubectl create secret generic tls-ssl-minio --from-file=private.key --from-file=public.crt -n minio
secret/tls-ssl-minio created

# Verify that it was created
[root@node1 cert]# kubectl get secret -n minio
NAME            TYPE     DATA   AGE
tls-ssl-minio   Opaque   2      10s
```
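The `private.key` and `public.crt` files above come from a `cert` directory prepared beforehand; their creation is not shown. As a sketch, a self-signed pair could be generated with `openssl` along these lines (the subject CN is an assumption here, based on the in-cluster DNS name the chart uses; match it to however clients will reach MinIO):

```shell
# Sketch only: generate a self-signed key/cert pair for the Secret.
# The CN below is an assumption; adjust it for your environment.
mkdir -p cert && cd cert
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout private.key -out public.crt \
  -subj "/CN=minio.minio.svc.cluster.local"
```

For a production setup you would use a certificate issued by a real CA (or cert-manager) instead of a self-signed one.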
Adjust variables in values.yaml
```yaml
## Provide a name in place of minio for `app:` labels
##
nameOverride: ""

## Provide a name to substitute for the full names of resources
##
fullnameOverride: ""

## set kubernetes cluster domain where minio is running
##
clusterDomain: cluster.local

## Set default image, imageTag, and imagePullPolicy. mode is used to indicate the
##
image:
  repository: minio/minio
  tag: RELEASE.2021-02-14T04-01-33Z
  pullPolicy: IfNotPresent

## Set default image, imageTag, and imagePullPolicy for the `mc` (the minio
## client used to create a default bucket).
##
mcImage:
  repository: minio/mc
  tag: RELEASE.2021-02-14T04-28-06Z
  pullPolicy: IfNotPresent

## Set default image, imageTag, and imagePullPolicy for the `jq` (the JSON
## process used to create secret for prometheus ServiceMonitor).
##
helmKubectlJqImage:
  repository: bskim45/helm-kubectl-jq
  tag: 3.1.0
  pullPolicy: IfNotPresent

## minio server mode, i.e. standalone or distributed.
## Distributed Minio ref: https://docs.minio.io/docs/distributed-minio-quickstart-guide
##
mode: distributed  # deploy in distributed mode

## Additional labels to include with deployment or statefulset
additionalLabels: []

## Additional annotations to include with deployment or statefulset
additionalAnnotations: []

## Additional arguments to pass to minio binary
extraArgs: []

## Update strategy for Deployments
DeploymentUpdate:
  type: RollingUpdate
  maxUnavailable: 0
  maxSurge: 100%

## Update strategy for StatefulSets
StatefulSetUpdate:
  updateStrategy: RollingUpdate

## Pod priority settings
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Set default accesskey, secretkey, Minio config file path, volume mount path and
## number of nodes (only used for Minio distributed mode)
## AccessKey and secretKey is generated when not set
## Distributed Minio ref: https://docs.minio.io/docs/distributed-minio-quickstart-guide
##
accessKey: "admin"        # username for the web console
secretKey: "minio123456"  # password for the web console
certsPath: "/etc/minio/certs/"
configPathmc: "/etc/minio/mc/"
mountPath: "/export"

## Use existing Secret that store following variables:
##
## | Chart var             | .data.<key> in Secret    |
## |:----------------------|:-------------------------|
## | accessKey             | accesskey                |
## | secretKey             | secretkey                |
## | gcsgateway.gcsKeyJson | gcs_key.json             |
## | s3gateway.accessKey   | awsAccessKeyId           |
## | s3gateway.secretKey   | awsSecretAccessKey       |
## | etcd.clientCert       | etcd_client_cert.pem     |
## | etcd.clientCertKey    | etcd_client_cert_key.pem |
##
## All mentioned variables will be ignored in values file.
## .data.accesskey and .data.secretkey are mandatory,
## others depend on enabled status of corresponding sections.
existingSecret: ""

## Override the root directory which the minio server should serve from.
## If left empty, it defaults to the value of {{ .Values.mountPath }}
## If defined, it must be a sub-directory of the path specified in {{ .Values.mountPath }}
bucketRoot: ""

# Number of drives attached to a node
drivesPerNode: 1
# Number of MinIO containers running
replicas: 4
# Number of expanded MinIO clusters
zones: 1

## TLS Settings for Minio
tls:
  enabled: false
  ## Create a secret with private.key and public.crt files and pass that here. Ref: https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
  certSecret: ""
  publicCrt: public.crt
  privateKey: private.key

## Trusted Certificates Settings for Minio. Ref: https://docs.minio.io/docs/how-to-secure-access-to-minio-server-with-tls#install-certificates-from-third-party-cas
## Bundle multiple trusted certificates into one secret and pass that here. Ref: https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
## When using self-signed certificates, remember to include Minio's own certificate in the bundle with key public.crt.
## If certSecret is left empty and tls is enabled, this chart installs the public certificate from .Values.tls.certSecret.
trustedCertsSecret: ""

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true

  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  existingClaim: ""

  ## minio data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  ## Storage class of PV to bind. By default it looks for standard storage class.
  ## If the PV uses a different storage class, specify that here.
  storageClass: "rook-ceph-block"  # StorageClass used for persistence
  VolumeName: ""
  accessMode: ReadWriteOnce
  size: 10Gi  # size of the PVC requested by each Pod

  ## If subPath is set mount a sub folder of a volume instead of the root of the volume.
  ## This is especially handy for volume plugins that don't natively support sub mounting (like glusterfs).
  ##
  subPath: ""

## Expose the Minio service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##
service:
  type: NodePort   # must be NodePort for the nodePort setting below to take effect
  clusterIP: ~
  port: 9000
  nodePort: 32100  # the node port to expose

  ## List of IP addresses at which the Prometheus server service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []
  #  - externalIp1
  annotations: {}
  # prometheus.io/scrape: 'true'
  # prometheus.io/path: '/minio/prometheus/metrics'
  # prometheus.io/port: '9000'

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
imagePullSecrets: []
# - name: "image-pull-secret"

ingress:
  enabled: false
  labels: {}
    # node-role.kubernetes.io/ingress: platform
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    # kubernetes.io/ingress.allow-http: "false"
    # kubernetes.io/ingress.global-static-ip-name: ""
    # nginx.ingress.kubernetes.io/secure-backends: "true"
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
tolerations: []
affinity: {}

## Add stateful containers to have security context, if enabled MinIO will run as this
## user and group NOTE: securityContext is only enabled if persistence.enabled=true
securityContext:
  enabled: true
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000

# Additational pod annotations
podAnnotations: {}

# Additional pod labels
podLabels: {}

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 4Gi

## Create a bucket after minio install
##
defaultBucket:
  enabled: false
  ## If enabled, must be a string with length > 0
  name: bucket
  ## Can be one of none|download|upload|public
  policy: none
  ## Purge if bucket exists already
  purge: false
  ## set versioning for bucket true|false
  # versioning: false

## Create multiple buckets after minio install
## Enabling `defaultBucket` will take priority over this list
##
buckets: []
#   - name: bucket1
#     policy: none
#     purge: false
#   - name: bucket2
#     policy: none
#     purge: false

## Additional Annotations for the Kubernetes Batch (make-bucket-job)
makeBucketJob:
  podAnnotations:
  annotations:
  securityContext:
    enabled: false
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  resources:
    requests:
      memory: 128Mi

## Additional Annotations for the Kubernetes Batch (update-prometheus-secret)
updatePrometheusJob:
  podAnnotations:
  annotations:
  securityContext:
    enabled: false
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000

s3gateway:
  enabled: false
  replicas: 4
  serviceEndpoint: ""
  accessKey: ""
  secretKey: ""

## Use minio as an azure blob gateway, you should disable data persistence so no volume claim are created.
## https://docs.minio.io/docs/minio-gateway-for-azure
azuregateway:
  enabled: false
  # Number of parallel instances
  replicas: 4

## Use minio as GCS (Google Cloud Storage) gateway, you should disable data persistence so no volume claim are created.
## https://docs.minio.io/docs/minio-gateway-for-gcs
gcsgateway:
  enabled: false
  # Number of parallel instances
  replicas: 4
  # credential json file of service account key
  gcsKeyJson: ""
  # Google cloud project-id
  projectId: ""

## Use minio on NAS backend
## https://docs.minio.io/docs/minio-gateway-for-nas
nasgateway:
  enabled: false
  # Number of parallel instances
  replicas: 4
  # For NAS Gateway, you may want to bind the PVC to a specific PV. To ensure that happens, PV to bind to should have
  # a label like "pv: <value>", use value here.
  pv: ~

## Use this field to add environment variables relevant to Minio server.
## These fields will be passed on to Minio container(s) when Chart is deployed
environment: {}
  ## Please refer for comprehensive list https://docs.minio.io/docs/minio-server-configuration-guide.html
  ## MINIO_DOMAIN: "chart-example.local"
  ## MINIO_BROWSER: "off"

networkPolicy:
  enabled: false
  allowExternal: true

## PodDisruptionBudget settings
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
##
podDisruptionBudget:
  enabled: false
  maxUnavailable: 1

## Specify the service account to use for the Minio pods. If 'create' is set to 'false'
## and 'name' is left unspecified, the account 'default' will be used.
serviceAccount:
  create: true
  ## The name of the service account to use. If 'create' is 'true', a service account with that name
  ## will be created. Otherwise, a name will be auto-generated.
  name:

metrics:
  # Metrics can not be disabled yet: https://github.com/minio/minio/issues/7493
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    relabelConfigs: {}
    # namespace: monitoring
    # interval: 30s
    # scrapeTimeout: 10s

## ETCD settings: https://github.com/minio/minio/blob/master/docs/sts/etcd.md
## Define endpoints to enable this section.
etcd:
  endpoints: []
  pathPrefix: ""
  corednsPathPrefix: ""
  clientCert: ""
  clientCertKey: ""
```
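Only a handful of keys actually deviate from the chart defaults (`mode`, the credentials, persistence, and the service exposure). As an alternative to editing `values.yaml` in place, the same changes could be collected in an override file and passed with `helm install -f`; the file name `values-override.yaml` below is made up for illustration:

```yaml
# Hypothetical values-override.yaml with just the edited keys,
# usable as: helm install minio . -n minio -f values-override.yaml
mode: distributed
accessKey: "admin"
secretKey: "minio123456"
persistence:
  storageClass: "rook-ceph-block"
  size: 10Gi
service:
  type: NodePort
  nodePort: 32100
```

Keeping overrides separate makes it easier to upgrade the chart later without re-applying manual edits.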
Install with Helm
[root@node1 minio]# helm install minio . -n minio [root@node1 minio]# helm install minio . -n minio NAME: minio LAST DEPLOYED: Fri Sep 22 13:03:56 2023 NAMESPACE: minio STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Minio can be accessed via port 9000 on the following DNS name from within your cluster: minio.minio.svc.cluster.local To access Minio from localhost, run the below commands: 1. export POD_NAME=$(kubectl get pods --namespace minio -l "release=minio" -o jsonpath="{.items[0].metadata.name}") 2. kubectl port-forward $POD_NAME 9000 --namespace minio Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/ You can now access Minio server on http://localhost:9000. Follow the below steps to connect to Minio server with mc client: 1. Download the Minio mc client - https://docs.minio.io/docs/minio-client-quickstart-guide 2. Get the ACCESS_KEY=$(kubectl get secret minio -o jsonpath="{.data.accesskey}" | base64 --decode) and the SECRET_KEY=$(kubectl get secret minio -o jsonpath="{.data.secretkey}" | base64 --decode) 3. mc alias set minio-local http://localhost:9000 "$ACCESS_KEY" "$SECRET_KEY" --api s3v4 4. mc ls minio-local Alternately, you can use your browser or the Minio SDK to access the server - https://docs.minio.io/categories/17
Check that the Pods are running
```shell
[root@node1 ~]# kubectl get pods -n minio
NAME      READY   STATUS    RESTARTS   AGE
minio-0   1/1     Running   0          3m50s
minio-1   1/1     Running   0          3m50s
minio-2   1/1     Running   0          3m50s
minio-3   1/1     Running   0          3m49s

# Check the PVCs
[root@node1 minio]# kubectl get pvc -n minio
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
export-minio-0   Bound    pvc-a70a5caa-2bad-40ab-bf19-dbbfccd3d151   10Gi       RWO            rook-ceph-block   28s
export-minio-1   Bound    pvc-c3c1a8c5-d429-47e4-acf6-a57da3bacd5a   10Gi       RWO            rook-ceph-block   28s
export-minio-2   Bound    pvc-15ed67b3-7ea0-4f06-ba46-93137480066c   10Gi       RWO            rook-ceph-block   28s
export-minio-3   Bound    pvc-a58ceb7f-dac0-4ee7-9a34-473c8f336d28   10Gi       RWO            rook-ceph-block   28s
```
Check the Service addresses
```shell
[root@node1 ~]# kubectl get service -n minio
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
minio       NodePort    10.98.63.2   <none>        9000:32100/TCP   8m25s
minio-svc   ClusterIP   None         <none>        9000/TCP         8m25s
```
Log in to the web console and upload a file to verify
Command-line access test
```shell
# Download the client
curl https://dl.min.io/client/mc/release/linux-amd64/mc \
  --create-dirs -o $HOME/minio-binaries/mc
chmod +x $HOME/minio-binaries/mc
export PATH=$PATH:$HOME/minio-binaries/

# View the client's help output
[root@node1 minio]# mc --help
NAME:
  mc - MinIO Client for object storage and filesystems.

USAGE:
  mc [FLAGS] COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...]

COMMANDS:
  alias      manage server credentials in configuration file
  ls         list buckets and objects
  mb         make a bucket
  rb         remove a bucket
  cp         copy objects
  mv         move objects
  rm         remove object(s)
  mirror     synchronize object(s) to a remote site
  cat        display object contents
  head       display first 'n' lines of an object
  pipe       stream STDIN to an object
  find       search for objects
  sql        run sql queries on objects
  stat       show object metadata
  tree       list buckets and objects in a tree format
  du         summarize disk usage recursively
  retention  set retention for object(s)
  legalhold  manage legal hold for object(s)
  support    support related commands
  license    license related commands
  share      generate URL for temporary access to an object
  version    manage bucket versioning
  ilm        manage bucket lifecycle
  quota      manage bucket quota
  encrypt    manage bucket encryption config
  event      manage object notifications
```
Access files stored in MinIO
```shell
# Retrieve the username and password
[root@node1 minio]# ACCESS_KEY=$(kubectl get secret minio -o jsonpath="{.data.accesskey}" -n minio | base64 --decode)
[root@node1 minio]# echo $ACCESS_KEY
admin
[root@node1 minio]# SECRET_KEY=$(kubectl get secret minio -o jsonpath="{.data.secretkey}" -n minio | base64 --decode)
[root@node1 minio]# echo $SECRET_KEY
minio123456

# Connect to MinIO:
# mc alias saves the server's credentials to the client configuration file
[root@node1 minio]# mc alias set my-minio http://192.168.0.184:32100 "$ACCESS_KEY" "$SECRET_KEY" --api s3v4
Added `my-minio` successfully.

# List what the my-minio server stores
[root@node1 minio]# mc ls my-minio
[2023-09-22 19:14:56 CST]     0B books/
[root@node1 minio]# mc ls my-minio/books
[2023-09-22 19:15:19 CST] 5.7MiB STANDARD 植物 鲜花.jpg
```
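The values in a Kubernetes Secret's `.data` are base64-encoded, which is why the `jsonpath` output above is piped through `base64 --decode`. The round trip can be reproduced locally without a cluster:

```shell
# Round trip: "admin" as it would appear base64-encoded in the Secret
encoded=$(printf 'admin' | base64)
echo "$encoded"                         # YWRtaW4=
printf '%s' "$encoded" | base64 --decode  # prints: admin
```

Note that `kubectl get secret ... -o yaml` therefore never shows credentials in plain text, but anyone who can read the Secret can decode them just as easily.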
Problems encountered
Problem 1
```shell
[root@node1 minio]# helm install minio . -n minio
Error: INSTALLATION FAILED: create: failed to create: Secret "sh.helm.release.v1.minio.v1" is invalid: data: Too long: must have at most 1048576 bytes
```
The error was caused by a `cert` directory I had created inside the unpacked chart directory, along with the certificate files generated in it: Helm 3 stores the entire chart contents in a release Secret, and the extra files pushed it past the 1 MiB (1048576 bytes) limit reported in the message.
Fix: delete everything in the chart directory that is not part of the MinIO chart itself (including extra subdirectories), then rerun the install.
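An alternative to deleting the extra files is to exclude them from the chart payload with a `.helmignore` file in the chart root; Helm skips matching paths when it loads the chart, so they never enter the release Secret. The patterns below are assumptions based on the layout described above:

```shell
# Sketch: keep locally generated certs and downloaded archives out of
# the packaged release via .helmignore (run in the chart directory).
cd "$(mktemp -d)"        # stand-in for the chart directory
cat >> .helmignore <<'EOF'
cert/
*.tgz
EOF
cat .helmignore
```

With the patterns in place, `helm install minio . -n minio` can be rerun without touching the extra files.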
References
- github.com/minio/minio…
- artifacthub.io/packages/he…
- github.com/minio/opera…