The NFS subdir external provisioner is an automatic provisioner for Kubernetes that uses your already configured NFS server, automatically creating Persistent Volumes.
```console
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path
```
This chart installs a custom storage class into a Kubernetes cluster using the Helm package manager. It also installs an NFS client provisioner into the cluster, which dynamically creates persistent volumes from a single NFS share.
To install the chart with the release name `my-release`:

```console
$ helm install my-release nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path
```
The command deploys the given storage class in the default configuration. It can be used afterwards to provision persistent volumes. The configuration section lists the parameters that can be configured during installation.
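Once installed, a PersistentVolumeClaim that references the storage class is enough to trigger dynamic provisioning of a subdirectory on the NFS share. A minimal sketch, assuming the chart's default `storageClass.name` of `nfs-client` (the claim name `test-claim` is illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim            # illustrative name
spec:
  storageClassName: nfs-client   # chart default; change if storageClass.name was overridden
  accessModes:
    - ReadWriteOnce              # matches the chart's default storageClass.accessModes
  resources:
    requests:
      storage: 1Mi
```

Applying this manifest with `kubectl apply -f` should result in a bound PVC backed by a new directory under the exported NFS path.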
Tip: List all releases using `helm list`.
To uninstall/delete the `my-release` deployment:

```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
The following table lists the configurable parameters of this chart and their default values.
| Parameter | Description | Default |
| --- | --- | --- |
| `replicaCount` | Number of provisioner instances to deploy | `1` |
| `strategyType` | Specifies the strategy used to replace old pods with new ones | `Recreate` |
| `image.repository` | Provisioner image | `registry.k8s.io/sig-storage/nfs-subdir-external-provisioner` |
| `image.tag` | Version of provisioner image | `v4.0.2` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `imagePullSecrets` | Image pull secrets | `[]` |
| `storageClass.name` | Name of the StorageClass | `nfs-client` |
| `storageClass.defaultClass` | Set as the default StorageClass | `false` |
| `storageClass.allowVolumeExpansion` | Allow expanding the volume | `true` |
| `storageClass.reclaimPolicy` | Method used to reclaim an obsoleted volume | `Delete` |
| `storageClass.provisionerName` | Name of the provisioner | `null` |
| `storageClass.archiveOnDelete` | Archive PVC when deleting | `true` |
| `storageClass.onDelete` | Strategy on PVC deletion. Overrides `archiveOnDelete` when set to lowercase values `delete` or `retain` | `null` |
| `storageClass.pathPattern` | Specifies a template for the directory name | `null` |
| `storageClass.accessModes` | Set access mode for the PV | `ReadWriteOnce` |
| `storageClass.volumeBindingMode` | Set volume binding mode for the StorageClass | `Immediate` |
| `storageClass.annotations` | Set additional annotations for the StorageClass | `{}` |
| `leaderElection.enabled` | Enables or disables leader election | `true` |
| `nfs.server` | Hostname or IP of the NFS server (required) | `null` |
| `nfs.path` | Basepath of the mount point to be used | `/nfs-storage` |
| `nfs.mountOptions` | Mount options (e.g. `nfsvers=3`) | `null` |
| `nfs.volumeName` | Volume name used inside the pods | `nfs-subdir-external-provisioner-root` |
| `nfs.reclaimPolicy` | Reclaim policy for the main NFS volume used for subdir provisioning | `Retain` |
| `resources` | Resources required (e.g. CPU, memory) | `{}` |
| `rbac.create` | Use Role-based Access Control | `true` |
| `podSecurityPolicy.enabled` | Create & use Pod Security Policy resources | `false` |
| `podAnnotations` | Additional annotations for the pods | `{}` |
| `priorityClassName` | Set pod `priorityClassName` | `null` |
| `serviceAccount.create` | Whether to create a ServiceAccount | `true` |
| `serviceAccount.name` | Name of the ServiceAccount to use | `null` |
| `serviceAccount.annotations` | Additional annotations for the ServiceAccount | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Affinity settings | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `labels` | Additional labels for any resource created | `{}` |
| `podDisruptionBudget.enabled` | Create and use a PodDisruptionBudget | `false` |
| `podDisruptionBudget.maxUnavailable` | Set maximum unavailable pods in the PodDisruptionBudget | `1` |
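Instead of repeating `--set` flags, the parameters above can be collected in a values file and passed to `helm install` with `-f`. A sketch of such a file overriding a few of the defaults listed above (the server address and export path are placeholders for your own environment):

```yaml
nfs:
  server: x.x.x.x           # placeholder; your NFS server hostname or IP
  path: /exported/path      # placeholder; your exported path
storageClass:
  name: nfs-client
  defaultClass: true        # make this the cluster's default StorageClass
  reclaimPolicy: Retain     # keep provisioned data when a PVC is deleted
```

It would then be installed with `helm install my-release nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -f values.yaml`.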
It is possible to install more than one provisioner in your cluster to have access to multiple NFS servers and/or multiple exports from a single NFS server. Each provisioner must have a different `storageClass.provisionerName` and a different `storageClass.name`. For example:
```console
$ helm install second-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=y.y.y.y \
    --set nfs.path=/other/exported/path \
    --set storageClass.name=second-nfs-client \
    --set storageClass.provisionerName=k8s-sigs.io/second-nfs-subdir-external-provisioner
```
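A claim against the second provisioner then differs only in its `storageClassName`. A sketch, assuming the `second-nfs-client` class created above (the claim name is illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: second-claim              # illustrative name
spec:
  storageClassName: second-nfs-client   # routes the claim to the second provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
```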