- Test if workers can mount nfs
- -----------------------------
- You can make use of dynamic provisioning, so as an admin you won't have to manually create a PV
- for every PVC that requests one. The concept in dynamic provisioning is that users request a
- PV through a VolumeClaimTemplate or PVC, and behind the scenes a provisioner creates
- the PV on the admin's behalf. So all PVs are dynamically provisioned based on the PVCs.
- When you delete a PVC, its PV is deleted as well. This is how it works out of the box if
- you are running your k8s cluster in the cloud.
- If you are running it locally, you need:
- 1. Access to an NFS server (e.g. your host machine, exporting /srv/nfs/kubedata)
- 2. A k8s cluster with multiple workers (e.g. 1 master, 2 workers).
- 3. A special type of Pod called nfs-client-provisioner.
- What does this pod do? It mounts the NFS volume (/srv/nfs/kubedata) under /persistentvolumes.
- This pod can run on any of the worker nodes; you just need to make sure it is always running.
- That pod is the gateway to the NFS share.
- 4. Create a ServiceAccount
- 5. A role for the service account
- 6. A rolebinding for the service account
- 7. A clusterrole for the service account
- 8. A clusterrolebinding for the service account
- 9. Create a storageclass
- 10. Point that storageclass to the provisioner pod
- 11. Create a deployment for the nfs provisioner
- 12. That deployment will create a ReplicaSet
- 13. That ReplicaSet will run a single instance of the provisioner pod (see the sketch right below)
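- A rough sketch of the kubectl side of steps 4-13 (the filenames are hypothetical; the repo linked
- further down splits the manifests in a similar way):
- -> kubectl create -f rbac.yaml          # SA, Role, RoleBinding, ClusterRole, ClusterRoleBinding (steps 4-8)
- -> kubectl create -f class.yaml         # StorageClass pointing at the provisioner (steps 9-10)
- -> kubectl create -f deployment.yaml    # nfs-client-provisioner Deployment -> ReplicaSet -> pod (steps 11-13)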
- Setup the NFS server:
- ---------------------
- sudo mkdir -p /srv/nfs/kubedata
- sudo chown nobody: /srv/nfs/kubedata
- sudo systemctl enable nfs-server
- sudo systemctl start nfs-server
- stop the firewall if it's running, or open the NFS-related services (a hedged firewalld sketch below)
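- -> sudo systemctl stop firewalld
- --> or, if you want to keep the firewall, allow the NFS services instead (assuming firewalld):
- -> sudo firewall-cmd --permanent --add-service=nfs
- -> sudo firewall-cmd --permanent --add-service=mountd
- -> sudo firewall-cmd --permanent --add-service=rpc-bind
- -> sudo firewall-cmd --reload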
- sudo vi /etc/exports
- -> /srv/nfs/kubedata *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
- sudo exportfs -rav
- --> exporting starts now
- sudo exportfs -v
- --> it tells you what is exported
- Go to your workers and try to mount this NFS to make sure it works.
- SSH into one of the workers
- test if the worker can ping the NFS server
- -> ping ultron.suse.de
- check which directories the NFS server exports
- -> showmount -e ultron.suse.de
- --> this will return the dir (e.g. /srv/nfs/kubedata *)
- mount the NFS into the worker
- -> mount -t nfs ultron.suse.de:/srv/nfs/kubedata /mnt
- test if it's mounted:
- -> mount | grep kubedata
- --> you should see it
- unmount it now
- -> umount /mnt
- Repeat this for every worker to make sure they can all mount NFS
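- To save time, you can run the same check on every worker over SSH in one loop (a sketch; the
- worker hostnames are assumed):
- -> for w in kworker1 kworker2; do ssh $w "sudo mount -t nfs ultron.suse.de:/srv/nfs/kubedata /mnt && mount | grep kubedata && sudo umount /mnt"; done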
- If all fine, proceed: https://github.com/justmeandopensource/kubernetes/tree/master/yamls/nfs-provisioner
# Create a service account called "nfs-client-provisioner"
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
# Create a ClusterRole called nfs-client-provisioner-runner
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
# Create a ClusterRoleBinding to bind this ClusterRole to its ServiceAccount,
# i.e. bind the nfs-client-provisioner SA to the nfs-client-provisioner-runner ClusterRole
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
# Similarly create a Role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# Now create a RoleBinding for that Role
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
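- Assuming you saved all the RBAC manifests above into a single file (e.g. rbac.yaml; the name is up to you):
- -> kubectl create -f rbac.yaml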
- Check those:
- -> kubectl get clusterrole,clusterrolebinding,role,rolebinding | grep nfs
# Next we need to create the StorageClass. This is the important object here.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"
- The name of the storage class is very important. We named it 'managed-nfs-storage' and we
- will need it when creating a PVC, otherwise the PV won't be provisioned. Secondly, note
- the name of the provisioner: "example.com/nfs".
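- Apply it and check that it is there (the filename is assumed):
- -> kubectl create -f class.yaml
- -> kubectl get storageclass
- --> you should see managed-nfs-storage with provisioner example.com/nfs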
# Finally look at the deployment.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # bind it to the SA we created
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes        # where the NFS share is mounted inside the pod
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs               # taken from the storageclass provisioner field
            - name: NFS_SERVER
              value: <<NFS Server IP>>             # e.g. ultron.suse.de
            - name: NFS_PATH
              value: /srv/nfs/kubedata             # directory the NFS server is exporting
      volumes:
        - name: nfs-client-root
          nfs:
            server: <<NFS Server IP>>              # e.g. ultron.suse.de
            path: /srv/nfs/kubedata                # directory the NFS server is exporting
- So, we are creating a Deployment named "nfs-client-provisioner" with 1 replica.
- It uses the ServiceAccount we created earlier, "nfs-client-provisioner".
- In terms of volumeMounts, we mount the NFS volume from the NFS server under "/persistentvolumes".
- In the "env" section we set PROVISIONER_NAME to the name of our provisioner, which must match the
- storageclass provisioner field ("example.com/nfs"), and we also set the NFS server address along
- with the directory it serves ("/srv/nfs/kubedata"). The volumeMounts entry refers, by the name
- "nfs-client-root", to the volume section right below with the same name. There you need to
- change the address of your NFS server and the path it serves.
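- Apply the deployment (the filename is assumed) and make sure the provisioner pod comes up:
- -> kubectl create -f deployment.yaml
- -> kubectl get pods -l app=nfs-client-provisioner
- --> STATUS should be Running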
- See the provisioner pod:
- -> kubectl describe pod nfs-client-provisioner-blahblahblah | less
- --> see the environment variables and the volume that is mounted
- -> kubectl get pv,pvc
- --> Nothing
- Let's test it:
- Go to the NFS server and check whether there is any data yet:
- -> ls /srv/nfs/kubedata
- --> none
- Let's create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: managed-nfs-storage   # change this to your storageclass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
- The name of the PVC is "pvc1". The storageclass is "managed-nfs-storage" (run kubectl get storageclass
- and look at the NAME column to get this value). Try kubectl get pv,pvc again.
- This claim triggers a PV to be provisioned dynamically, so we don't have to create it ourselves.
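- Assuming you saved the PVC above as pvc.yaml (the name is up to you):
- -> kubectl create -f pvc.yaml
- -> kubectl get pv,pvc
- --> both should now show STATUS Bound; the PV was created for you with a name like pvc-<uid>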
- Create a pod to test it:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  volumes:
    - name: host-volume
      persistentVolumeClaim:
        claimName: pvc1                    # the PVC we created
  containers:
    - image: busybox
      name: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 600"]
      volumeMounts:
        - name: host-volume
          mountPath: /mydata               # path inside the container
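- Assuming you saved this pod manifest as busybox.yaml (the name is up to you):
- -> kubectl create -f busybox.yaml
- -> kubectl get pod busybox
- --> wait until STATUS is Running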
- Go inside this pod and try to write something
- -> kubectl exec -it busybox -- sh
- -> touch /mydata/hello
- -> exit
- Go to the NFS server, check /srv/nfs/kubedata/default-pvc1-pvc-blahblahblah and do an 'ls';
- you should see the 'hello' file there.
- If you delete the pod, that does not delete the PVC (because we created the PVC manually).
- The RECLAIM POLICY on the PV says 'Delete', so when you delete the PVC, the PV is automatically deleted as well.
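- You can check the reclaim policy yourself:
- -> kubectl get pv
- --> look at the RECLAIM POLICY column; for dynamically provisioned PVs it is Delete by default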
- First delete the pod:
- -> kubectl delete pod busybox
- then the PVC
- -> kubectl delete pvc --all
- -> kubectl get pv,pvc
- ---> nothing
- https://github.com/justmeandopensource/kubernetes/tree/master/yamls
- https://github.com/SUSE/cf-ci/tree/master/automation-scripts/nfs-provisioner
- If you have an NFS server: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
- If you don't have an NFS server: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs