Test if workers can mount nfs
-----------------------------

You can make use of dynamic provisioning, so as an admin you won't have to manually create a PV
for every PVC that requests one. The idea behind dynamic provisioning is that users request a PV
through a VolumeClaimTemplate or a plain PVC, and behind the scenes a provisioner creates the PV
on the admin's behalf. All PVs are then provisioned dynamically based on the PVCs. When you delete
a PVC, its PV is deleted as well. This is how it works out of the box if you are running your k8s
cluster in the cloud.

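For the VolumeClaimTemplate route, here is a minimal sketch (assuming the 'managed-nfs-storage'
StorageClass created further down in these notes, and a headless Service named 'web' that is not
shown here): a StatefulSet asks for one PVC per replica, and each claim is provisioned dynamically.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web                  # headless Service; assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:             # one PVC per replica, provisioned dynamically
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 500Mi
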
If you are running it locally, you need:

1. Access to an NFS server (e.g. your host machine, exporting /srv/nfs/kubedata)
2. A k8s cluster with multiple workers (e.g. 1 master, 2 workers).
3. A special type of pod called the NFS client provisioner.

What does this pod do? It mounts the NFS export (/srv/nfs/kubedata) under /persistentvolumes.
The pod can run on any of the worker nodes; you just need to make sure it is always running.
It is the gateway between the cluster and the NFS server.

4. Create a ServiceAccount
5. A Role for the service account
6. A RoleBinding for the service account
7. A ClusterRole for the service account
8. A ClusterRoleBinding for the service account
9. Create a StorageClass
10. Point that StorageClass to the provisioner pod
11. Create a Deployment for the NFS provisioner
12. That Deployment will create a ReplicaSet
13. The ReplicaSet keeps one instance of the provisioner pod running

Setup the NFS server:
---------------------
sudo mkdir -p /srv/nfs/kubedata
sudo chown nobody: /srv/nfs/kubedata
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
stop the firewall if it is running (or open the NFS ports, see below)
sudo vi /etc/exports
-> /srv/nfs/kubedata *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
sudo exportfs -rav
-> re-exports everything listed in /etc/exports
sudo exportfs -v
-> shows what is currently exported

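If you would rather keep the firewall running, a sketch for firewalld (assuming the default
firewalld service definitions) is to allow the NFS-related services instead of stopping it:

sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --reload
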
Go to your workers and try to mount this NFS export to make sure it works.
SSH into one of the workers
test if the worker can reach the NFS server
-> ping ultron.suse.de
list the directories the NFS server exports
-> showmount -e ultron.suse.de
--> this will return the exported dir (e.g. /srv/nfs/kubedata *)
mount the NFS export on the worker
-> sudo mount -t nfs ultron.suse.de:/srv/nfs/kubedata /mnt
test if it's mounted:
-> mount | grep kubedata
--> you should see it
unmount it now
-> sudo umount /mnt
Repeat this on every worker to make sure they can all mount the NFS export.

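If the mount command fails with something like 'wrong fs type' or 'bad option', the worker is
probably missing the NFS client utilities. Depending on the distribution, one of these should
install them:

sudo zypper install nfs-client      # openSUSE / SLES
sudo apt-get install nfs-common     # Debian / Ubuntu
sudo yum install nfs-utils          # CentOS / RHEL / Fedora
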
If all fine, proceed: https://github.com/justmeandopensource/kubernetes/tree/master/yamls/nfs-provisioner

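A possible order for applying the manifests described below (the file names are just examples,
use whatever you saved them as):

git clone https://github.com/justmeandopensource/kubernetes.git
cd kubernetes/yamls/nfs-provisioner
kubectl apply -f rbac.yaml          # ServiceAccount, Role/RoleBinding, ClusterRole/ClusterRoleBinding
kubectl apply -f class.yaml         # StorageClass
kubectl apply -f deployment.yaml    # the provisioner Deployment
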
# Create a service account called "nfs-client-provisioner"

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---

# Create a ClusterRole called nfs-client-provisioner-runner

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---

# Create a ClusterRoleBinding to bind this ClusterRole to its ServiceAccount
# i.e. binding the nfs-client-provisioner SA to the nfs-client-provisioner-runner ClusterRole

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---

# Similarly create a Role

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---

# Now create a RoleBinding for that Role

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---

Check those:
-> kubectl get clusterrole,clusterrolebinding,role,rolebinding | grep nfs

# Next we need to create the StorageClass. This is the important object here.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"

The name of the StorageClass is very important. We named it 'managed-nfs-storage' and we are
going to need that name when creating PVCs, otherwise the PV won't be provisioned. Secondly,
note the name of the provisioner, "example.com/nfs"; the deployment below must use the same value.

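Optionally (not needed for the rest of these notes), you can mark this class as the cluster
default, so PVCs that don't set a storageClassName still get provisioned by it:

kubectl patch storageclass managed-nfs-storage \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
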
# Finally look at the deployment.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # bind it to the SA
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes        # where the NFS share is mounted inside the pod
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs               # taken from the StorageClass provisioner field
            - name: NFS_SERVER
              value: <<NFS Server IP>>             # e.g. ultron.suse.de
            - name: NFS_PATH
              value: /srv/nfs/kubedata             # directory the NFS server exports
      volumes:
        - name: nfs-client-root
          nfs:
            server: <<NFS Server IP>>              # e.g. ultron.suse.de
            path: /srv/nfs/kubedata                # directory the NFS server exports

So, we are deploying a Deployment named "nfs-client-provisioner" with 1 replica, using the SA
we created earlier, "nfs-client-provisioner". In the volumeMounts section we mount the NFS volume
from the NFS server under "/persistentvolumes". In the "env" section we set PROVISIONER_NAME to
the name of our provisioner as found in the StorageClass provisioner field, "example.com/nfs",
and we also set the NFS server address along with the directory it exports, "/srv/nfs/kubedata".
The volumeMounts entry refers, by the name "nfs-client-root", to the volume of the same name in
the volumes section right below; that is where you change the address of your NFS server and the
path it exports.

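A quick way to check that the provisioner came up (the label comes from the Deployment template above):

-> kubectl get deployment nfs-client-provisioner
-> kubectl get pods -l app=nfs-client-provisioner
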
See the provisioner pod:

-> kubectl describe pod nfs-client-provisioner-blahblahblah | less
--> see the environment variables and the volume that is mounted

-> kubectl get pv,pvc
--> nothing yet

Let's test it:

Go to the NFS server and check whether there is any data:
-> ls /srv/nfs/kubedata
--> none

Let's create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: managed-nfs-storage   # change this to your StorageClass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi

The name of the PVC is "pvc1". The StorageClass is "managed-nfs-storage" (run kubectl get storageclass
and look at the NAME column to get this value). Try kubectl get pv,pvc again.

This claim triggers a request for a PV, which is provisioned dynamically, so we don't have to create it ourselves.

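To confirm the claim actually bound, and to find the PV that was created for it:

-> kubectl get pvc pvc1
-> kubectl get pv $(kubectl get pvc pvc1 -o jsonpath='{.spec.volumeName}')
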
Create a pod to test it:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  volumes:
    - name: host-volume
      persistentVolumeClaim:
        claimName: pvc1              # that's the PVC we created
  containers:
    - image: busybox
      name: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 600"]
      volumeMounts:
        - name: host-volume
          mountPath: /mydata         # mount point inside the container

Go inside this pod and try to write something:
-> kubectl exec -it busybox -- sh
-> touch /mydata/hello
-> exit

Go to the NFS server, check /srv/nfs/kubedata/default-pvc1-pvc-blahblahblah and run 'ls' there.


Deleting the pod is not going to delete the PVC (because we created the PVC manually).
The RECLAIM POLICY on the PV says 'Delete', so when you delete the PVC, the PV is automatically
deleted as well.
First delete the pod:
-> kubectl delete pod busybox
then the PVC:
-> kubectl delete pvc --all
-> kubectl get pv,pvc
--> nothing

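Note on the data: with archiveOnDelete set to "false" in the StorageClass above, the
nfs-client-provisioner removes the backing directory on the NFS export when the PV is deleted.
As far as I understand the provisioner, setting it to "true" keeps the data instead by renaming
the directory with an archived- prefix:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "true"   # archive the directory instead of deleting it
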

https://github.com/justmeandopensource/kubernetes/tree/master/yamls

https://github.com/SUSE/cf-ci/tree/master/automation-scripts/nfs-provisioner


If you have an NFS server: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
If you don't have an NFS server: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs