Added a node and osd to an existing cluster
lifeboy — Apr 27th, 2016
Steps I took to try to reinstall and add a node (host s2) to a ceph cluster:

Do a clean Debian Jessie installation.
After some tests, I ran "ceph-deploy purge s2" and then "ceph-deploy purgedata s2".
No errors were reported.

I then cleaned the CRUSH map to remove references to the old s2 host from "ceph osd tree"
(see https://arvimal.wordpress.com/2015/05/07/how-to-remove-a-host-from-a-ceph-cluster/)

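For reference, the host-removal sequence from that post is roughly the following (a sketch, not copied from my session; `<id>` stands for each osd that used to live on s2):

```bash
# Hedged sketch of removing a dead host from the CRUSH map.
# Run from a monitor/admin node; repeat the osd steps for every osd on s2.
ceph osd out <id>                 # mark the osd out so data rebalances away
ceph osd crush remove osd.<id>    # drop the osd from the CRUSH map
ceph auth del osd.<id>            # remove its cephx key
ceph osd rm <id>                  # remove the osd from the cluster
ceph osd crush remove s2          # finally remove the now-empty host bucket
```

These need a live cluster and admin keyring, so run them one at a time and watch `ceph osd tree` after each step.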
root@h1:~# ceph-deploy install --release hammer s2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy install --release hammer s2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f00b20089e0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7f00b28d2de8>
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['s2']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : hammer
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts s2
[ceph_deploy.install][DEBUG ] Detecting platform for host s2 ...
[s2][DEBUG ] connected to host: s2
[s2][DEBUG ] detect platform information from remote host
[s2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: debian 8.4 jessie
[s2][INFO  ] installing Ceph on s2
[s2][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[s2][DEBUG ] Reading package lists...
[s2][DEBUG ] Building dependency tree...
[s2][DEBUG ] Reading state information...
[s2][DEBUG ] apt-transport-https is already the newest version.
[s2][DEBUG ] ca-certificates is already the newest version.
[s2][DEBUG ] The following packages were automatically installed and are no longer required:
[s2][DEBUG ]   cryptsetup-bin gdisk libaio1 libbabeltrace-ctf1 libbabeltrace1
[s2][DEBUG ]   libboost-program-options1.55.0 libboost-system1.55.0 libboost-thread1.55.0
[s2][DEBUG ]   libcephfs1 libfcgi0ldbl libgoogle-perftools4 libjs-jquery libleveldb1
[s2][DEBUG ]   liblttng-ust-ctl2 liblttng-ust0 libnspr4 libnss3 librados2 librbd1
[s2][DEBUG ]   libsnappy1 libtcmalloc-minimal4 libunwind8 liburcu2 python-cephfs
[s2][DEBUG ]   python-flask python-itsdangerous python-jinja2 python-markupsafe
[s2][DEBUG ]   python-rados python-rbd python-requests python-urllib3 python-werkzeug
[s2][DEBUG ]   sdparm xfsprogs
[s2][DEBUG ] Use 'apt-get autoremove' to remove them.
[s2][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
[s2][INFO  ] Running command: wget -O release.asc https://download.ceph.com/keys/release.asc
[s2][WARNIN] --2016-04-27 19:35:21--  https://download.ceph.com/keys/release.asc
[s2][WARNIN] Resolving download.ceph.com (download.ceph.com)... 173.236.253.173, 2607:f298:6050:51f3:f816:3eff:fe71:9135
[s2][WARNIN] Connecting to download.ceph.com (download.ceph.com)|173.236.253.173|:443... connected.
[s2][WARNIN] HTTP request sent, awaiting response... 200 OK
[s2][WARNIN] Length: 1645 (1.6K) [application/octet-stream]
[s2][WARNIN] Saving to: ‘release.asc’
[s2][WARNIN]
[s2][WARNIN]      0K .                                                     100% 22.6M=0s
[s2][WARNIN]
[s2][WARNIN] 2016-04-27 19:35:22 (22.6 MB/s) - ‘release.asc’ saved [1645/1645]
[s2][WARNIN]
[s2][INFO  ] Running command: apt-key add release.asc
[s2][DEBUG ] OK
[s2][DEBUG ] add deb repo to /etc/apt/sources.list.d/
[s2][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[s2][DEBUG ] Ign http://debian.mirror.ac.za jessie InRelease
[s2][DEBUG ] Hit http://debian.mirror.ac.za jessie Release.gpg
[s2][DEBUG ] Hit http://debian.mirror.ac.za jessie Release
[s2][DEBUG ] Hit http://security.debian.org jessie/updates InRelease
[s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/main Sources
[s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/non-free Sources
[s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/main amd64 Packages
[s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/non-free amd64 Packages
[s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/main Translation-en
[s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/non-free Translation-en
[s2][DEBUG ] Hit http://security.debian.org jessie/updates/main Sources
[s2][DEBUG ] Hit http://security.debian.org jessie/updates/main amd64 Packages
[s2][DEBUG ] Hit http://security.debian.org jessie/updates/main Translation-en
[s2][DEBUG ] Hit https://download.ceph.com jessie InRelease
[s2][DEBUG ] Hit https://download.ceph.com jessie/main amd64 Packages
[s2][DEBUG ] Get:1 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
[s2][DEBUG ] Get:2 https://download.ceph.com jessie/main Translation-en [177 B]
[s2][DEBUG ] Get:3 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
[s2][DEBUG ] Get:4 https://download.ceph.com jessie/main Translation-en [177 B]
[s2][DEBUG ] Get:5 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
[s2][DEBUG ] Get:6 https://download.ceph.com jessie/main Translation-en [177 B]
[s2][DEBUG ] Get:7 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
[s2][DEBUG ] Get:8 https://download.ceph.com jessie/main Translation-en [177 B]
[s2][DEBUG ] Get:9 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
[s2][DEBUG ] Ign https://download.ceph.com jessie/main Translation-en_ZA
[s2][DEBUG ] Get:10 https://download.ceph.com jessie/main Translation-en [177 B]
[s2][DEBUG ] Ign https://download.ceph.com jessie/main Translation-en
[s2][DEBUG ] Reading package lists...
[s2][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph ceph-mds radosgw
[s2][DEBUG ] Reading package lists...
[s2][DEBUG ] Building dependency tree...
[s2][DEBUG ] Reading state information...
[s2][DEBUG ] The following extra packages will be installed:
[s2][DEBUG ]   ceph-common
[s2][DEBUG ] Recommended packages:
[s2][DEBUG ]   btrfs-tools libradosstriper1 ceph-fs-common ceph-fuse
[s2][DEBUG ] The following NEW packages will be installed:
[s2][DEBUG ]   ceph ceph-common ceph-mds radosgw
[s2][DEBUG ] 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
[s2][DEBUG ] Need to get 0 B/28.2 MB of archives.
[s2][DEBUG ] After this operation, 133 MB of additional disk space will be used.
[s2][DEBUG ] Selecting previously unselected package ceph-common.
(Reading database ... 28166 files and directories currently installed.)
[s2][DEBUG ] Preparing to unpack .../ceph-common_0.94.6-1~bpo80+1_amd64.deb ...
[s2][DEBUG ] Unpacking ceph-common (0.94.6-1~bpo80+1) ...
[s2][DEBUG ] Selecting previously unselected package ceph.
[s2][DEBUG ] Preparing to unpack .../ceph_0.94.6-1~bpo80+1_amd64.deb ...
[s2][DEBUG ] Unpacking ceph (0.94.6-1~bpo80+1) ...
[s2][DEBUG ] Selecting previously unselected package ceph-mds.
[s2][DEBUG ] Preparing to unpack .../ceph-mds_0.94.6-1~bpo80+1_amd64.deb ...
[s2][DEBUG ] Unpacking ceph-mds (0.94.6-1~bpo80+1) ...
[s2][DEBUG ] Selecting previously unselected package radosgw.
[s2][DEBUG ] Preparing to unpack .../radosgw_0.94.6-1~bpo80+1_amd64.deb ...
[s2][DEBUG ] Unpacking radosgw (0.94.6-1~bpo80+1) ...
[s2][DEBUG ] Processing triggers for man-db (2.7.0.2-5) ...
[s2][DEBUG ] Processing triggers for systemd (215-17+deb8u4) ...
[s2][DEBUG ] Setting up ceph-common (0.94.6-1~bpo80+1) ...
[s2][DEBUG ] Setting up ceph (0.94.6-1~bpo80+1) ...
[s2][DEBUG ] Setting up ceph-mds (0.94.6-1~bpo80+1) ...
[s2][DEBUG ] Setting up radosgw (0.94.6-1~bpo80+1) ...
[s2][DEBUG ] Processing triggers for systemd (215-17+deb8u4) ...
[s2][DEBUG ] Processing triggers for libc-bin (2.19-18+deb8u4) ...
[s2][INFO  ] Running command: ceph --version
[s2][DEBUG ] ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)

root@h1:~# ceph-deploy mon create s2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy mon create s2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5a182a35a8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['s2']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f5a18714668>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts s2
[ceph_deploy.mon][DEBUG ] detecting platform for host s2 ...
[s2][DEBUG ] connected to host: s2
[s2][DEBUG ] detect platform information from remote host
[s2][DEBUG ] detect machine type
[s2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: debian 8.4 jessie
[s2][DEBUG ] determining if provided host has same hostname in remote
[s2][DEBUG ] get remote short hostname
[s2][DEBUG ] deploying mon to s2
[s2][DEBUG ] get remote short hostname
[s2][DEBUG ] remote hostname: s2
[s2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[s2][DEBUG ] create the mon path if it does not exist
[s2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-s2/done
[s2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-s2/done
[s2][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-s2.mon.keyring
[s2][DEBUG ] create the monitor keyring file
[s2][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i s2 --keyring /var/lib/ceph/tmp/ceph-s2.mon.keyring
[s2][DEBUG ] ceph-mon: mon.noname-a 192.168.121.32:6789/0 is local, renaming to mon.s2
[s2][DEBUG ] ceph-mon: set fsid to d46f81c5-7a6d-4151-8fc2-f9899ae8d311
[s2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-s2 for mon.s2
[s2][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-s2.mon.keyring
[s2][DEBUG ] create a done file to avoid re-doing the mon deployment
[s2][DEBUG ] create the init path if it does not exist
[s2][DEBUG ] locating the `service` executable...
[s2][INFO  ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.s2
[s2][DEBUG ] === mon.s2 ===
[s2][DEBUG ] Starting Ceph mon.s2 on s2...
[s2][WARNIN] Running as unit ceph-mon.s2.1461778779.597847648.service.
[s2][DEBUG ] Starting ceph-create-keys on s2...
[s2][INFO  ] Running command: systemctl enable ceph
[s2][WARNIN] Synchronizing state for ceph.service with sysvinit using update-rc.d...
[s2][WARNIN] Executing /usr/sbin/update-rc.d ceph defaults
[s2][WARNIN] Executing /usr/sbin/update-rc.d ceph enable
[s2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s2.asok mon_status
[s2][DEBUG ] ********************************************************************************
[s2][DEBUG ] status for monitor: mon.s2
[s2][DEBUG ] {
[s2][DEBUG ]   "election_epoch": 2,
[s2][DEBUG ]   "extra_probe_peers": [],
[s2][DEBUG ]   "monmap": {
[s2][DEBUG ]     "created": "0.000000",
[s2][DEBUG ]     "epoch": 1,
[s2][DEBUG ]     "fsid": "d46f81c5-7a6d-4151-8fc2-f9899ae8d311",
[s2][DEBUG ]     "modified": "0.000000",
[s2][DEBUG ]     "mons": [
[s2][DEBUG ]       {
[s2][DEBUG ]         "addr": "192.168.121.32:6789/0",
[s2][DEBUG ]         "name": "s2",
[s2][DEBUG ]         "rank": 0
[s2][DEBUG ]       }
[s2][DEBUG ]     ]
[s2][DEBUG ]   },
[s2][DEBUG ]   "name": "s2",
[s2][DEBUG ]   "outside_quorum": [],
[s2][DEBUG ]   "quorum": [
[s2][DEBUG ]     0
[s2][DEBUG ]   ],
[s2][DEBUG ]   "rank": 0,
[s2][DEBUG ]   "state": "leader",
[s2][DEBUG ]   "sync_provider": []
[s2][DEBUG ] }
[s2][DEBUG ] ********************************************************************************
[s2][INFO  ] monitor: mon.s2 is running
[s2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s2.asok mon_status

root@h1:~# ceph-deploy gatherkeys s2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy gatherkeys s2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc60ffc9518>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['s2']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x7fc61042d050>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.client.admin.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking s2 for /var/lib/ceph/bootstrap-rgw/ceph.keyring
[s2][DEBUG ] connected to host: s2
[s2][DEBUG ] detect platform information from remote host
[s2][DEBUG ] detect machine type
[s2][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-rgw.keyring key from s2.

root@h1:~# ceph-deploy disk list s2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy disk list s2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4df78d95f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f4df78b6578>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('s2', None, None)]
[s2][DEBUG ] connected to host: s2
[s2][DEBUG ] detect platform information from remote host
[s2][DEBUG ] detect machine type
[s2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: debian 8.4 jessie
[ceph_deploy.osd][DEBUG ] Listing disks on s2...
[s2][DEBUG ] find the location of an executable
[s2][INFO  ] Running command: /usr/sbin/ceph-disk list
[s2][DEBUG ] /dev/sda :
[s2][DEBUG ]  /dev/sda1 other, ext2, mounted on /boot
[s2][DEBUG ]  /dev/sda2 other, 0x5
[s2][DEBUG ]  /dev/sda5 other, LVM2_member
[s2][DEBUG ] /dev/sdb other, unknown

NOTE: So here /dev/sdb is unformatted and ready to be prepared as an OSD.
root@h1:~# ceph-deploy osd prepare s2:sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy osd prepare s2:sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('s2', '/dev/sdb', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f15600a0fc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f1560078500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks s2:/dev/sdb:
[s2][DEBUG ] connected to host: s2
[s2][DEBUG ] detect platform information from remote host
[s2][DEBUG ] detect machine type
[s2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: debian 8.4 jessie
[ceph_deploy.osd][DEBUG ] Deploying osd to s2
[s2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host s2 disk /dev/sdb journal None activate False
[s2][INFO  ] Running command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[s2][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[s2][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdb
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:80d33d45-f122-4e2a-be80-5dd88ff000ba --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[s2][DEBUG ] Setting name!
[s2][DEBUG ] partNum is 1
[s2][DEBUG ] REALLY setting name!
[s2][DEBUG ] The operation has completed successfully.
[s2][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdb
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[s2][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/80d33d45-f122-4e2a-be80-5dd88ff000ba
[s2][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/80d33d45-f122-4e2a-be80-5dd88ff000ba
[s2][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:025957f8-448b-45bc-b461-e62c34943585 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb
[s2][DEBUG ] Setting name!
[s2][DEBUG ] partNum is 0
[s2][DEBUG ] REALLY setting name!
[s2][DEBUG ] The operation has completed successfully.
[s2][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sdb
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[s2][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[s2][DEBUG ] meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=60719917 blks
[s2][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[s2][DEBUG ]          =                       crc=0        finobt=0
[s2][DEBUG ] data     =                       bsize=4096   blocks=242879665, imaxpct=25
[s2][DEBUG ]          =                       sunit=0      swidth=0 blks
[s2][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
[s2][DEBUG ] log      =internal log           bsize=4096   blocks=118593, version=2
[s2][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[s2][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[s2][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.mtqv6i with options noatime,inode64
[s2][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.mtqv6i
[s2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.mtqv6i
[s2][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.mtqv6i/journal -> /dev/disk/by-partuuid/80d33d45-f122-4e2a-be80-5dd88ff000ba
[s2][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.mtqv6i
[s2][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.mtqv6i
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[s2][DEBUG ] Warning: The kernel is still using the old partition table.
[s2][DEBUG ] The new table will be used at the next reboot.
[s2][DEBUG ] The operation has completed successfully.
[s2][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdb
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
[s2][INFO  ] checking OSD status...
[s2][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host s2 is now ready for osd use.

Just to check:

root@h1:~# ceph-deploy disk list s2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy disk list s2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f412c5a15f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f412c57e578>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('s2', None, None)]
[s2][DEBUG ] connected to host: s2
[s2][DEBUG ] detect platform information from remote host
[s2][DEBUG ] detect machine type
[s2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: debian 8.4 jessie
[ceph_deploy.osd][DEBUG ] Listing disks on s2...
[s2][DEBUG ] find the location of an executable
[s2][INFO  ] Running command: /usr/sbin/ceph-disk list
[s2][DEBUG ] /dev/sda :
[s2][DEBUG ]  /dev/sda1 other, ext2, mounted on /boot
[s2][DEBUG ]  /dev/sda2 other, 0x5
[s2][DEBUG ]  /dev/sda5 other, LVM2_member
[s2][DEBUG ] /dev/sdb :
[s2][DEBUG ]  /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
[s2][DEBUG ]  /dev/sdb2 ceph journal, for /dev/sdb1

So, /dev/sdb has been partitioned as expected.
root@h1:~# ceph-deploy osd activate s2:/dev/sdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy osd activate s2:/dev/sdb1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fdef2922fc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fdef28fa500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('s2', '/dev/sdb1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks s2:/dev/sdb1:
[s2][DEBUG ] connected to host: s2
[s2][DEBUG ] detect platform information from remote host
[s2][DEBUG ] detect machine type
[s2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: debian 8.4 jessie
[ceph_deploy.osd][DEBUG ] activating host s2 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[s2][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
[s2][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[s2][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.fFKTJr with options noatime,inode64
[s2][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.fFKTJr
[s2][WARNIN] DEBUG:ceph-disk:Cluster uuid is d46f81c5-7a6d-4151-8fc2-f9899ae8d311
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[s2][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[s2][WARNIN] DEBUG:ceph-disk:OSD uuid is 025957f8-448b-45bc-b461-e62c34943585
[s2][WARNIN] DEBUG:ceph-disk:OSD id is 0
[s2][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[s2][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.fFKTJr
[s2][WARNIN] INFO:ceph-disk:ceph osd.0 already mounted in position; unmounting ours.
[s2][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.fFKTJr
[s2][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.fFKTJr
[s2][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
[s2][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.0
[s2][DEBUG ] === osd.0 ===
[s2][DEBUG ] Starting Ceph osd.0 on s2...already running
[s2][INFO  ] checking OSD status...
[s2][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[s2][INFO  ] Running command: systemctl enable ceph
[s2][WARNIN] Synchronizing state for ceph.service with sysvinit using update-rc.d...
[s2][WARNIN] Executing /usr/sbin/update-rc.d ceph defaults
[s2][WARNIN] Executing /usr/sbin/update-rc.d ceph enable
root@h1:~# ceph osd tree
# id    weight  type name   up/down reweight
-1  10.23   root default
-2  8.14        host h1
1   0.9         osd.1   up  1
3   0.9         osd.3   up  1
4   0.9         osd.4   up  1
5   0.68        osd.5   up  1
6   0.68        osd.6   up  1
7   0.68        osd.7   up  1
8   0.68        osd.8   up  1
9   0.68        osd.9   up  1
10  0.68        osd.10  up  1
11  0.68        osd.11  up  1
12  0.68        osd.12  up  1
-3  0.45        host s3
2   0.45        osd.2   down    0
-5  1.64        host s1
14  0.29        osd.14  up  1
0   0.27        osd.0   up  1
15  0.27        osd.15  up  1
16  0.27        osd.16  up  1
17  0.27        osd.17  up  1
18  0.27        osd.18  up  1

So, s2 has not been added to the cluster.  :-(

Checking on s2:

root@s2:~# ceph osd tree
ID WEIGHT  TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.89999 root default
-2 0.89999     host s2
 0 0.89999         osd.0      up  1.00000          1.00000

s2 is running on its own!

Now, how to fix that?  What did I miss?
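My first guess at where to look (this is a hypothesis, not something verified in the session above): mon.s2 reports election_epoch 2, a one-entry monmap, and quorum [0], which is what a freshly bootstrapped single-mon cluster looks like. If s2's /etc/ceph/ceph.conf did not list the existing monitors, "ceph-deploy mon create" would have formed a brand-new cluster on s2, and the osd prepared afterwards would have joined that one instead of ours:

```bash
# Hedged diagnostic sketch: compare what each node thinks the cluster is.
# If the two fsids differ, s2's mon bootstrapped its own cluster.
ssh h1 ceph fsid
ssh s2 ceph fsid
# Also check that s2's config names the existing monitors, since
# ceph-deploy relies on mon_initial_members / mon_host to join them.
ssh s2 grep -E 'mon_initial_members|mon_host' /etc/ceph/ceph.conf
```

These commands need the live cluster and SSH access to both hosts, so they are only a sketch of where I would start looking.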