- Steps I took to try to reinstall and add a node (host s2) to a Ceph cluster:
- Did a clean Debian Jessie installation.
- After some tests, ran "ceph-deploy purge s2" and then "ceph-deploy purgedata s2".
- No error reported.
- I then cleaned the crush map to remove references to the old s2 host from "ceph osd tree" (roughly the sequence sketched below)
- (see https://arvimal.wordpress.com/2015/05/07/how-to-remove-a-host-from-a-ceph-cluster/)
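- A rough sketch of that removal sequence (with <osd-id> as a placeholder for the old host's OSD ids; my exact invocations may have differed slightly):
- root@h1:~# ceph osd out <osd-id>
- root@h1:~# ceph osd crush remove osd.<osd-id>
- root@h1:~# ceph auth del osd.<osd-id>
- root@h1:~# ceph osd rm <osd-id>
- root@h1:~# ceph osd crush remove s2        # finally drop the now-empty host bucket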
- root@h1:~# ceph-deploy install --release hammer s2
- [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.31): /usr/bin/ceph-deploy install --release hammer s2
- [ceph_deploy.cli][INFO ] ceph-deploy options:
- [ceph_deploy.cli][INFO ] verbose : False
- [ceph_deploy.cli][INFO ] testing : None
- [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f00b20089e0>
- [ceph_deploy.cli][INFO ] cluster : ceph
- [ceph_deploy.cli][INFO ] dev_commit : None
- [ceph_deploy.cli][INFO ] install_mds : False
- [ceph_deploy.cli][INFO ] stable : None
- [ceph_deploy.cli][INFO ] default_release : False
- [ceph_deploy.cli][INFO ] username : None
- [ceph_deploy.cli][INFO ] adjust_repos : True
- [ceph_deploy.cli][INFO ] func : <function install at 0x7f00b28d2de8>
- [ceph_deploy.cli][INFO ] install_all : False
- [ceph_deploy.cli][INFO ] repo : False
- [ceph_deploy.cli][INFO ] host : ['s2']
- [ceph_deploy.cli][INFO ] install_rgw : False
- [ceph_deploy.cli][INFO ] install_tests : False
- [ceph_deploy.cli][INFO ] repo_url : None
- [ceph_deploy.cli][INFO ] ceph_conf : None
- [ceph_deploy.cli][INFO ] install_osd : False
- [ceph_deploy.cli][INFO ] version_kind : stable
- [ceph_deploy.cli][INFO ] install_common : False
- [ceph_deploy.cli][INFO ] overwrite_conf : False
- [ceph_deploy.cli][INFO ] quiet : False
- [ceph_deploy.cli][INFO ] dev : master
- [ceph_deploy.cli][INFO ] local_mirror : None
- [ceph_deploy.cli][INFO ] release : hammer
- [ceph_deploy.cli][INFO ] install_mon : False
- [ceph_deploy.cli][INFO ] gpg_url : None
- [ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts s2
- [ceph_deploy.install][DEBUG ] Detecting platform for host s2 ...
- [s2][DEBUG ] connected to host: s2
- [s2][DEBUG ] detect platform information from remote host
- [s2][DEBUG ] detect machine type
- [ceph_deploy.install][INFO ] Distro info: debian 8.4 jessie
- [s2][INFO ] installing Ceph on s2
- [s2][INFO ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
- [s2][DEBUG ] Reading package lists...
- [s2][DEBUG ] Building dependency tree...
- [s2][DEBUG ] Reading state information...
- [s2][DEBUG ] apt-transport-https is already the newest version.
- [s2][DEBUG ] ca-certificates is already the newest version.
- [s2][DEBUG ] The following packages were automatically installed and are no longer required:
- [s2][DEBUG ] cryptsetup-bin gdisk libaio1 libbabeltrace-ctf1 libbabeltrace1
- [s2][DEBUG ] libboost-program-options1.55.0 libboost-system1.55.0 libboost-thread1.55.0
- [s2][DEBUG ] libcephfs1 libfcgi0ldbl libgoogle-perftools4 libjs-jquery libleveldb1
- [s2][DEBUG ] liblttng-ust-ctl2 liblttng-ust0 libnspr4 libnss3 librados2 librbd1
- [s2][DEBUG ] libsnappy1 libtcmalloc-minimal4 libunwind8 liburcu2 python-cephfs
- [s2][DEBUG ] python-flask python-itsdangerous python-jinja2 python-markupsafe
- [s2][DEBUG ] python-rados python-rbd python-requests python-urllib3 python-werkzeug
- [s2][DEBUG ] sdparm xfsprogs
- [s2][DEBUG ] Use 'apt-get autoremove' to remove them.
- [s2][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
- [s2][INFO ] Running command: wget -O release.asc https://download.ceph.com/keys/release.asc
- [s2][WARNIN] --2016-04-27 19:35:21-- https://download.ceph.com/keys/release.asc
- [s2][WARNIN] Resolving download.ceph.com (download.ceph.com)... 173.236.253.173, 2607:f298:6050:51f3:f816:3eff:fe71:9135
- [s2][WARNIN] Connecting to download.ceph.com (download.ceph.com)|173.236.253.173|:443... connected.
- [s2][WARNIN] HTTP request sent, awaiting response... 200 OK
- [s2][WARNIN] Length: 1645 (1.6K) [application/octet-stream]
- [s2][WARNIN] Saving to: ‘release.asc’
- [s2][WARNIN]
- [s2][WARNIN] 0K . 100% 22.6M=0s
- [s2][WARNIN]
- [s2][WARNIN] 2016-04-27 19:35:22 (22.6 MB/s) - ‘release.asc’ saved [1645/1645]
- [s2][WARNIN]
- [s2][INFO ] Running command: apt-key add release.asc
- [s2][DEBUG ] OK
- [s2][DEBUG ] add deb repo to /etc/apt/sources.list.d/
- [s2][INFO ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
- [s2][DEBUG ] Ign http://debian.mirror.ac.za jessie InRelease
- [s2][DEBUG ] Hit http://debian.mirror.ac.za jessie Release.gpg
- [s2][DEBUG ] Hit http://debian.mirror.ac.za jessie Release
- [s2][DEBUG ] Hit http://security.debian.org jessie/updates InRelease
- [s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/main Sources
- [s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/non-free Sources
- [s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/main amd64 Packages
- [s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/non-free amd64 Packages
- [s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/main Translation-en
- [s2][DEBUG ] Hit http://debian.mirror.ac.za jessie/non-free Translation-en
- [s2][DEBUG ] Hit http://security.debian.org jessie/updates/main Sources
- [s2][DEBUG ] Hit http://security.debian.org jessie/updates/main amd64 Packages
- [s2][DEBUG ] Hit http://security.debian.org jessie/updates/main Translation-en
- [s2][DEBUG ] Hit https://download.ceph.com jessie InRelease
- [s2][DEBUG ] Hit https://download.ceph.com jessie/main amd64 Packages
- [s2][DEBUG ] Get:1 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
- [s2][DEBUG ] Get:2 https://download.ceph.com jessie/main Translation-en [177 B]
- [s2][DEBUG ] Get:3 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
- [s2][DEBUG ] Get:4 https://download.ceph.com jessie/main Translation-en [177 B]
- [s2][DEBUG ] Get:5 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
- [s2][DEBUG ] Get:6 https://download.ceph.com jessie/main Translation-en [177 B]
- [s2][DEBUG ] Get:7 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
- [s2][DEBUG ] Get:8 https://download.ceph.com jessie/main Translation-en [177 B]
- [s2][DEBUG ] Get:9 https://download.ceph.com jessie/main Translation-en_ZA [177 B]
- [s2][DEBUG ] Ign https://download.ceph.com jessie/main Translation-en_ZA
- [s2][DEBUG ] Get:10 https://download.ceph.com jessie/main Translation-en [177 B]
- [s2][DEBUG ] Ign https://download.ceph.com jessie/main Translation-en
- [s2][DEBUG ] Reading package lists...
- [s2][INFO ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph ceph-mds radosgw
- [s2][DEBUG ] Reading package lists...
- [s2][DEBUG ] Building dependency tree...
- [s2][DEBUG ] Reading state information...
- [s2][DEBUG ] The following extra packages will be installed:
- [s2][DEBUG ] ceph-common
- [s2][DEBUG ] Recommended packages:
- [s2][DEBUG ] btrfs-tools libradosstriper1 ceph-fs-common ceph-fuse
- [s2][DEBUG ] The following NEW packages will be installed:
- [s2][DEBUG ] ceph ceph-common ceph-mds radosgw
- [s2][DEBUG ] 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
- [s2][DEBUG ] Need to get 0 B/28.2 MB of archives.
- [s2][DEBUG ] After this operation, 133 MB of additional disk space will be used.
- [s2][DEBUG ] Selecting previously unselected package ceph-common.
- (Reading database ... 28166 files and directories currently installed.)
- [s2][DEBUG ] Preparing to unpack .../ceph-common_0.94.6-1~bpo80+1_amd64.deb ...
- [s2][DEBUG ] Unpacking ceph-common (0.94.6-1~bpo80+1) ...
- [s2][DEBUG ] Selecting previously unselected package ceph.
- [s2][DEBUG ] Preparing to unpack .../ceph_0.94.6-1~bpo80+1_amd64.deb ...
- [s2][DEBUG ] Unpacking ceph (0.94.6-1~bpo80+1) ...
- [s2][DEBUG ] Selecting previously unselected package ceph-mds.
- [s2][DEBUG ] Preparing to unpack .../ceph-mds_0.94.6-1~bpo80+1_amd64.deb ...
- [s2][DEBUG ] Unpacking ceph-mds (0.94.6-1~bpo80+1) ...
- [s2][DEBUG ] Selecting previously unselected package radosgw.
- [s2][DEBUG ] Preparing to unpack .../radosgw_0.94.6-1~bpo80+1_amd64.deb ...
- [s2][DEBUG ] Unpacking radosgw (0.94.6-1~bpo80+1) ...
- [s2][DEBUG ] Processing triggers for man-db (2.7.0.2-5) ...
- [s2][DEBUG ] Processing triggers for systemd (215-17+deb8u4) ...
- [s2][DEBUG ] Setting up ceph-common (0.94.6-1~bpo80+1) ...
- [s2][DEBUG ] Setting up ceph (0.94.6-1~bpo80+1) ...
- [s2][DEBUG ] Setting up ceph-mds (0.94.6-1~bpo80+1) ...
- [s2][DEBUG ] Setting up radosgw (0.94.6-1~bpo80+1) ...
- [s2][DEBUG ] Processing triggers for systemd (215-17+deb8u4) ...
- [s2][DEBUG ] Processing triggers for libc-bin (2.19-18+deb8u4) ...
- [s2][INFO ] Running command: ceph --version
- [s2][DEBUG ] ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
- root@h1:~# ceph-deploy mon create s2
- [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.31): /usr/bin/ceph-deploy mon create s2
- [ceph_deploy.cli][INFO ] ceph-deploy options:
- [ceph_deploy.cli][INFO ] username : None
- [ceph_deploy.cli][INFO ] verbose : False
- [ceph_deploy.cli][INFO ] overwrite_conf : False
- [ceph_deploy.cli][INFO ] subcommand : create
- [ceph_deploy.cli][INFO ] quiet : False
- [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5a182a35a8>
- [ceph_deploy.cli][INFO ] cluster : ceph
- [ceph_deploy.cli][INFO ] mon : ['s2']
- [ceph_deploy.cli][INFO ] func : <function mon at 0x7f5a18714668>
- [ceph_deploy.cli][INFO ] ceph_conf : None
- [ceph_deploy.cli][INFO ] default_release : False
- [ceph_deploy.cli][INFO ] keyrings : None
- [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts s2
- [ceph_deploy.mon][DEBUG ] detecting platform for host s2 ...
- [s2][DEBUG ] connected to host: s2
- [s2][DEBUG ] detect platform information from remote host
- [s2][DEBUG ] detect machine type
- [s2][DEBUG ] find the location of an executable
- [ceph_deploy.mon][INFO ] distro info: debian 8.4 jessie
- [s2][DEBUG ] determining if provided host has same hostname in remote
- [s2][DEBUG ] get remote short hostname
- [s2][DEBUG ] deploying mon to s2
- [s2][DEBUG ] get remote short hostname
- [s2][DEBUG ] remote hostname: s2
- [s2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [s2][DEBUG ] create the mon path if it does not exist
- [s2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-s2/done
- [s2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-s2/done
- [s2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-s2.mon.keyring
- [s2][DEBUG ] create the monitor keyring file
- [s2][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i s2 --keyring /var/lib/ceph/tmp/ceph-s2.mon.keyring
- [s2][DEBUG ] ceph-mon: mon.noname-a 192.168.121.32:6789/0 is local, renaming to mon.s2
- [s2][DEBUG ] ceph-mon: set fsid to d46f81c5-7a6d-4151-8fc2-f9899ae8d311
- [s2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-s2 for mon.s2
- [s2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-s2.mon.keyring
- [s2][DEBUG ] create a done file to avoid re-doing the mon deployment
- [s2][DEBUG ] create the init path if it does not exist
- [s2][DEBUG ] locating the `service` executable...
- [s2][INFO ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.s2
- [s2][DEBUG ] === mon.s2 ===
- [s2][DEBUG ] Starting Ceph mon.s2 on s2...
- [s2][WARNIN] Running as unit ceph-mon.s2.1461778779.597847648.service.
- [s2][DEBUG ] Starting ceph-create-keys on s2...
- [s2][INFO ] Running command: systemctl enable ceph
- [s2][WARNIN] Synchronizing state for ceph.service with sysvinit using update-rc.d...
- [s2][WARNIN] Executing /usr/sbin/update-rc.d ceph defaults
- [s2][WARNIN] Executing /usr/sbin/update-rc.d ceph enable
- [s2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s2.asok mon_status
- [s2][DEBUG ] ********************************************************************************
- [s2][DEBUG ] status for monitor: mon.s2
- [s2][DEBUG ] {
- [s2][DEBUG ] "election_epoch": 2,
- [s2][DEBUG ] "extra_probe_peers": [],
- [s2][DEBUG ] "monmap": {
- [s2][DEBUG ] "created": "0.000000",
- [s2][DEBUG ] "epoch": 1,
- [s2][DEBUG ] "fsid": "d46f81c5-7a6d-4151-8fc2-f9899ae8d311",
- [s2][DEBUG ] "modified": "0.000000",
- [s2][DEBUG ] "mons": [
- [s2][DEBUG ] {
- [s2][DEBUG ] "addr": "192.168.121.32:6789/0",
- [s2][DEBUG ] "name": "s2",
- [s2][DEBUG ] "rank": 0
- [s2][DEBUG ] }
- [s2][DEBUG ] ]
- [s2][DEBUG ] },
- [s2][DEBUG ] "name": "s2",
- [s2][DEBUG ] "outside_quorum": [],
- [s2][DEBUG ] "quorum": [
- [s2][DEBUG ] 0
- [s2][DEBUG ] ],
- [s2][DEBUG ] "rank": 0,
- [s2][DEBUG ] "state": "leader",
- [s2][DEBUG ] "sync_provider": []
- [s2][DEBUG ] }
- [s2][DEBUG ] ********************************************************************************
- [s2][INFO ] monitor: mon.s2 is running
- [s2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s2.asok mon_status
- root@h1:~# ceph-deploy gatherkeys s2
- [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.31): /usr/bin/ceph-deploy gatherkeys s2
- [ceph_deploy.cli][INFO ] ceph-deploy options:
- [ceph_deploy.cli][INFO ] username : None
- [ceph_deploy.cli][INFO ] verbose : False
- [ceph_deploy.cli][INFO ] overwrite_conf : False
- [ceph_deploy.cli][INFO ] quiet : False
- [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc60ffc9518>
- [ceph_deploy.cli][INFO ] cluster : ceph
- [ceph_deploy.cli][INFO ] mon : ['s2']
- [ceph_deploy.cli][INFO ] func : <function gatherkeys at 0x7fc61042d050>
- [ceph_deploy.cli][INFO ] ceph_conf : None
- [ceph_deploy.cli][INFO ] default_release : False
- [ceph_deploy.gatherkeys][DEBUG ] Have ceph.client.admin.keyring
- [ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
- [ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-osd.keyring
- [ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-mds.keyring
- [ceph_deploy.gatherkeys][DEBUG ] Checking s2 for /var/lib/ceph/bootstrap-rgw/ceph.keyring
- [s2][DEBUG ] connected to host: s2
- [s2][DEBUG ] detect platform information from remote host
- [s2][DEBUG ] detect machine type
- [s2][DEBUG ] fetch remote file
- [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-rgw.keyring key from s2.
- root@h1:~# ceph-deploy disk list s2
- [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.31): /usr/bin/ceph-deploy disk list s2
- [ceph_deploy.cli][INFO ] ceph-deploy options:
- [ceph_deploy.cli][INFO ] username : None
- [ceph_deploy.cli][INFO ] verbose : False
- [ceph_deploy.cli][INFO ] overwrite_conf : False
- [ceph_deploy.cli][INFO ] subcommand : list
- [ceph_deploy.cli][INFO ] quiet : False
- [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4df78d95f0>
- [ceph_deploy.cli][INFO ] cluster : ceph
- [ceph_deploy.cli][INFO ] func : <function disk at 0x7f4df78b6578>
- [ceph_deploy.cli][INFO ] ceph_conf : None
- [ceph_deploy.cli][INFO ] default_release : False
- [ceph_deploy.cli][INFO ] disk : [('s2', None, None)]
- [s2][DEBUG ] connected to host: s2
- [s2][DEBUG ] detect platform information from remote host
- [s2][DEBUG ] detect machine type
- [s2][DEBUG ] find the location of an executable
- [ceph_deploy.osd][INFO ] Distro info: debian 8.4 jessie
- [ceph_deploy.osd][DEBUG ] Listing disks on s2...
- [s2][DEBUG ] find the location of an executable
- [s2][INFO ] Running command: /usr/sbin/ceph-disk list
- [s2][DEBUG ] /dev/sda :
- [s2][DEBUG ] /dev/sda1 other, ext2, mounted on /boot
- [s2][DEBUG ] /dev/sda2 other, 0x5
- [s2][DEBUG ] /dev/sda5 other, LVM2_member
- [s2][DEBUG ] /dev/sdb other, unknown
- NOTE: so here /dev/sdb is unformatted and ready to be prepared as an OSD.
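- (Had /dev/sdb still carried partitions from the previous install, the usual first step would have been to wipe it; a sketch, not something I needed here:)
- root@h1:~# ceph-deploy disk zap s2:sdb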
- root@h1:~# ceph-deploy osd prepare s2:sdb
- [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.31): /usr/bin/ceph-deploy osd prepare s2:sdb
- [ceph_deploy.cli][INFO ] ceph-deploy options:
- [ceph_deploy.cli][INFO ] username : None
- [ceph_deploy.cli][INFO ] disk : [('s2', '/dev/sdb', None)]
- [ceph_deploy.cli][INFO ] dmcrypt : False
- [ceph_deploy.cli][INFO ] verbose : False
- [ceph_deploy.cli][INFO ] overwrite_conf : False
- [ceph_deploy.cli][INFO ] subcommand : prepare
- [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
- [ceph_deploy.cli][INFO ] quiet : False
- [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f15600a0fc8>
- [ceph_deploy.cli][INFO ] cluster : ceph
- [ceph_deploy.cli][INFO ] fs_type : xfs
- [ceph_deploy.cli][INFO ] func : <function osd at 0x7f1560078500>
- [ceph_deploy.cli][INFO ] ceph_conf : None
- [ceph_deploy.cli][INFO ] default_release : False
- [ceph_deploy.cli][INFO ] zap_disk : False
- [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks s2:/dev/sdb:
- [s2][DEBUG ] connected to host: s2
- [s2][DEBUG ] detect platform information from remote host
- [s2][DEBUG ] detect machine type
- [s2][DEBUG ] find the location of an executable
- [ceph_deploy.osd][INFO ] Distro info: debian 8.4 jessie
- [ceph_deploy.osd][DEBUG ] Deploying osd to s2
- [s2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [ceph_deploy.osd][DEBUG ] Preparing host s2 disk /dev/sdb journal None activate False
- [s2][INFO ] Running command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
- [s2][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
- [s2][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdb
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:80d33d45-f122-4e2a-be80-5dd88ff000ba --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
- [s2][DEBUG ] Setting name!
- [s2][DEBUG ] partNum is 1
- [s2][DEBUG ] REALLY setting name!
- [s2][DEBUG ] The operation has completed successfully.
- [s2][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdb
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
- [s2][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/80d33d45-f122-4e2a-be80-5dd88ff000ba
- [s2][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/80d33d45-f122-4e2a-be80-5dd88ff000ba
- [s2][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:025957f8-448b-45bc-b461-e62c34943585 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb
- [s2][DEBUG ] Setting name!
- [s2][DEBUG ] partNum is 0
- [s2][DEBUG ] REALLY setting name!
- [s2][DEBUG ] The operation has completed successfully.
- [s2][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sdb
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
- [s2][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
- [s2][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=60719917 blks
- [s2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
- [s2][DEBUG ] = crc=0 finobt=0
- [s2][DEBUG ] data = bsize=4096 blocks=242879665, imaxpct=25
- [s2][DEBUG ] = sunit=0 swidth=0 blks
- [s2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0
- [s2][DEBUG ] log =internal log bsize=4096 blocks=118593, version=2
- [s2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
- [s2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
- [s2][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.mtqv6i with options noatime,inode64
- [s2][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.mtqv6i
- [s2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.mtqv6i
- [s2][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.mtqv6i/journal -> /dev/disk/by-partuuid/80d33d45-f122-4e2a-be80-5dd88ff000ba
- [s2][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.mtqv6i
- [s2][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.mtqv6i
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
- [s2][DEBUG ] Warning: The kernel is still using the old partition table.
- [s2][DEBUG ] The new table will be used at the next reboot.
- [s2][DEBUG ] The operation has completed successfully.
- [s2][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdb
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
- [s2][INFO ] checking OSD status...
- [s2][INFO ] Running command: ceph --cluster=ceph osd stat --format=json
- [ceph_deploy.osd][DEBUG ] Host s2 is now ready for osd use.
- Just to check:
- root@h1:~# ceph-deploy disk list s2
- [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.31): /usr/bin/ceph-deploy disk list s2
- [ceph_deploy.cli][INFO ] ceph-deploy options:
- [ceph_deploy.cli][INFO ] username : None
- [ceph_deploy.cli][INFO ] verbose : False
- [ceph_deploy.cli][INFO ] overwrite_conf : False
- [ceph_deploy.cli][INFO ] subcommand : list
- [ceph_deploy.cli][INFO ] quiet : False
- [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f412c5a15f0>
- [ceph_deploy.cli][INFO ] cluster : ceph
- [ceph_deploy.cli][INFO ] func : <function disk at 0x7f412c57e578>
- [ceph_deploy.cli][INFO ] ceph_conf : None
- [ceph_deploy.cli][INFO ] default_release : False
- [ceph_deploy.cli][INFO ] disk : [('s2', None, None)]
- [s2][DEBUG ] connected to host: s2
- [s2][DEBUG ] detect platform information from remote host
- [s2][DEBUG ] detect machine type
- [s2][DEBUG ] find the location of an executable
- [ceph_deploy.osd][INFO ] Distro info: debian 8.4 jessie
- [ceph_deploy.osd][DEBUG ] Listing disks on s2...
- [s2][DEBUG ] find the location of an executable
- [s2][INFO ] Running command: /usr/sbin/ceph-disk list
- [s2][DEBUG ] /dev/sda :
- [s2][DEBUG ] /dev/sda1 other, ext2, mounted on /boot
- [s2][DEBUG ] /dev/sda2 other, 0x5
- [s2][DEBUG ] /dev/sda5 other, LVM2_member
- [s2][DEBUG ] /dev/sdb :
- [s2][DEBUG ] /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
- [s2][DEBUG ] /dev/sdb2 ceph journal, for /dev/sdb1
- So, /dev/sdb has been partitioned as expected.
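- (The same can be confirmed from s2 itself by inspecting the GPT entries; a sketch, output omitted:)
- root@s2:~# sgdisk -p /dev/sdb
- root@s2:~# sgdisk -i 1 /dev/sdb    # partition 1 should show up named "ceph data"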
- root@h1:~# ceph-deploy osd activate s2:/dev/sdb1
- [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.31): /usr/bin/ceph-deploy osd activate s2:/dev/sdb1
- [ceph_deploy.cli][INFO ] ceph-deploy options:
- [ceph_deploy.cli][INFO ] username : None
- [ceph_deploy.cli][INFO ] verbose : False
- [ceph_deploy.cli][INFO ] overwrite_conf : False
- [ceph_deploy.cli][INFO ] subcommand : activate
- [ceph_deploy.cli][INFO ] quiet : False
- [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fdef2922fc8>
- [ceph_deploy.cli][INFO ] cluster : ceph
- [ceph_deploy.cli][INFO ] func : <function osd at 0x7fdef28fa500>
- [ceph_deploy.cli][INFO ] ceph_conf : None
- [ceph_deploy.cli][INFO ] default_release : False
- [ceph_deploy.cli][INFO ] disk : [('s2', '/dev/sdb1', None)]
- [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks s2:/dev/sdb1:
- [s2][DEBUG ] connected to host: s2
- [s2][DEBUG ] detect platform information from remote host
- [s2][DEBUG ] detect machine type
- [s2][DEBUG ] find the location of an executable
- [ceph_deploy.osd][INFO ] Distro info: debian 8.4 jessie
- [ceph_deploy.osd][DEBUG ] activating host s2 disk /dev/sdb1
- [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
- [s2][INFO ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
- [s2][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
- [s2][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.fFKTJr with options noatime,inode64
- [s2][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.fFKTJr
- [s2][WARNIN] DEBUG:ceph-disk:Cluster uuid is d46f81c5-7a6d-4151-8fc2-f9899ae8d311
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
- [s2][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
- [s2][WARNIN] DEBUG:ceph-disk:OSD uuid is 025957f8-448b-45bc-b461-e62c34943585
- [s2][WARNIN] DEBUG:ceph-disk:OSD id is 0
- [s2][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
- [s2][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.fFKTJr
- [s2][WARNIN] INFO:ceph-disk:ceph osd.0 already mounted in position; unmounting ours.
- [s2][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.fFKTJr
- [s2][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.fFKTJr
- [s2][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
- [s2][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.0
- [s2][DEBUG ] === osd.0 ===
- [s2][DEBUG ] Starting Ceph osd.0 on s2...already running
- [s2][INFO ] checking OSD status...
- [s2][INFO ] Running command: ceph --cluster=ceph osd stat --format=json
- [s2][INFO ] Running command: systemctl enable ceph
- [s2][WARNIN] Synchronizing state for ceph.service with sysvinit using update-rc.d...
- [s2][WARNIN] Executing /usr/sbin/update-rc.d ceph defaults
- [s2][WARNIN] Executing /usr/sbin/update-rc.d ceph enable
- root@h1:~# ceph osd tree
- # id  weight  type name        up/down reweight
- -1    10.23   root default
- -2    8.14        host h1
-  1    0.9             osd.1    up      1
-  3    0.9             osd.3    up      1
-  4    0.9             osd.4    up      1
-  5    0.68            osd.5    up      1
-  6    0.68            osd.6    up      1
-  7    0.68            osd.7    up      1
-  8    0.68            osd.8    up      1
-  9    0.68            osd.9    up      1
- 10    0.68            osd.10   up      1
- 11    0.68            osd.11   up      1
- 12    0.68            osd.12   up      1
- -3    0.45        host s3
-  2    0.45            osd.2    down    0
- -5    1.64        host s1
- 14    0.29            osd.14   up      1
-  0    0.27            osd.0    up      1
- 15    0.27            osd.15   up      1
- 16    0.27            osd.16   up      1
- 17    0.27            osd.17   up      1
- 18    0.27            osd.18   up      1
- So, s2 has not been added to the cluster. :-(
- Checking on s2:
- root@s2:~# ceph osd tree
- ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
- -1 0.89999 root default
- -2 0.89999     host s2
-  0 0.89999         osd.0       up  1.00000          1.00000
- s2 is running on its own! (a couple of quick checks to confirm the split view are sketched below)
- Now, how to fix that? What did I miss?
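- (For reference, the checks that make the divergence visible; a sketch, run on the hosts indicated, output omitted:)
- root@h1:~# ceph mon dump                                                            # monmap as the existing cluster sees it
- root@s2:~# ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.s2.asok mon_status   # monmap as the new mon on s2 sees it
- root@h1:~# ceph -s
- root@s2:~# ceph -s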