cortes_

ceph-debug-1

Jan 15th, 2025 (edited)
ceph orch device ls | grep rh-ceph4

rh-ceph4  /dev/nvme1n1  ssd   Amazon_Elastic_Block_Store_vol056d443058a938bce         200G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
rh-ceph4  /dev/nvme2n1  ssd   Amazon_Elastic_Block_Store_vol0e32f2adf727decff         100G  No         2m ago     Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
rh-ceph4  /dev/nvme3n1  ssd   Amazon_EC2_NVMe_Instance_Storage_AWS23F16CD4B4C8D6D0C   139G  Yes        2m ago
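As a quick sanity check on the listing above, the one device cephadm still considers available can be pulled out of the table with awk. This is a minimal offline sketch: the rows are copied verbatim from the output above into a temp file, and column 6 is assumed to be the AVAILABLE (Yes/No) field.

```shell
# Rows copied from the `ceph orch device ls` output above (reject reasons trimmed).
cat <<'EOF' > devices.txt
rh-ceph4  /dev/nvme1n1  ssd   Amazon_Elastic_Block_Store_vol056d443058a938bce         200G  No
rh-ceph4  /dev/nvme2n1  ssd   Amazon_Elastic_Block_Store_vol0e32f2adf727decff         100G  No
rh-ceph4  /dev/nvme3n1  ssd   Amazon_EC2_NVMe_Instance_Storage_AWS23F16CD4B4C8D6D0C   139G  Yes
EOF

# Print the device path (column 2) for rows whose AVAILABLE column is Yes.
awk '$6 == "Yes" {print $2}' devices.txt   # -> /dev/nvme3n1
```

Only the instance-store NVMe is available; the two EBS volumes are rejected because they already carry a filesystem and LVM metadata.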

ceph orch ps --daemon_type osd
NAME   HOST      PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION                IMAGE ID      CONTAINER ID
osd.0  rh-ceph1         running (13m)     3m ago   7w    74.8M    4096M  19.3.0-6177-g57904001  eeb6b3eeb312  a4b2eaafb04e
osd.1  rh-ceph2         running (13m)     3m ago   7w    74.4M    1867M  19.3.0-6177-g57904001  eeb6b3eeb312  e208262f150b
osd.2  rh-ceph3         running (13m)     3m ago   7w    72.8M    1867M  19.3.0-6177-g57904001  eeb6b3eeb312  f2b2be53da81
osd.3  rh-ceph4         running (13m)     3m ago   7w    61.0M    4257M  19.3.0-6177-g57904001  eeb6b3eeb312  54e9a45b0779
osd.4  rh-ceph1         running (13m)     3m ago   7w    71.9M    4096M  19.3.0-6177-g57904001  eeb6b3eeb312  de7bb0a72f58
osd.5  rh-ceph2         running (13m)     3m ago   7w    66.3M    1867M  19.3.0-6177-g57904001  eeb6b3eeb312  67f2de77988d
osd.6  rh-ceph3         running (13m)     3m ago   7w    67.3M    1867M  19.3.0-6177-g57904001  eeb6b3eeb312  84a046ff09f2
osd.7  rh-ceph4         running (13m)     3m ago   7w    52.9M    4257M  19.3.0-6177-g57904001  eeb6b3eeb312  50882409a79d
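Since rh-ceph4 is the host whose devices were inspected above, it can help to isolate which OSD daemons live there. A small offline sketch, with only the NAME and HOST columns copied from the `ceph orch ps` output above:

```shell
# NAME and HOST columns from the `ceph orch ps --daemon_type osd` output above.
cat <<'EOF' > osd_hosts.txt
osd.0 rh-ceph1
osd.1 rh-ceph2
osd.2 rh-ceph3
osd.3 rh-ceph4
osd.4 rh-ceph1
osd.5 rh-ceph2
osd.6 rh-ceph3
osd.7 rh-ceph4
EOF

# OSDs running on rh-ceph4 -> osd.3 and osd.7
awk '$2 == "rh-ceph4" {print $1}' osd_hosts.txt
```

If the "2 osds(s) are not reachable" warning below points at this host, osd.3 and osd.7 are the candidates to check.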

ceph config dump | grep public_network
global                 advanced  public_network                         10.0.0.0/8                                                                                         *
mon                    advanced  public_network                         10.0.0.0/8

ceph -s
  cluster:
    id:     06eaef42-ab18-11ef-90ec-020006852e3d
    health: HEALTH_ERR
            2 osds(s) are not reachable
            Degraded data redundancy: 49/220 objects degraded (22.273%), 37 pgs degraded, 256 pgs undersized
            263 pgs not deep-scrubbed in time
            258 pgs not scrubbed in time
  services:
    mon: 3 daemons, quorum rh-ceph1,rh-ceph2,rh-ceph3 (age 16m)
    mgr: rh-ceph1.hpyswc(active, since 16m), standbys: rh-ceph3.emmlnm, rh-ceph2.pvhskl
    osd: 8 osds: 8 up (since 16m), 8 in (since 7w)
  data:
    pools:   4 pools, 289 pgs
    objects: 57 objects, 116 MiB
    usage:   620 MiB used, 1.2 TiB / 1.2 TiB avail
    pgs:     49/220 objects degraded (22.273%)
             219 active+undersized
             37  active+undersized+degraded
             33  active+clean
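The numbers in the status above are internally consistent, which can be cross-checked with plain shell arithmetic (nothing here talks to the cluster; all figures are copied from the `ceph -s` output above):

```shell
# 49 of 220 object replicas degraded -> 22.273%, matching the health line.
awk 'BEGIN { printf "%.3f%%\n", 49 / 220 * 100 }'

# The PG state counts should sum to the pool total of 289 PGs.
echo $(( 219 + 37 + 33 ))   # active+undersized + undersized+degraded + clean

# Undersized PGs: 219 + 37 = 256, matching "256 pgs undersized".
echo $(( 219 + 37 ))
```

So the accounting adds up; the real anomaly is that health reports two unreachable OSDs while the service line shows all 8 up, which points at the 10.0.0.0/8 public_network setting versus the addresses the OSDs actually registered.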