The MON daemons play a critical role in a Ceph cluster, so after deployment you must confirm that their status is healthy before going any further.
Check the overall cluster status with ceph -s:
# ceph -s
    cluster 27d39faa-48ae-4356-a8e3-19d5b81e179e
     health HEALTH_OK
     monmap e11: 3 mons at {server-61.0.lg.ustack.in=10.1.0.61:6789/0,server-62.0.lg.ustack.in=10.1.0.62:6789/0,server-63.0.lg.ustack.in=10.1.0.63:6789/0}, election epoch 2198, quorum 0,1,2 server-61.0.lg.ustack.in,server-62.0.lg.ustack.in,server-63.0.lg.ustack.in
     osdmap e206790: 234 osds: 234 up, 234 in
      pgmap v63915540: 12352 pgs, 3 pools, 31353 GB data, 2214 kobjects
            95480 GB used, 236 TB / 329 TB avail
               12352 active+clean
  client io 135 MB/s rd, 94498 kB/s wr, 19738 op/s
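When wiring this check into a deployment script, it is simpler to test ceph health than to parse the full ceph -s output. A minimal sketch; the health strings (HEALTH_OK/HEALTH_WARN/HEALTH_ERR) are standard, but the surrounding error handling is illustrative:

status=$(ceph health)
if [ "$status" != "HEALTH_OK" ]; then
    # Print the detailed reason before failing.
    echo "cluster is not healthy: $status" >&2
    ceph health detail >&2
    exit 1
fi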
Then inspect the MON status with ceph mon_status | python -mjson.tool, and verify that the quorum contains the configured number of monitors and that each MON's address matches what you set:
# ceph mon_status | python -mjson.tool
{
    "election_epoch": 170,
    "extra_probe_peers": [],
    "monmap": {
        "created": "0.000000",
        "epoch": 1,
        "fsid": "27d39faa-48ae-4356-a8e3-19d5b81e179e",
        "modified": "0.000000",
        "mons": [
            {
                "addr": "10.3.0.61:6789/0",
                "name": "server-61.0.zhsh.ustack.in",
                "rank": 0
            },
            {
                "addr": "10.3.0.62:6789/0",
                "name": "server-62.0.zhsh.ustack.in",
                "rank": 1
            },
            {
                "addr": "10.3.0.63:6789/0",
                "name": "server-63.0.zhsh.ustack.in",
                "rank": 2
            }
        ]
    },
    "name": "server-62.0.zhsh.ustack.in",
    "outside_quorum": [],
    "quorum": [
        0,
        1,
        2
    ],
    "rank": 1,
    "state": "peon",
    "sync_provider": []
}
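To automate the quorum check, the same JSON can be parsed directly. A minimal sketch, where the expected count of 3 is an assumption matching the deployment above:

expected=3
in_quorum=$(ceph mon_status | python -c 'import json,sys; print(len(json.load(sys.stdin)["quorum"]))')
if [ "$in_quorum" -ne "$expected" ]; then
    echo "only $in_quorum of $expected mons are in quorum" >&2
    exit 1
fi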
Next, verify the OSDs. First, check that all of them have come up, by running ceph osd stat:
# ceph osd stat
osdmap e137: 27 osds: 27 up, 27 in
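The three counts (total OSDs, up, in) must all be equal. A minimal sketch for asserting this in a script, assuming the text format shown above (field positions can differ between Ceph releases):

# Parse "osdmap e137: 27 osds: 27 up, 27 in" and compare the three counts.
ceph osd stat | awk '{ gsub(/,/, ""); if ($3 != $5 || $3 != $7) { print "not all OSDs are up/in: " $0; exit 1 } }'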
Second, check that all the PGs are clean, using ceph pg stat:
# ceph pg stat
v71203: 1152 pgs: 1152 active+clean; 3514 MB data, 11845 MB used, 13668 GB / 13680 GB avail
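If the counts do not all read active+clean, ceph pg dump_stuck lists the problem PGs. A minimal sketch (the grep pattern simply counts PG lines, which start with a numeric pool id):

stuck=$(ceph pg dump_stuck unclean 2>/dev/null | grep -c '^[0-9]')
if [ "$stuck" -gt 0 ]; then
    echo "$stuck PGs are stuck unclean" >&2
    ceph pg dump_stuck unclean >&2
fi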
Then check that the OSD network addresses are configured as intended, by running ceph osd dump | awk '{print $1,$14,$15,$16,$17}'. The four address columns are, in order, each OSD's public address, cluster address, heartbeat-back address, and heartbeat-front address:
# ceph osd dump | awk '{print $1,$14,$15,$16,$17}'
osd.0 172.28.162.69:6800/2142 172.28.162.69:6801/2142 172.28.162.69:6802/2142 172.28.162.69:6803/2142
osd.1 172.28.162.69:6810/2873 172.28.162.69:6811/2873 172.28.162.69:6812/2873 172.28.162.69:6813/2873
osd.2 172.28.162.69:6805/2521 172.28.162.69:6806/2521 172.28.162.69:6807/2521 172.28.162.69:6808/2521
osd.3 172.28.162.71:6800/30554 172.28.162.71:6801/30554 172.28.162.71:6802/30554 172.28.162.71:6803/30554
osd.4 172.28.162.71:6805/30914 172.28.162.71:6806/30914 172.28.162.71:6807/30914 172.28.162.71:6808/30914
osd.5 172.28.162.70:6805/5649 172.28.162.70:6806/5649 172.28.162.70:6807/5649 172.28.162.70:6808/5649
osd.6 172.28.162.71:6810/31369 172.28.162.71:6811/31369 172.28.162.71:6812/31369 172.28.162.71:6813/31369
osd.7 172.28.162.70:6810/6178 172.28.162.70:6811/6178 172.28.162.70:6812/6178 172.28.162.70:6813/6178
osd.8 172.28.162.70:6800/5236 172.28.162.70:6801/5236 172.28.162.70:6802/5236 172.28.162.70:6803/5236
osd.9 172.28.162.206:6800/11064 172.28.162.206:6801/11064 172.28.162.206:6802/11064 172.28.162.206:6803/11064
osd.10 172.28.162.206:6820/12943 172.28.162.206:6821/12943 172.28.162.206:6822/12943 172.28.162.206:6823/12943
osd.11 172.28.162.206:6810/11931 172.28.162.206:6811/11931 172.28.162.206:6812/11931 172.28.162.206:6813/11931
osd.12 172.28.162.206:6815/12456 172.28.162.206:6816/12456 172.28.162.206:6817/12456 172.28.162.206:6818/12456
osd.13 172.28.162.206:6825/13486 172.28.162.206:6826/13486 172.28.162.206:6827/13486 172.28.162.206:6828/13486
osd.14 172.28.162.204:6810/28559 172.28.162.204:6811/28559 172.28.162.204:6812/28559 172.28.162.204:6813/28559
osd.15 172.28.162.204:6805/28031 172.28.162.204:6806/28031 172.28.162.204:6807/28031 172.28.162.204:6808/28031
osd.16 172.28.162.204:6815/29035 172.28.162.204:6816/29035 172.28.162.204:6817/29035 172.28.162.204:6818/29035
osd.17 172.28.162.204:6820/29434 172.28.162.204:6821/29434 172.28.162.204:6822/29434 172.28.162.204:6823/29434
osd.18 172.28.162.204:6800/27766 172.28.162.204:6801/27766 172.28.162.204:6802/27766 172.28.162.204:6803/27766
osd.19 172.28.162.204:6825/30030 172.28.162.204:6826/30030 172.28.162.204:6827/30030 172.28.162.204:6828/30030
osd.20 172.28.162.206:6805/11425 172.28.162.206:6806/11425 172.28.162.206:6807/11425 172.28.162.206:6808/11425
osd.21 172.28.162.205:6820/19850 172.28.162.205:6821/19850 172.28.162.205:6822/19850 172.28.162.205:6823/19850
osd.22 172.28.162.205:6805/18390 172.28.162.205:6806/18390 172.28.162.205:6807/18390 172.28.162.205:6808/18390
osd.23 172.28.162.205:6800/17949 172.28.162.205:6801/17949 172.28.162.205:6802/17949 172.28.162.205:6803/17949
osd.24 172.28.162.205:6810/18837 172.28.162.205:6811/18837 172.28.162.205:6812/18837 172.28.162.205:6813/18837
osd.25 172.28.162.205:6815/19364 172.28.162.205:6816/19364 172.28.162.205:6817/19364 172.28.162.205:6818/19364
osd.26 172.28.162.205:6825/20357 172.28.162.205:6826/20357 172.28.162.205:6827/20357 172.28.162.205:6828/20357
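In this example all four addresses live on a single network. On a deployment with a dedicated cluster network, the cluster address (column 15) should fall inside that network. A minimal sketch, where the 172.28.162.0/24 prefix is purely an illustrative assumption to replace with your own cluster network:

# Print any OSD whose cluster address is outside the expected cluster network.
ceph osd dump | awk '$1 ~ /^osd\./ && $15 !~ /^172\.28\.162\./ {print $1, $15}'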
The MDS state appears in ceph -s only after a CephFS filesystem has been created, so do not be surprised if ceph -s shows nothing about the MDS right after starting it.
The MDS state can be checked with the command below. Here the active MDS is the one on server-68; a single-active configuration with standbys is currently the recommended setup:
# ceph mds stat
e6: 1/1/1 up {0=server-68=up:active}, 1 up:standby
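A minimal sketch for asserting that exactly one MDS is active, by counting up:active occurrences in the text output above:

active=$(ceph mds stat | grep -o 'up:active' | wc -l)
if [ "$active" -ne 1 ]; then
    echo "expected 1 active MDS, found $active" >&2
fi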
For more detail, use ceph mds dump:
# ceph mds dump
dumped mdsmap epoch 6
epoch 6
flags 0
created 2016-06-26 11:44:54.927583
modified 2016-06-26 12:03:06.087873
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table}
max_mds 1
in 0
up {0=4468}
failed
stopped
data_pools 22
metadata_pool 21
inline_data disabled
4560: 10.0.3.69:6812/48301 'server-69' mds.-1.0 up:standby seq 1
4468: 10.0.3.68:6812/51658 'server-68' mds.0.1 up:active seq 300
Here you can see more information, such as session_timeout, the data and metadata pool IDs, and the MDS addresses.
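The dump reports pools by ID only (data_pools 22 and metadata_pool 21 above); the standard commands below map those IDs back to names and usage:

ceph osd lspools    # lists every pool as an "id name" pair
ceph df             # shows per-pool usage, keyed by pool name and id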
Finally, check the RADOS Gateway service with /etc/init.d/ceph-radosgw status.
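Beyond the init script, a quick liveness probe is an unauthenticated HTTP request to the gateway, which should return an XML ListAllMyBucketsResult for the anonymous user. A minimal sketch, where localhost and port 7480 (the civetweb default) are assumptions that depend on the configured rgw frontend:

curl -sf http://localhost:7480/ >/dev/null && echo "rgw is answering" || echo "rgw is not responding" >&2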