Install Ceph PoC on Debian
Goal: install a single-node Ceph on a Debian 11 VM.
Tested environment:
- OS Debian 11 (VM in Proxmox VE)
- 1 CPU
- 2 GB RAM
- 14 GB root (/) filesystem
- 2 GB Swap
Hostname: d11-cephadm
IP: 192.168.0.100
Network: 192.168.0.0/24
You should have an appropriate entry in /etc/hosts that points to the real IP address (not loopback), for example:
192.168.0.100 d11-cephadm.example.com d11-cephadm
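To verify that the name resolves to the real address (and not to a loopback entry), standard tools are enough:
getent hosts d11-cephadm
hostname -f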
After a lot of headache I decided to use the recommended cephadm from:
https://docs.ceph.com/en/quincy/cephadm/install/#single-host
TODO: Review and filter notes from the page above.
sudo apt install curl
cd
curl -fLOJ https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm
sudo ./cephadm add-repo --release quincy
sudo ./cephadm install
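To confirm that the packaged cephadm is now installed, a quick check (standard cephadm subcommand):
# should report a quincy release
cephadm version
which cephadm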
# now we use the packaged binary from /usr/sbin
# this performs the basic setup (MONitor etc...)
# TODO: try --single-host-defaults
sudo cephadm bootstrap --mon-ip 192.168.0.100
# this installs the `ceph` command so you don't need to use the container
sudo cephadm install ceph-common
sudo ceph -s
sudo ceph config ls
sudo ceph config get osd osd_pool_default_size
sudo ceph config get osd osd_pool_default_min_size
Adding disk /dev/sdb as an OSD:
sudo ceph orch device ls
# this did nothing here, so the OSD is added explicitly below:
sudo ceph orch apply osd --all-available-devices
sudo ceph orch device ls
sudo ceph orch daemon add osd d11-cephadm:/dev/sdb
sudo ceph orch host ls
sudo ceph -s
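To double-check that the new OSD is up and in, inspect the CRUSH tree:
sudo ceph osd tree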
Create CephFS (requires MDS):
sudo ceph osd pool create cephfs_data 8
sudo ceph osd pool create cephfs_metadata 8
sudo ceph fs new cephfs cephfs_metadata cephfs_data
sudo ceph fs ls
sudo ceph orch apply mds cephfs --placement=d11-cephadm
sudo ceph -s
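The MDS should report as active once deployed; a quick check with standard commands:
sudo ceph mds stat
sudo ceph fs status cephfs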
sudo apt-get install cephfs-shell
On a single-node cluster with one OSD, the default replica count of 3 can never be satisfied, so the cluster stays in HEALTH_WARN until pool size and min_size are reduced to 1:
sudo ceph -s
sudo ceph health
sudo ceph health detail
sudo ceph config set osd osd_pool_default_min_size 1
sudo ceph config set osd osd_pool_default_size 1
sudo ceph config get osd osd_pool_default_min_size
sudo ceph config get osd osd_pool_min_size
sudo ceph config get osd osd_pool_default_size
sudo ceph osd df
sudo ceph osd pool ls
sudo ceph osd pool set cephfs_data min_size 1
sudo ceph config get mon mon_allow_pool_size_one
sudo ceph config set mon mon_allow_pool_size_one true
# without the confirmation flag this is refused:
sudo ceph osd pool set cephfs_data size 1
sudo ceph osd pool set cephfs_data size 1 --yes-i-really-mean-it
sudo ceph osd pool set cephfs_metadata size 1 --yes-i-really-mean-it
sudo ceph osd pool set cephfs_metadata min_size 1
sudo ceph -s
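The effective per-pool replica settings can be verified with:
sudo ceph osd pool ls detail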
sudo cephfs-shell
ls
mkdir xxx
put /etc/hosts hosts
cat hosts
df
quit
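As an alternative to cephfs-shell, CephFS can be mounted with the kernel client. A minimal sketch, assuming the admin key is used and /mnt/cephfs is chosen as mount point (both picked here for illustration, not from the original setup):
# extract the admin secret created by bootstrap
sudo ceph auth get-key client.admin | sudo tee /tmp/admin.secret >/dev/null
sudo mkdir -p /mnt/cephfs
# kernel-client mount; the MON address is from this setup
sudo mount -t ceph 192.168.0.100:6789:/ /mnt/cephfs -o name=admin,secretfile=/tmp/admin.secret
df -h /mnt/cephfs
sudo umount /mnt/cephfs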
Create an RBD pool and image:
RBD_POOL=rbd1
sudo ceph osd pool create $RBD_POOL 8
sudo ceph osd pool set $RBD_POOL size 1 --yes-i-really-mean-it
sudo rbd pool init $RBD_POOL
sudo ceph osd pool stats
sudo rbd create --size 2048 $RBD_POOL/image1
sudo rbd info $RBD_POOL/image1
- TODO: how to use RBD from a client machine
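A minimal sketch for the TODO above, assuming the client is this very node (a remote client would additionally need ceph-common installed plus /etc/ceph/ceph.conf and a keyring copied over):
# map the image to a local block device (typically /dev/rbd0)
sudo rbd map rbd1/image1
sudo rbd showmapped
# if mapping fails because the kernel lacks some image features:
#   sudo rbd feature disable rbd1/image1 object-map fast-diff deep-flatten
# create a filesystem and mount it
sudo mkfs.ext4 /dev/rbd0
sudo mkdir -p /mnt/rbd1
sudo mount /dev/rbd0 /mnt/rbd1
df -h /mnt/rbd1
# cleanup
sudo umount /mnt/rbd1
sudo rbd unmap rbd1/image1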
Copyright © Henryk Paluch. All rights reserved.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License