Cephadm

Cephadm and Ceph orch

Documentation

You can find the official documentation here: https://docs.ceph.com/en/latest/cephadm/

Usage

Bootstrap Cluster

$ sudo cephadm bootstrap --mon-ip <mon-ip> --cluster-network <cluster_net> --ssh-user <user>
$ sudo cephadm bootstrap --mon-ip 192.168.0.10  --ssh-user vagrant
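
For example, a bootstrap that also sets a hypothetical cluster network of 192.168.1.0/24 in addition to the monitor IP:

$ sudo cephadm bootstrap --mon-ip 192.168.0.10 --cluster-network 192.168.1.0/24 --ssh-user vagrant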

We need to deploy Ceph's SSH key on the cluster's hosts

$ ssh-copy-id -f -i /etc/ceph/ceph.pub <user>@<new-host>
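
For example, assuming the bootstrap user vagrant and a new host named ceph-node2; the host can then be registered with the orchestrator from the cephadm shell:

$ ssh-copy-id -f -i /etc/ceph/ceph.pub vagrant@ceph-node2
# ceph orch host add ceph-node2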

Open a shell into your cluster

$ sudo cephadm shell --fsid 254961fc-6b7b-11eb-adbf-ab1c4f35b52e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Configure network and monitors

Set the public network in the centralized configuration

# ceph config set mon public_network <mon-cidr-network>
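
For example, assuming a public network of 192.168.0.0/24 matching the monitor IP used at bootstrap:

# ceph config set mon public_network 192.168.0.0/24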

Set the number of monitors in your cluster and deploy them

# ceph orch apply mon <number-of-monitors>
# ceph orch apply mon <host1,host2,host3,...>
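
For example, three monitors, or monitors pinned to three hypothetical hosts:

# ceph orch apply mon 3
# ceph orch apply mon ceph-node1,ceph-node2,ceph-node3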

Another way to deploy monitors is to label hosts and then deploy monitors on every host that has the label

# ceph orch host label add <hostname> mon
# ceph orch host ls
# ceph orch apply mon label:mon
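
For example, labelling a hypothetical host ceph-node1 before applying the label-based placement:

# ceph orch host label add ceph-node1 mon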

Adding OSDs

Adding all available devices in your cluster

# ceph orch device ls
# ceph orch apply osd --all-available-devices

Adding one device

# ceph orch daemon add osd <host>:<device-path>
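
For example, an assumed spare device /dev/sdb on host ceph-node2:

# ceph orch daemon add osd ceph-node2:/dev/sdb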

Add a filesystem

# ceph fs volume create <fs_name> --placement="<placement>"
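
For example, a volume named cephfs with three MDS daemons spread over hypothetical hosts:

# ceph fs volume create cephfs --placement="3 ceph-node1 ceph-node2 ceph-node3"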

Add RGW

# ceph orch apply rgw <realm-name> <zone-name> --placement="<num-daemons> [<host>...]"
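
For example, two RGW daemons for an assumed realm myrealm and zone myzone:

# ceph orch apply rgw myrealm myzone --placement="2 ceph-node1 ceph-node2"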

Replace OSDs

Make sure it is safe to destroy the OSD:

# while ! ceph osd safe-to-destroy osd.{id} ; do sleep 10 ; done

Destroy the OSD first:

# ceph osd destroy {id} --yes-i-really-mean-it
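
For example, for a hypothetical OSD with id 1:

# while ! ceph osd safe-to-destroy osd.1 ; do sleep 10 ; done
# ceph osd destroy 1 --yes-i-really-mean-it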

Zap the disk for the new OSD if it was previously used for other purposes; this is not necessary for a brand-new disk. This command must be run on the node that hosts the OSD.

# cephadm ceph-volume lvm zap /dev/sdX

Add the OSD back with the ceph orch command

# ceph orch daemon add osd <host>:<device-path>
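
For example, on a hypothetical host ceph-node2 where the disk /dev/sdb was just zapped:

# ceph orch daemon add osd ceph-node2:/dev/sdb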