Have you ever deployed containerized Ceph and then, for some reason, needed to zap an individual OSD? Well, so have I, and below is everything you'll need to accomplish it. I took the reference from the purge-docker.yml playbook and slimmed it down to perform just this task.
- Create the following playbook (saved here as playbook.yml):
---
- name: Zap OSDs from cluster
  hosts: <target_host>          # replace <target_host> with an individual node or 'all'
  vars:
    container_binary: docker    # podman for rhel/centos 8
  vars_files:
    - group_vars/all.yml
    - group_vars/osds.yml
  tasks:
    - name: zap and destroy osds created by ceph-volume with lvm_volumes
      ceph_volume:
        data: "{{ item.data }}"
        data_vg: "{{ item.data_vg|default(omit) }}"
        journal: "{{ item.journal|default(omit) }}"
        journal_vg: "{{ item.journal_vg|default(omit) }}"
        db: "{{ item.db|default(omit) }}"
        db_vg: "{{ item.db_vg|default(omit) }}"
        wal: "{{ item.wal|default(omit) }}"
        wal_vg: "{{ item.wal_vg|default(omit) }}"
        action: "zap"
      environment:
        CEPH_VOLUME_DEBUG: 1
        CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
        CEPH_CONTAINER_BINARY: "{{ container_binary }}"
      with_items: "{{ lvm_volumes | default([]) }}"
      when: lvm_volumes | default([]) | length > 0

    - name: zap and destroy osds created by ceph-volume with devices
      ceph_volume:
        data: "{{ item }}"
        action: "zap"
      environment:
        CEPH_VOLUME_DEBUG: 1
        CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
        CEPH_CONTAINER_BINARY: "{{ container_binary }}"
      loop: "{{ devices | default([]) }}"
      when: devices | default([]) | length > 0
...
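For reference, the zap action ultimately drives ceph-volume inside the container image named by CEPH_CONTAINER_IMAGE. On a non-containerized node the equivalent manual command would look roughly like this (the device path is only an example, and --destroy wipes the underlying LVM metadata and data, so use it with care):

# Destructive: wipes the given device so it can be reused for a new OSD
ceph-volume lvm zap --destroy /dev/sdb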
Note: you will also need to update the devices loop items unless you want to zap all disks listed in osds.yml; the same holds true for lvm_volumes. I recommend using host_vars if each node has its own disk layout, as sketched below.
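For example, a per-node host_vars file along these lines keeps the zap scoped to specific disks (the filename, device paths, and LVM names are placeholders, not taken from a real cluster):

# host_vars/<target_host>.yml -- hypothetical example; adjust to your node's layout
devices:
  - /dev/sdb
  - /dev/sdc
lvm_volumes:
  - data: data-lv1      # hypothetical logical volume
    data_vg: vg-osd-1   # hypothetical volume group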
- Run the playbook, but only after making sure that hosts has been updated in the playbook:
ansible-playbook playbook.yml
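If you'd rather not edit hosts in the playbook for every run, another option is to keep hosts broad and constrain execution with Ansible's --limit flag (the node name here is just an example):

ansible-playbook playbook.yml --limit osd-node-1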