How to remove an obsolete disk device on RHEL
Some time ago we noticed the following in a Relax-and-Recover (ReaR) log file:
2023-08-29 03:10:09.469653943 Including layout/save/GNU/Linux/200_partition_layout.sh
2023-08-29 03:10:09.516209226 Saving disk partitions.
Error: /dev/sdb: unrecognised disk label
Error: /dev/sdc: unrecognised disk label
blockdev: cannot open /dev/sde: No such device or address
Error: Error opening /dev/sde: No such device or address
2023-08-29 03:10:10.445270582 Including layout/save/GNU/Linux/210_raid_layout.sh
2023-08-29 03:10:10.471962608 Including layout/save/GNU/Linux/220_lvm_layout.sh
2023-08-29 03:10:10.485640640 Begin saving LVM layout ...
2023-08-29 03:10:12.211927471 End saving LVM layout
2023-08-29 03:10:12.240185970 Including layout/save/GNU/Linux/230_filesystem_layout.sh
Strange: a disk device /dev/sde that did not exist anymore? The "unrecognised disk label" errors for /dev/sdb and /dev/sdc are harmless (both disks are whole-disk LVM physical volumes, so they carry no partition table), but /dev/sde was a different story. It was a disk device created by Longhorn (an iSCSI device) that seemed to have become obsolete after a Kubernetes pod incident. For some reason it was not automatically deleted.
Therefore, we decided to remove it manually. Being cautious, however, we did some investigation first.
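As a quick preliminary check, the kernel state of every sd device can be listed in one go; a minimal sketch, assuming all suspect devices are plain SCSI disks under /sys/block:
#-> for d in /sys/block/sd*; do echo "${d##*/}: $(cat "$d"/device/state)"; done
Any device reporting something other than "running" deserves a closer look. On this system that alone would have flagged sde (see the state check further below), but we stepped through the individual checks anyway: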
#-> ll /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug 30 16:22 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug 30 16:22 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug 30 16:22 /dev/sda2
brw-rw---- 1 root disk 8, 16 Aug 30 16:22 /dev/sdb
brw-rw---- 1 root disk 8, 32 Aug 30 16:22 /dev/sdc
brw-rw---- 1 root disk 8, 48 Aug 30 16:22 /dev/sdd
brw-rw---- 1 root disk 8, 64 Aug 30 16:22 /dev/sde <<<<<
brw-rw---- 1 root disk 8, 80 Aug 30 16:22 /dev/sdf
#-> parted print /dev/sde
Error: Could not stat device print - No such file or directory.
Retry/Cancel? c
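As an aside, the arguments above are in the wrong order, which is why parted complains that it cannot stat a device called "print"; parted expects the device name before the subcommand:
#-> parted /dev/sde print
Even in the correct order the command could not have succeeded here; it would have failed with the same "No such device or address" error that parted already produced in the ReaR log above.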
#-> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 99.5G 0 part
├─vg00-lv_root 253:0 0 8G 0 lvm /
├─vg00-swap 253:1 0 4G 0 lvm
├─vg00-lv_usr 253:2 0 5G 0 lvm /usr
├─vg00-lv_home 253:5 0 5G 0 lvm /home
├─vg00-lv_tmp 253:6 0 4G 0 lvm /tmp
├─vg00-var 253:7 0 23G 0 lvm /var
├─vg00-lv_log 253:8 0 11G 0 lvm /var/log
├─vg00-lv_audit 253:9 0 6G 0 lvm /var/log/audit
├─vg00-lv_opt 253:10 0 5G 0 lvm /opt
├─vg00-lv_tanium 253:11 0 3G 0 lvm /opt/Tanium
└─vg00-lv_openv 253:12 0 7G 0 lvm /usr/openv
sdb 8:16 0 270G 0 disk
└─vg_util-lg_util 253:3 0 270G 0 lvm /app/util
sdc 8:32 0 540G 0 disk
└─vg_gtsc-lg_gtsc 253:4 0 540G 0 lvm /var/lib/rancher
sdd 8:48 0 8G 0 disk /var/lib/kubelet/pods/ddbe6983-8453-4000-acad-56288882c356/volumes/kubernetes.io~csi/pvc-d2c37891-8478-418e-a108-31cb50034027/mount
sde 8:64 0 140G 0 disk <<<<<<<<<<<<<
sdf 8:80 0 10G 0 disk /var/lib/kubelet/pods/bd9f0658-6523-4bbf-84ef-258c20fd4300/volumes/kubernetes.io~csi/pvc-d4722a3c-abf3-4602-94f2-97c083ad9ce5/mount
#-> cat /sys/block/sde/device/state
transport-offline
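The state transport-offline tells us that the iSCSI transport behind the device is gone and only a stale entry is left in the kernel, so the device can simply be deleted from the SCSI subsystem via sysfs: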
#-> echo 1 > /sys/block/sde/device/delete
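Deleting the sysfs entry is only the last step of the more general procedure from the Red Hat knowledge article [1]. For a disk that is still alive, one would first stop all users of the device and flush outstanding I/O; roughly (the mount point and volume group name below are placeholders):
#-> umount /the/mountpoint
#-> vgchange -an vg_name
#-> blockdev --flushbufs /dev/sde
#-> echo 1 > /sys/block/sde/device/delete
In our case the transport was already offline and nothing was mounted from the device, so the delete alone was sufficient.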
#-> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 99.5G 0 part
├─vg00-lv_root 253:0 0 8G 0 lvm /
├─vg00-swap 253:1 0 4G 0 lvm
├─vg00-lv_usr 253:2 0 5G 0 lvm /usr
├─vg00-lv_home 253:5 0 5G 0 lvm /home
├─vg00-lv_tmp 253:6 0 4G 0 lvm /tmp
├─vg00-var 253:7 0 23G 0 lvm /var
├─vg00-lv_log 253:8 0 11G 0 lvm /var/log
├─vg00-lv_audit 253:9 0 6G 0 lvm /var/log/audit
├─vg00-lv_opt 253:10 0 5G 0 lvm /opt
├─vg00-lv_tanium 253:11 0 3G 0 lvm /opt/Tanium
└─vg00-lv_openv 253:12 0 7G 0 lvm /usr/openv
sdb 8:16 0 270G 0 disk
└─vg_util-lg_util 253:3 0 270G 0 lvm /app/util
sdc 8:32 0 540G 0 disk
└─vg_gtsc-lg_gtsc 253:4 0 540G 0 lvm /var/lib/rancher
sdd 8:48 0 8G 0 disk /var/lib/kubelet/pods/ddbe6983-8453-4000-acad-56288882c356/volumes/kubernetes.io~csi/pvc-d2c37891-8478-418e-a108-31cb50034027/mount
sdf 8:80 0 10G 0 disk /var/lib/kubelet/pods/bd9f0658-6523-4bbf-84ef-258c20fd4300/volumes/kubernetes.io~csi/pvc-d4722a3c-abf3-4602-94f2-97c083ad9ce5/mount
#-> ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb /dev/sdc /dev/sdd /dev/sdf
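As a final verification one could regenerate the ReaR disk layout; a quick way, assuming a default ReaR setup, is:
#-> rear -v savelayout
after which the "Saving disk partitions" step should run without the /dev/sde errors.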
That was it - not that difficult ;-)
References
[1] Red Hat knowledge article: Removing a storage device