LVM raid1 recovery: mount disk on another Linux system (Debian Buster)
%%{ init: { 'theme': 'base', 'themeVariables': { 'primaryColor': '#3ed72b', 'primaryTextColor': '#000', 'primaryBorderColor': '#000', 'lineColor': '#fff', 'secondaryColor': '#e6f01b', 'tertiaryColor': '#fff' } } }%%
flowchart LR
    disk1 ---|LVM raid1| disk2
Overview
- LVM
- Raid1
- remove missing partial disk of raid1
- rename volume group (VG)
- mount
- recover
scenario
In my scenario, I was using two 2TB SSD disks in LVM raid1 mode.
My system was Debian Buster (10).
My mainboard crashed and I was not able to boot the system. My OS was installed on a separate 240GB SSD.
So I had no way to get at the data on my LVM software raid1 built from the two 2TB SSDs.
- 240GB SSD # OS only
- 2TB LVM raid1 disk1 # data
- 2TB LVM raid1 disk2 # data
Here is my solution for recovering data from a single disk out of an LVM raid1 mirror.
remove disk from damaged hardware
Remove one of the two SSDs so you can connect it with a USB-SSD adapter (Sabrent) to another system. In my case I was working with Debian Buster (10).
connect disk to another working system
Connect the disk via the USB-SSD adapter to another working machine; in my case, Debian Buster (10).
get info from connected SSD via USB
fdisk -l
info: all of these commands produce output with information about the disk, volume group, physical volumes, VG name and PV name
The pvdisplay and vgdisplay output should show a WARNING in the first line:
WARNING: Couldn't find device with uuid kGfIXi-i1m2-7pqw-5Aqb-mH2U-Q20f-YMrg1M.
info: this is expected and OK, because the partial disk (the second disk of the mirror) cannot be found
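For reference, this is the minimal inspection sequence I mean above; all of these tools ship with the stock lvm2 package on Debian Buster (run as root or via sudo):

fdisk -l      # list all disks and partitions
pvdisplay     # show physical volumes (PV name, PV UUID)
vgdisplay     # show volume groups (VG name, VG UUID)
lvdisplay     # show logical volumes inside the VGs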
mount lvm raid1 partial disk
First, scan all connected disks for LVM logical volumes with the LVM tool lvscan.
lvscan
output:
ACTIVE '/dev/VG01_STORAGE_RAID1/LV01_STORAGE_RAID1' [<1,82 TiB] inherit
info: in my scenario, the volume group is named VG01_STORAGE_RAID1 and the logical volume LV01_STORAGE_RAID1. This half of the mirror is not mountable until the missing disk (the second disk of the raid1) has been removed from the volume group.
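Not part of my original notes, but if you want to double-check which physical devices back the logical volume (including the hidden raid1 sub-volumes), lvs can show that:

lvs -a -o +devices    # -a also lists hidden rimage/rmeta sub-LVs, +devices shows the backing PVs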
removing missing partial disk from LVM group
vgreduce --removemissing --force VG01_STORAGE_RAID1
output:
Couldn't find device with uuid kGfIXi-i1m2-7pqw-5Aqb-mH2U-Q20f-YMrg1M.
info: the failed/bad disk is now removed from the LVM volume group
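To verify the removal worked, you can simply rerun the display commands; the WARNING line from above should be gone now:

vgdisplay VG01_STORAGE_RAID1   # should no longer report a missing device
pvdisplay                      # the unknown/missing PV should no longer appear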
get UUID from partial VG disk
To rename the VG volume group we have to get the VG UUID first.
vgdisplay
output:
--- Volume group ---
info: VG UUID is WNR45t-kuju-bSZq-iF2f-LJoA-ZCdb-G2uDkl
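If you prefer a compact one-line output instead of reading the full vgdisplay block, the UUID can also be read with vgs (a hedged alternative, assuming stock lvm2):

vgs -o vg_name,vg_uuid    # compact table: VG name and its UUID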
rename VG volume group name
To mount the volume as a single disk we first have to rename the LVM VG.
# vgrename <UUID> NEW_NAME
info: afterwards the VG name is renamed to NEW_VG01_STORAGE_RAID1
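With the UUID from the vgdisplay output above, the concrete command in my scenario looks like this:

vgrename WNR45t-kuju-bSZq-iF2f-LJoA-ZCdb-G2uDkl NEW_VG01_STORAGE_RAID1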
mount LVM to mountpoint
load the dm-mod module into the kernel
modprobe dm-mod
create mountpoint
mkdir /mnt/LVM-RECOVED-DISK1
mount LVM disk VG
mount /dev/NEW_VG01_STORAGE_RAID1/LV01_STORAGE_RAID1 /mnt/LVM-RECOVED-DISK1
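If mount complains that the device does not exist, the LV may still be inactive after the rename; activating the VG first and mounting read-only is the safer route during recovery (a suggestion on top of my steps above):

vgchange -ay NEW_VG01_STORAGE_RAID1   # activate all LVs in the renamed VG
mount -o ro /dev/NEW_VG01_STORAGE_RAID1/LV01_STORAGE_RAID1 /mnt/LVM-RECOVED-DISK1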
recover your data
Now you are able to access your entire data under /mnt/LVM-RECOVED-DISK1.
cd /mnt/LVM-RECOVED-DISK1
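From here you can copy everything to a safe location, for example with rsync; /path/to/backup is just a placeholder for your real target:

rsync -avh --progress /mnt/LVM-RECOVED-DISK1/ /path/to/backup/   # trailing slash copies the directory contents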
mindmap
  root((LVM))
    raid 1
      2 disks
      VG volume group
      LV logical volume
    mainboard/cpu defect
    Restore
      detach one part of disk
      attach to another system
      remove missing part of raid disk
      rename VG
      load dm-mod module
      mount
    Tools
      lvscan
      vgdisplay
      lvdisplay
      vgreduce
      pvdisplay
      vgrename
Thanks for reading, I hope I was able to help you.
–
Aysad Kozanoglu