LVM RAID1 recovery: mount the disk on another Linux system (Debian Buster)

%%{
  init: {
    'theme': 'base',
    'themeVariables': {
      'primaryColor': '#3ed72b',
      'primaryTextColor': '#000',
      'primaryBorderColor': '#000',
      'lineColor': '#fff',
      'secondaryColor': '#e6f01b',
      'tertiaryColor': '#fff'
    }
  }
}%%


flowchart LR
disk1---|LVM raid1|disk2

Overview

  • LVM
    • RAID1
    • remove the missing partial disk of the RAID1
    • rename the volume group (VG)
    • mount
    • recover

scenario

In this scenario I was using two 2TB SSDs in LVM RAID1 mode.
My system was Debian Buster (10).

My mainboard had crashed and I was not able to boot up my system. The OS was installed on a separate 240GB SSD.

So I was no longer able to get my data from the LVM soft RAID1 built from the two 2TB SSDs.

  • 240GB SSD # OS only
  • 2TB SSD, LVM RAID1, disk1 # data
  • 2TB SSD, LVM RAID1, disk2 # data

Here is my solution for recovering the data from just one disk of the LVM RAID1.

remove disk from damaged hardware

Remove one of the two SSDs so it can be connected with a USB SSD adapter (Sabrent) to another system. In my case I was working with Debian Buster (10).

connect disk to another working system

Connect the disk with the USB SSD adapter to another working machine. In my case, Debian Buster (10).
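
If you are not sure which device name the kernel assigned to the USB-attached disk (in my output it shows up as /dev/sdc, yours may differ), the kernel log and lsblk can help:

dmesg | tail -n 20   # look for the newly attached USB disk
lsblk -f             # list all block devices with their filesystems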

get info from the connected USB SSD

fdisk -l
pvdisplay
vgdisplay

info: these commands give you information about the disk, the volume group (VG), the physical volumes (PV), and their names.

The pvdisplay and vgdisplay output should start with a WARNING like this:

WARNING: Couldn't find device with uuid kGfIXi-i1m2-7pqw-5Aqb-mH2U-Q20f-YMrg1M.
WARNING: VG VG01_STORAGE_RAID1 is missing PV kGfIXi-i1m2-7pqw-5Aqb-mH2U-Q20f-YMrg1M (last written to /dev/sdc).

--- Physical volume ---
PV Name [unknown]
VG Name VG01_STORAGE_RAID1
PV Size <1,82 TiB / not usable <1,09 MiB
Allocatable yes (but full)
PE Size 4,00 MiB
Total PE 476932
Free PE 0
Allocated PE 476932
PV UUID kGfIXi-i1m2-7pqw-5Aqb-mH2U-Q20f-YMrg1M

--- Physical volume ---
PV Name /dev/sdc
VG Name VG01_STORAGE_RAID1
PV Size <1,82 TiB / not usable <1,09 MiB
Allocatable yes (but full)
PE Size 4,00 MiB
Total PE 476932
Free PE 0
Allocated PE 476932
PV UUID O0cX9Z-3YD8-dkFX-qPCA-i6RS-fAGe-Nbplt6


info: this warning is valid and expected, because the second disk of the RAID1 is not connected and cannot be found.
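
If you prefer a more compact view of the same information, the pvs and vgs commands from the same LVM toolset print one line per PV / VG; the missing physical volume should show up as [unknown] there as well:

pvs   # one line per physical volume
vgs   # one line per volume group, shows a "p" (partial) attribute while a PV is missing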

mount lvm raid1 partial disk

First, scan all connected disks for LVM logical volumes with the lvscan tool.

lvscan

output:

ACTIVE '/dev/VG01_STORAGE_RAID1/LV01_STORAGE_RAID1' [<1,82 TiB] inherit

info: in my scenario the volume group is named VG01_STORAGE_RAID1 and the logical volume LV01_STORAGE_RAID1. This disk is not mountable until the missing disk (the second half of the RAID1) has been removed from the volume group.
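
Optionally, before the destructive vgreduce step below, you can save a copy of the current LVM metadata to a text file with vgcfgbackup (the target path here is just an example; the command may print the same missing-PV warning):

vgcfgbackup -f /root/VG01_STORAGE_RAID1_metadata.backup VG01_STORAGE_RAID1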

removing missing partial disk from LVM group

vgreduce --removemissing --force VG01_STORAGE_RAID1

output:

Couldn't find device with uuid kGfIXi-i1m2-7pqw-5Aqb-mH2U-Q20f-YMrg1M.
Removing partial LV LV01_STORAGE_RAID1.
Logical volume "LV01_STORAGE_RAID1" successfully removed
Wrote out consistent volume group VG01_STORAGE_RAID1

info: the failed/missing disk is now successfully removed from the volume group.
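
To double check, you can run pvdisplay and vgdisplay again; the [unknown] physical volume and the missing-PV warning should be gone now:

pvdisplay   # the [unknown] PV should no longer be listed
vgdisplay   # Cur PV should now show 1 instead of 2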

get the UUID of the VG

To rename the volume group we first have to get its VG UUID.

vgdisplay

output:

--- Volume group ---
VG Name VG01_STORAGE_RAID1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 2
Act PV 1
VG Size <3,64 TiB
PE Size 4,00 MiB
Total PE 953864
Alloc PE / Size 953864 / <3,64 TiB
Free PE / Size 0 / 0
VG UUID WNR45t-kuju-bSZq-iF2f-LJoA-ZCdb-G2uDkl

info: VG UUID is WNR45t-kuju-bSZq-iF2f-LJoA-ZCdb-G2uDkl
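
If you want to read the UUID in a single line instead of searching through the vgdisplay output, the vgs reporting command can print just that field:

vgs --noheadings -o vg_uuid VG01_STORAGE_RAID1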

rename the VG volume group

To mount it as a single disk we first have to rename the LVM VG; renaming by its UUID avoids a conflict in case a VG with the same name already exists on the working machine.

#vgrename <UUID>  NEW_NAME
vgrename WNR45t-kuju-bSZq-iF2f-LJoA-ZCdb-G2uDkl NEW_VG01_STORAGE_RAID1
vgchange -ay
lvscan

output:

ACTIVE '/dev/NEW_VG01_STORAGE_RAID1/LV01_STORAGE_RAID1' [<1,82 TiB] inherit

info: now VG name is renamed to NEW_VG01_STORAGE_RAID1

mount LVM to mountpoint

load the dm-mod kernel module

modprobe dm-mod
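
To verify that the device-mapper module is loaded:

lsmod | grep dm_mod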

create mountpoint

mkdir /mnt/LVM-RECOVED-DISK1

mount LVM DISK VG

mount /dev/NEW_VG01_STORAGE_RAID1/LV01_STORAGE_RAID1 /mnt/LVM-RECOVED-DISK1
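
If you only need to copy the data off and want to avoid any accidental writes, you can also mount the volume read-only:

mount -o ro /dev/NEW_VG01_STORAGE_RAID1/LV01_STORAGE_RAID1 /mnt/LVM-RECOVED-DISK1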

recover your data

Now you are able to access your entire data under /mnt/LVM-RECOVED-DISK1.

cd /mnt/LVM-RECOVED-DISK1
ls -la
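
To copy everything to another location, for example a backup disk mounted under /mnt/backup (that path is just an example), rsync preserves permissions and ownership:

rsync -a /mnt/LVM-RECOVED-DISK1/ /mnt/backup/
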
mindmap
  root((LVM))
    raid 1
      2 disks
    VG
      volume group
    LV
      logical volume
    mainboard/cpu
      defect
    Restore
      detach one part of disk
      attach to another system
      remove missing part of raid disk
      rename VG
      load dm-mod module
      mount
    Tools
      lvscan
      vgdisplay
      lvdisplay
      vgreduce
      pvdisplay
      vgrename

Thanks for reading, I hope this helped you.


Aysad Kozanoglu