If you’re arriving at this post, clearly, things are not going well.
You’ve booted your system and wound up with some odd LVM errors (assuming you’ve determined that they are LVM errors at all). As far as I can tell, there’s very little definitive information out there on this problem. Since my error popped up on a CentOS system, the two potentially useful posts I was able to find were Red Hat knowledge base articles, which I couldn’t access.
MEGA DISCLAIMER: This is intended to help. However, this CAN blow up your data, your install, and everything you love dearly in life, so be careful what you do. There’s no way to know whether this will work for you; it’s only what DID work for me, in my configuration and my situation. That said, proceed and use the recommendations here at your own risk. Backups? …Puppies?
Okay, so, specifically you’ll see something like this:
```
# lvdisplay
  Duplicate of PV f79a9b05-5f41-439a-840b-49cb596cb1bf dev /dev/mapper/8382938a1001c4683dbcae6523420ba3d5 exists on unknown device 8:5
  --- Logical volume ---
  LV Path                /dev/vg_happydata/lv_omg
  LV Name                lv_omg
  VG Name                vg_happydata
  LV UUID                66dbd6de-10ed-4953-b26e-bbe479997b71
```
```
# pvdisplay
  Duplicate of PV f79a9b05-5f41-439a-840b-49cb596cb1bf dev /dev/mapper/8382938a1001c4683dbcae6523420ba3d5 exists on unknown device 8:5
  --- Physical volume ---
  PV Name               /dev/sdb2
  VG Name               centos_debianisbetter
  PV Size               14.35 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              3672
  Free PE               10
  Allocated PE          3662
  PV UUID               a35045c9-6c56-44d3-8176-5291c5937b5a
```
And in your dmesg file from booting you may see tons of messages like this…
```
[76064.214417] device-mapper: table: 253:5: multipath: error getting device
[76064.215541] device-mapper: ioctl: error adding target to table
```
Now, some information out there suggests editing the lvm.conf file and filtering devices. Maybe that would work as well, but my system had been working without any LVM configuration changes and then stopped; given that, I didn’t believe filtering was the right course of action, and I sought a different solution.
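For reference, the filtering approach people suggest lives in the `devices` section of /etc/lvm/lvm.conf. This is only a sketch — the device paths below are hypothetical and would have to match your actual hardware:

```
# /etc/lvm/lvm.conf (devices section) -- example paths, adjust to your system
devices {
    # Accept /dev/sda* and /dev/sdb*, reject everything else
    # (including the duplicate /dev/mapper entry LVM complains about).
    filter = [ "a|^/dev/sda|", "a|^/dev/sdb|", "r|.*|" ]
}
```

The patterns are evaluated in order: the first "a" (accept) or "r" (reject) rule that matches a device wins, and the trailing `"r|.*|"` rejects anything not explicitly accepted.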
I found a post on serverfault.com that suggested restoring the configuration (http://serverfault.com/questions/223361/how-to-recover-logical-volume-deleted-with-lvremove)
So I checked to see whether I had backups in the following locations:
```
# ls /etc/lvm/backup
# ls /etc/lvm/archive
```
Luckily, I did. At the top of the volume group files in those locations there should be a header that helps indicate when the backup happened (`vgcfgrestore --list vg_happydata` will also list the archived versions along with these descriptions). For instance, mine showed:
```
description = "Created *after* executing 'pvscan --cache --activate ay 8:5'"
creation_host = "disasterwaiting.com"    # Linux tohappenwiththis.com 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64
creation_time = 1462247101    # Mon May 2 23:45:01 2016
```
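The `creation_time` epoch and the human-readable comment next to it should agree (the comment is in the host’s local timezone). A quick sanity check, using the timestamp from my file:

```shell
# Convert the creation_time epoch from the backup header to a readable
# UTC date; 1462247101 is the value from the file above.
date -u -d @1462247101 +'%F %T UTC'
# → 2016-05-03 03:45:01 UTC
```

That matches the comment "Mon May 2 23:45:01 2016", which in this case is four hours behind UTC.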
To me, this seemed okay, and the information in the file about the physical and logical volumes also appeared correct, as best I could tell. Then it dawned on me: some of the messages I was seeing cited the device at 8:5 (major:minor numbers — major 8 is the sd* disk driver, so 8:5 is /dev/sda5) as the problem. It looked like the volume group’s metadata had indeed been corrupted and needed to be restored.
Now, just in case, I also copied each of the files in those backup directories to another location, so that if something ate or modified them during the process I would still have copies of the originals.
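That safety copy amounts to something like the following — a sketch only; the destination here is a throwaway temp directory, and on a real recovery you’d want it on a different disk entirely:

```shell
# Stash copies of the LVM metadata directories before touching anything.
# mktemp gives a unique destination; pick a safer location on a live system.
dest=$(mktemp -d /tmp/lvm-meta-copy.XXXXXX)
for d in /etc/lvm/backup /etc/lvm/archive; do
    if [ -d "$d" ]; then
        cp -a "$d" "$dest/"
    fi
done
echo "Safety copies (if any) are in $dest"
```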
In addition, I ran the vgcfgbackup command and saved its output file — again, just in case something went wrong.
```
# vgcfgbackup -f /root/vgcfg.backup
```
Now it was time to give it a whirl.
```
# vgcfgrestore vg_happydata
```
Initially this output more messages about duplicate entries.
```
# vgcfgrestore vg_happydata
  Duplicate of PV f79a9b05-5f41-439a-840b-49cb596cb1bf dev /dev/mapper/8382938a1001c4683dbcae6523420ba3d5 exists on unknown device 8:5
  Duplicate of PV f79a9b05-5f41-439a-840b-49cb596cb1bf dev /dev/mapper/8382938a1001c4683dbcae6523420ba3d5 exists on unknown device 8:5
  Restored volume group vg_happydata
```
Even though I probably shouldn’t have, I immediately ran the command again (what could it hurt, right?). To my amazement, on the second run the error messages no longer appeared.
Next I ran vgscan and pvscan to see whether anything had changed. To my luck, and slight amazement, not only did they not error out, but pvscan now showed the physical volume correctly associated with its volume group, which had not been the case previously.
```
# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "centos_debianisbetter" using metadata type lvm2
  Found volume group "vg_happydata" using metadata type lvm2
```
```
# pvscan
  PV /dev/mapper/8382938a1001c4683dbcae6523420ba3d5   VG vg_happydata            lvm2 [1.36 TiB / 0    free]
  PV /dev/sdb2                                        VG centos_debianisbetter   lvm2 [14.34 GiB / 40.00 MiB free]
```
After this, naturally, a reboot can’t hurt to just make sure all is in order… can it?