idopaster.blogg.se

How to run a disk check on a raid array

I have tried to remove one HDD from a RAID-5 and something went wrong, but I still hope I can recover my data (in fact, I have all the backups, so it is really just a question about what mdadm can do). I had a 4 x 1 TB RAID-5, and one of the disks started to show a lot of Reallocated_Sector_Ct, so I decided to remove it. I tried to run:

    mdadm --grow /dev/md0 --raid-devices=3

and mdadm answered:

    mdadm: this change will reduce the size of the array.
    Use --grow --array-size first to truncate array. E.g. ...

so I then ran:

    mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/grow_md1.bak

Now that the reshape and recovery are done, I cannot access my /dev/md0 (it does not mount); resize2fs /dev/md0 tells me to run e2fsck first, and e2fsck says:

    The filesystem size (according to the superblock) is 732473472 blocks
    The physical size of the device is 488315648 blocks
    Either the superblock or the partition table is likely to be corrupt!

That leaves some hope that not all my data is lost. Does anybody have some ideas on what I should do to end up with a valid RAID-5 array of 3 x 1 TB disks?

The order you did things in is correct for enlarging a filesystem, but backwards for shrinking one. To shrink, you first shrink the filesystem (resize2fs) and only then shrink the block device (mdadm). The normal way an array is managed is that if a disk fails, you replace that disk with one of the same capacity or larger; mdadm --grow is not a normal part of dealing with disk failures.
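For reference, here is a minimal sketch of the order that would have worked, assuming a 4 x 1 TB array being reduced to 3 devices; the sizes below are placeholders and would need to be worked out for the real array before running anything:

    # 1. shrink the filesystem first, leaving a comfortable margin below the future 2 TB array size
    resize2fs /dev/md0 1800G
    # 2. truncate the array to its new size (value in KiB; see the mdadm man page), then reshape to 3 devices
    mdadm --grow /dev/md0 --array-size=<new-size-in-KiB>
    mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/grow_md1.bak
    # 3. after the reshape completes, let the filesystem expand to fill the now-smaller device
    resize2fs /dev/md0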

To recover from this, you first confirm your backups are good, then mkfs the array and restore from backup. If your backups aren't good, you can probably recover files that happened to be on the first 2 TB of the filesystem. What used to be the 3rd terabyte of your filesystem has been overwritten; essentially, that space is now used for parity. (The actual sectors contain a mix of parity and data that was moved from other disks, where those sectors now contain parity.) That part of the data is gone for good; absent (possibly theoretical) high-cost approaches capable of reading the previous contents of the sectors, it is not recoverable. In addition, ext4 does not keep all of its metadata at the start of the filesystem; it is spread throughout the filesystem, so you have lost a good deal of metadata as well. Importantly, if any part of a file's data or metadata was in that lost third, the file will be inaccessible. Limited recovery of snippets may be possible from the 4th disk (which was probably untouched by the grow, since it had already failed at the time).

The first, and most important, step is to purchase a 4 TB disk and use it to make a complete copy (image) of the filesystem. If there is any question about the reliability of the original disks, make a second copy and work only on one of the copies. You will also need additional disks to copy recovered files onto, possibly including multiple copies of partially damaged files.
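For the imaging step, something along these lines should work; the paths are only placeholders, with /mnt/recovery standing in for the new 4 TB disk, already formatted and mounted. GNU ddrescue is a reasonable choice because it keeps going past read errors and records what it could not read in a map file:

    # image the (shrunken) array onto the new disk
    ddrescue -d /dev/md0 /mnt/recovery/md0.img /mnt/recovery/md0.map
    # plain dd also works if the remaining members read cleanly
    # dd if=/dev/md0 of=/mnt/recovery/md0.img bs=1M conv=noerror,sync status=progress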

Now you can try recovery steps on a copy. Note that most of these will need to be done on a fresh copy; the steps are destructive, which is one of the many reasons to work only on a copy.

- Let e2fsck -y /path/to/copy do its thing. Probably it will produce something you can mount; go ahead and do so (read-only) and copy files off.
- Unmount it, extend your copy back to the original size (it should be OK for it to be sparse; truncate -s can do this), and again let e2fsck -y do its thing. Then it will likely mount (do so read-only), and you can copy more files off.
- Run fsck without -y and actually go through all those messages. E.g., I'd expect it to give you a choice of what to do when part of a file's data is in the lost area (replace it with zeros, or delete the file).
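A sketch of that second approach on the image copy, assuming the image sits at /mnt/recovery/md0.img and that the block counts e2fsck reported are 4 KiB filesystem blocks (so 732473472 blocks is the original roughly 3 TB size):

    # grow the sparse image back to the filesystem's original size
    truncate -s $((732473472 * 4096)) /mnt/recovery/md0.img
    # let e2fsck repair what it can, answering yes to every question
    e2fsck -f -y /mnt/recovery/md0.img
    # mount the result read-only through a loop device and copy files off
    mkdir -p /mnt/rescued
    mount -o ro,loop /mnt/recovery/md0.img /mnt/rescued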