Recovering data from a RAID NAS drive using CnW Recovery software
The following are two examples of failed drives we have come across.
Case 1 - RAID 1
The problem to solve was a NAS storage device (LaCie 2big Network) that was no longer responding. The unit is accessed over a network and contains two disk drives in RAID 1 format. RAID 1 means each disk is an identical image of the other, so one drive can fail and all data will be retained on the second.
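Because RAID 1 disks should be byte-for-byte mirrors, a first diagnostic step is simply to compare the two images chunk by chunk. The sketch below is a hypothetical helper for that check, not part of CnW Recovery itself:

```python
# Sketch: verify two drive images are RAID 1 mirrors by comparing
# them chunk by chunk. Illustrative only, assuming both images are
# already read into memory (real drives would be streamed).

def count_mismatched_chunks(image_a: bytes, image_b: bytes,
                            chunk_size: int = 4096) -> int:
    """Return how many chunk_size regions differ between two images."""
    mismatches = 0
    for offset in range(0, min(len(image_a), len(image_b)), chunk_size):
        if image_a[offset:offset + chunk_size] != image_b[offset:offset + chunk_size]:
            mismatches += 1
    return mismatches

# Identical images report no mismatched chunks.
disk_a = bytes(8192)
disk_b = bytes(8192)
print(count_mismatched_chunks(disk_a, disk_b))  # 0
```

A count of zero supports the conclusion reached here: both disks held the same data, so the fault lay elsewhere.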
The two disks were examined and neither had any physical problems; they also appeared to hold the same data, so the problem was not due to disk failure. Although the drive had been used on a PC network (XP, Vista etc), a NAS device can store data in any format it likes, and this unit was Unix based. Something had probably failed in the controller, and so the data could not be accessed.
The drive in fact had 7 partitions, the largest being XFS; the smaller ones were Linux swap and Ext2 partitions. In theory it should have been possible to read the data on a Linux system with XFS support loaded, but the drive did not appear to have a mount point set. Although it would probably have been possible to configure Linux to read the disk, the solution taken was to use CnW Recovery software to select and read the XFS partition. By scanning for all iNodes, all files were found and recovered.
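The idea behind an iNode scan can be sketched as follows: each on-disk XFS inode begins with the magic bytes "IN", so candidate inodes can be located by checking inode-size boundaries across the partition image. This is an illustrative outline, not CnW's actual algorithm, and the 256-byte inode size is an assumption:

```python
# Sketch: find candidate XFS inodes in a raw partition image by
# scanning for the on-disk inode magic b"IN" at inode-size
# boundaries. Parameters are illustrative, not CnW's algorithm.

XFS_DINODE_MAGIC = b"IN"   # di_magic at the start of each on-disk inode

def scan_for_inodes(image: bytes, inode_size: int = 256) -> list:
    """Return byte offsets of candidate inodes (magic match only)."""
    hits = []
    for offset in range(0, len(image), inode_size):
        if image[offset:offset + 2] == XFS_DINODE_MAGIC:
            hits.append(offset)
    return hits

# Synthetic image with two fake inodes, at offsets 0 and 512.
image = bytearray(1024)
image[0:2] = XFS_DINODE_MAGIC
image[512:514] = XFS_DINODE_MAGIC
print(scan_for_inodes(bytes(image)))  # [0, 512]
```

A real scanner would then validate each candidate (mode, version and link-count fields) before following it to the file data, since two bytes alone will produce false positives.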
One feature of Unix disks is that the superblock often keeps track of the number of iNodes used and allocated. CnW software reads these values and stores them in the forensic section of the log, which helps verify that all files have been read or, in the case of a badly damaged disk, shows how many files are actually missing.
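As a rough illustration of where those counters live, the classic XFS on-disk superblock starts with the magic "XFSB" and stores the allocated and free inode counts (sb_icount and sb_ifree) as big-endian 64-bit fields. The offsets below follow that traditional layout but should be treated as illustrative:

```python
# Sketch: read the allocated/free inode counters from a raw XFS
# superblock. Field offsets follow the classic XFS on-disk layout
# (big-endian); treat them as illustrative, not authoritative.

import struct

def read_inode_counts(superblock: bytes) -> tuple:
    """Return (sb_icount, sb_ifree) from a raw superblock."""
    if superblock[0:4] != b"XFSB":            # sb_magicnum
        raise ValueError("not an XFS superblock")
    icount, ifree = struct.unpack_from(">QQ", superblock, 128)
    return icount, ifree

# Synthetic superblock: 1000 allocated iNodes, 62 of them free.
sb = bytearray(512)
sb[0:4] = b"XFSB"
struct.pack_into(">QQ", sb, 128, 1000, 62)
print(read_inode_counts(bytes(sb)))  # (1000, 62)
```

Comparing sb_icount against the number of iNodes actually found by a scan gives exactly the kind of completeness check described above.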
Case 2 - RAID 0
Another case involved the same type of hardware as above, but this time the array was RAID 0, with about 1TB in total on the drive. As before, the disk had seven partitions, the data partition being the final one, in XFS format. The device was showing no data, but a disk scan showed about 980GB of used sectors. Both disks were still OK and showed no physical errors; the problem was probably caused by the controller, or by a software error somewhere.
The RAID 0 problem is handled nicely by CnW software, allowing for the fact that the striping does not start until the beginning of the 7th partition; the earlier partitions are in fact mirrored (RAID 1).
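The reassembly step can be sketched in outline: from the point where striping begins, alternate fixed-size stripes are taken from each disk in turn. The stripe size and start offset below are assumptions for illustration, not the values used on this NAS:

```python
# Sketch: reassemble a two-disk RAID 0 stripe set, beginning at the
# offset where striping starts (here the earlier partitions were
# mirrored, not striped). Stripe size and offset are assumptions.

def destripe(disk0: bytes, disk1: bytes, stripe: int, start: int) -> bytes:
    """Interleave stripes from two disk images, beginning at `start`."""
    out = bytearray()
    offset = start
    while offset < min(len(disk0), len(disk1)):
        out += disk0[offset:offset + stripe]   # even-numbered stripe
        out += disk1[offset:offset + stripe]   # odd-numbered stripe
        offset += stripe
    return bytes(out)

# Tiny example: 4-byte stripes, no skipped region.
d0 = b"AAAACCCC"
d1 = b"BBBBDDDD"
print(destripe(d0, d1, stripe=4, start=0))  # b'AAAABBBBCCCCDDDD'
```

Getting the stripe size and disk order right is the crux of any RAID 0 recovery; with the wrong values the reassembled XFS partition will not parse.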
The drive had become corrupted, and many of the files had been deleted. Many people consider a deleted file on XFS unrecoverable, but fortunately some elements of the iNodes do remain, and the new XFS routine within CnW allows such disks to be read and files recovered. A very high percentage of the disk (at least 98%) was recovered with valid file names, with the original directory structure largely intact.
The free demo does not enable the RAID options by default; please contact CnW at firstname.lastname@example.org to obtain a free RAID evaluation option for the demo.