
Apologies for the title. To be more specific, I recall reading about this somewhere in the last year, when I was trying to repair data damage on a hard drive.

I have a Windows XP hard drive that suffered a head crash, which caused unrecoverable bad sectors in certain files. I'm cloning the drive to a duplicate drive and replacing the damaged files from a backup I made long ago.

What I've done already: I booted into Parted Magic from the Ultimate Boot CD and used Linux's ddrescue tools to clone the damaged drive to a disk image, then used the logfile.txt to write DEADBEEF into every sector it couldn't read, so the image would be complete. I then used one of the Linux grep commands, I believe, to search the entire file system for the string DEADBEEF and list all files containing it, though I had trouble with it quitting hours into the search due to some odd error.
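
For reference, that fill step can be done with GNU ddrescue's fill mode. A minimal sketch, assuming a source drive at /dev/sdb and placeholder file names (older ddrescue releases call the mapfile a "logfile", which matches the logfile.txt above):

# Clone the failing drive to an image, recording unreadable areas in the mapfile.
sudo ddrescue /dev/sdb damaged.img rescue.map

# Put the marker pattern in a small file, then fill every area the mapfile
# marks as a bad sector ('-') with that pattern, repeated to cover each area.
printf 'DEADBEEF' > marker.bin
ddrescue --fill-mode=- marker.bin damaged.img rescue.map

Fill mode only rewrites regions whose status in the mapfile matches the given type, so the data that was rescued successfully is left untouched.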

I also manually repaired the $MFT's (Master File Table) errors, everything except the data of a single, unimportant picture file, so that I could scan the entire file system properly and see all files (some were not showing up because of the damage).

What I need to do is this:

I want to scan the entire drive down to the byte level (as if looking at the disk image in a hex editor), every sector, for the string DEADBEEF, and then have it list every file that each bad sector overwritten with DEADBEEF belongs to, according to the filesystem. I remember reading about this somewhere: being able to scan the drive so that, when it finds the string, it lists the offset/location/sector of the DEADBEEF string and which file owns the data in that sector. Either that, or something that takes every bad sector and lists which file that bad sector belongs to.

The ddrescue logfile lists every sector that was detected as unreadable (about 1000 200-byte sectors), and those are the sectors it wrote DEADBEEF to. If I know which files own those bad sectors, I can replace them using my old backup.

Before you ask: I can't just use the old backup alone, because it's about three years old. The old backup is actually the original drive from this computer, which I cloned to the drive that I'm now trying to rescue. Most of the bad sectors from the head crash were in a part of the drive that only held files carried over from the original drive. I can easily copy those files from the original drive to the new one to fix all the DEADBEEF bad sectors, but I need to know which file each of those bad sectors belongs to.

Again, I recall reading something about scanning all sectors of a drive and having it list the file that a given sector belongs to. So how do I do this from Parted Magic? If I have to mount the image there, I want it mounted read-only.
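
For what it's worth, the ntfsprogs tools (shipped with ntfs-3g and present on most Linux live environments) can map sectors back to files. A hedged sketch, assuming the image's NTFS partition is already attached read-only as /dev/loop0p1 and using a made-up sector range; note that ntfscluster wants sector numbers relative to the start of the NTFS partition, so subtract the partition's start sector (often 63 on XP-era disks) from the absolute numbers in the ddrescue log:

# Report which files use the given (partition-relative) sector range.
sudo ntfscluster -s 123456-123463 /dev/loop0p1

Repeating that for each bad range is, as far as I understand, roughly what ddrutility's ddru_findbad script automates.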

  • Parted Magic is not Ubuntu therefore off-topic here. And you should have stopped using XP years ago. –  Apr 14 '17 at 04:24
  • Look, I don't know where to ask this, but Ubuntu is Linux and Parted Magic is Linux. The info I'm asking about can be done in both from the command line; that's all I know.

My question isn't about whether I should or shouldn't have XP. This is an old computer, it's all I can afford, and it has all my important programs and data, with a lot of suspended sessions I want restored. It's easily possible if I get the damaged files replaced.

    – DChronosX Apr 14 '17 at 04:35
  • @CelticWarrior "the help you need is not technical, is the help provided by mental health professionals" ... seriously? That's pretty rude. There are legitimate reasons to want to recover data from an old XP drive, and even boot XP - maybe some old programs holding important data won't run on anything newer. But honestly nothing about XP is even relevant to the question - just "fat or ntfs" – Xen2050 Apr 14 '17 at 07:41
  • @DCX I think you may be doing this a hard way - if you can read most of the files on the image, and you've got the old backup readable too, then just compare their files (I'd use kdiff3 or another GUI folder diff program) and copy over whatever's missing or different, unless you know it's a newer data file you want to keep, or you search it for "DEADBEEF". Actually, mounting the image & searching all files for "DEADBEEF" would be much easier, I can almost see a find...grep command doing that now, it doesn't matter if free space was "corrupted" anyway – Xen2050 Apr 14 '17 at 07:46
  • Since you already have the list of bad sectors from ddrescue, isn't what you really want to know How to find out what file is on a particular sector? The sourceforge page for ddrutility says it can "Find what files are related to the bad sectors using a ddrescue logfile". – steeldriver Apr 14 '17 at 12:24
  • “I also manually repaired the $MFT's (Master File Table) errors” Woah, that's a lot of work when you could've used RecuperaBit. :P Anyway, it should be easier to extract all the files and then grep those extracted files for the string you mentioned. ;) – Andrea Lazzarotto Apr 14 '17 at 23:43
  • The computer was an old backup I was using while trying to get a new one built, and does have a lot of important stuff that I need to get running again. The fact that it was XP didn't have anything to do with my actual question.

    Fixing the MFT errors didn't take me too long. I was able to copy much of it from the original drive, and anything added after I changed drives thankfully was only in the second half of the MFT data sectors.

    – DChronosX Apr 15 '17 at 17:41
  • @steeldriver I didn't know ddrescue could do this, but yeah, finding which files occupy the bad sectors is what I needed to know.

    CelticWarrior is an h41 53 53.

    – DChronosX Apr 15 '17 at 17:56

1 Answer


You could do as the helpful comment from steeldriver says and use ddrutility.

It doesn't appear to be in the Ubuntu repos, but its home page is https://sourceforge.net/projects/ddrutility/
Specifically, use its tool ddru_findbad.
Here's a clip from its wiki page:

ddru_findbad
It is a bash script that will try to find which files are related to bad sectors in a ddrescue log file.
It relies on 3rd party utilities for its functions. It may not work on all systems. It can be slow, and can be very slow if not unusable if there are a lot of bad sectors in the list (it does not work well with a large error size).


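If you want to see the bad areas yourself before (or instead of) handing them to ddru_findbad, the ddrescue log/mapfile is plain text: after the header lines, each data line gives position, size, and status in hex, with '-' marking an unreadable area. A small sketch (assuming GNU awk for strtonum, 512-byte sectors, and a mapfile named rescue.map) that prints those areas as sector ranges:

# Print every bad ('-') area of the ddrescue mapfile as a start-end sector range.
awk '$1 ~ /^0x/ && $3 == "-" {
         start = strtonum($1) / 512
         end   = (strtonum($1) + strtonum($2)) / 512 - 1
         printf "sectors %d-%d\n", start, end
     }' rescue.map

Those sector numbers are relative to whatever ddrescue read (the whole disk, in this case), so convert them to partition-relative values before feeding them to filesystem-level tools.
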
I'm tempted to forget about sector numbers and just mount the cloned image & search all files for "DEADBEEF", with find, xargs & grep in Ubuntu (or Xubuntu, Lubuntu, Debian, or most any Linux).

Whether it's easier or faster than trying ddru_findbad probably depends on how big & fast your disk image is.

find /mnt/x -type f -print0 | xargs -0 grep --files-with-matches "DEADBEEF" >> list

Where the image is mounted at /mnt/x. Then the file list has all the filenames that match. Any free space that has DEADBEEF in it is ignored.
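
For completeness, a hedged sketch of getting the image mounted read-only in the first place, assuming damaged.img is a whole-disk image whose first partition is the NTFS volume (if the image is of the partition alone, mount it directly with -o ro,loop):

# Attach the image read-only, scanning its partition table, then mount
# the NTFS partition read-only at /mnt/x.
sudo losetup --find --show --partscan --read-only damaged.img   # prints e.g. /dev/loop0
sudo mkdir -p /mnt/x
sudo mount -o ro /dev/loop0p1 /mnt/x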

Xen2050
  • I'll try that. Last time I was working on this, I got help from someone who had me try this:
    for i in `find . -type d -maxdepth 3`; do echo "Looking in $i..."; grep -r -l DEADBEEFDEADBEEF "$i" 2>&1; echo "done with $i"; echo ""; done | tee greplog.txt
    ...but it failed, telling me "find: paths must precede expression: 3".
    – DChronosX Apr 15 '17 at 18:06
  • Yeah, that code you tried appears to be parsing find's output too, which would probably fail on names with newlines & maybe even spaces - find's -print0 and xargs -0 take care of that, and also make looping unnecessary. Would you really want a message saying what file's being searched anyway, and when done? Could be 50,000+ useless messages, and I think grep can output filenames with non-matches too. Actually grep's -r reads recursively under the directory too, so that could've been searching folders 3 times. And all in a test? Writing to the damaged fs? Weird, don't do all that ;-) – Xen2050 Apr 15 '17 at 19:36
  • I can't quite remember why it was set up that way, but I remember wanting to see it tell me what was currently being searched so I could tell if it was still running or not. I think the last times I tried, it was frozen for 10 hours overnight and I didn't even know it stopped working since there was no line indicating it. I think I also wanted that to see when it hit a problem file, as there were 2 files it was mysteriously failing on, no reason I could find, but very annoying after trying to do the original search for 2-3 hours every time. Looks like ddrutility does exactly what I need, too. – DChronosX Apr 16 '17 at 00:50
  • Now that I think about it, it might have been so that it would not quit searching if it hit a problem file. As I said, it was extremely annoying doing the search for 2-3 hours, then having it quit, unfinished, for some unknown reason, complaining about the file it had a problem with, and being left without knowing which files and directories it had already scanned for DEADBEEF. I don't want to just delete the files, either. From what I could tell, there was no actual problem with them at all. I looked through them in a hex editor, no bad sectors or anything, & I could copy and read the file fine. – DChronosX Apr 16 '17 at 00:53
  • I see. I'd use a system monitor to see if there were any disk reads going on instead, but that wouldn't say what was being searched; some type of logging would help there, but with filename spaces & crazy stuff like newlines allowed, parsing file lists can be tricky. That wasn't on the damaged drive itself, right? The good copied info shouldn't have any read errors (after fsck), and trying to search a failing drive isn't likely to make it any healthier (more likely to fail with each read). – Xen2050 Apr 16 '17 at 01:13
  • No, the first thing I did with the damaged drive was to use GNU ddrescue from Parted Magic to fully clone the drive to a USB drive for editing and fixing before cloning it back to a healthy drive. Do you think ddrutility will handle special characters like space and newline without problems, or will it continue if it has a problem with some random file for an unknown reason? And will the script you supplied continue upon a file error and skip the problem file? I wish I would've saved what the errors said. – DChronosX Apr 16 '17 at 01:32
  • Wait, I did, in a log. "Value too large for defined data type." That was one of the errors, anyway. It happens on the same file no matter what, even scanning just that file or directory by itself with the original grep script I was using. No idea why that file causes a problem, but it quits the grep search instead of just continuing. – DChronosX Apr 16 '17 at 01:38
  • I don't know, but if that file gives problems, why not copy/move/delete it? ddrutility shouldn't have problems with spaces, and XP was a lot more strict about filenames than Linux, so it should be OK. I know find will continue if there are permission errors reading, and I'm pretty sure grep will continue too, but sudo should handle those, and fsck if the filesystem's damaged; other errors would probably be read errors, again from a bad drive. – Xen2050 Apr 16 '17 at 01:50
  • I thought about that, but didn't want it to happen again since it takes about 3 hours before it gets to that file. I think I'll try out ddrutility's ntfsfindbad option. I'd like to find out if I can run it on the cloned image file somehow from this laptop I'm using (Windows 10, ugh) instead of on the old computer through Parted Magic; that should speed things up. Thanks for the information and help, though! If you have any other suggestions, let me know! Also, I would have used the "chat" option, but my rep of 1 won't allow me to. – DChronosX Apr 16 '17 at 02:23
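
Following up on the progress-logging and problem-file concerns in the comments above, here is a hedged sketch of a per-file search loop: it prints each path as it is searched, logs errors instead of aborting, and still copes with spaces and newlines in names (the mount point, marker string, and output names are placeholders):

# Search file by file so progress is visible and one failing file
# does not end the whole run; grep errors go to a log instead.
find /mnt/x -type f -print0 |
while IFS= read -r -d '' f; do
    printf 'searching: %s\n' "$f" >&2
    grep -q --text "DEADBEEF" -- "$f" 2>>grep-errors.log && printf '%s\n' "$f"
done > matches.txt

As an aside, "Value too large for defined data type" is the EOVERFLOW message, which often means a 32-bit tool without large-file support ran into a file of 2 GiB or more.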