As a result of semiregular backups and a hard drive failure, I'm the proud owner of many similar but far from identical files and directories that haven't been curated carefully in years. There are many duplicates but also likely many damaged files.
The graphical merge option takes forever and, mid-process, asks one to choose between two copies of a file without giving any basis for choosing between them.
What I would like to do, instead, is to merge various directories in such a way that one copy of an item is retained if all copies are identical, but otherwise the conflicting copies are both preserved, with distinct names, in the target folder. There's no way to do this file-by-file in my natural lifespan, but I'm confident there's a tool that can analyze if files are the same apart from "last modified" or "created" metadata.
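To make the desired behavior concrete, here's a rough Python sketch of what I have in mind. The `merge_dirs` helper and the `.conflict` naming are my own invention, not an existing tool; the point is that comparison is by file content (`filecmp.cmp` with `shallow=False`), so "last modified" and "created" metadata are ignored:

```python
import filecmp
import shutil
from pathlib import Path

def merge_dirs(src: Path, dst: Path) -> None:
    """Merge src into dst: keep one copy when contents match,
    keep both (under distinct names) when they differ.
    Comparison reads file contents, so timestamps don't matter."""
    for item in src.rglob("*"):
        rel = item.relative_to(src)
        target = dst / rel
        if item.is_dir():
            target.mkdir(parents=True, exist_ok=True)
            continue
        if not target.exists():
            # No counterpart in dst: just copy it over
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)
        elif not filecmp.cmp(item, target, shallow=False):
            # Contents differ: preserve both, renaming the incoming copy
            conflict = target.with_name(target.stem + ".conflict" + target.suffix)
            n = 1
            while conflict.exists():
                n += 1
                conflict = target.with_name(
                    f"{target.stem}.conflict{n}{target.suffix}")
            shutil.copy2(item, conflict)
        # else: contents identical, leave the existing copy alone
```

This is only a sketch of the semantics I'm after; an existing, well-tested tool would obviously be preferable.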
I can't imagine no one has been in my situation before, but the similar questions I find seem to want something slightly different. The answers to those questions tend to involve rsync, so I suspect a correct answer to mine might involve rsync as well, but I don't know exactly how to implement what I'm asking or just how thoroughly rsync checks files before determining they're identical.
`hardlink` from the `hardlink` package. It hard links identical files, and leaves non-identical files alone. – waltinator Jun 15 '18 at 19:27