
Using Ubuntu 12.04, and I recently got a message that disk space is running out. I ran the Disk Usage Analyzer, which froze. After some research, I found a directory in ~ called "9fybsyiikg" which is 1065357312 bytes (about 1 GB).

I tried opening that folder in the file manager, and nothing happened. I tried running ls inside it, and nothing happened.

And then I tried rm -rf 9fybsyiikg, and nothing happened.

Any ideas what this directory may be, and how to get rid of it?

Joe Z
  • AFAIK, there's a limit on the number of files rm can operate on; if you're above it, rm will display an error. BTW, does ls -l | more list it? – AzkerM Mar 17 '14 at 16:22
  • This sounds like something is seriously messing up your system... – w4etwetewtwet Mar 17 '14 at 16:22
  • If you ever used BleachBit, see the duplicate. Otherwise, take into account that a directory with a million files could easily need an hour to be deleted, and extrapolate... – Rmano Mar 17 '14 at 16:25
  • Also make sure you know what that file is, because that command is very dangerous: it will delete everything in its path, including your system. – Wild Man Mar 17 '14 at 16:27
  • @AzkerM, rm didn't display an error. Simply, nothing happened. @Rmano, maybe. @WildMan, that is a possibility. What do you suggest? – Joe Z Mar 17 '14 at 16:28
  • Well, rm -r will not follow symlinks (and you shouldn't have hardlinks to directories), so in principle it will delete only the things under the strange dir. BTW, it's generally better to omit the -f... – Rmano Mar 17 '14 at 16:29
  • 2
    @AzkerM the limit you are thinking of is ARG_MAX (see here) and it shouldn't apply here unless the OP is trying rm ~/9fybsyiikg/* – terdon Mar 17 '14 at 16:30
  • I will try the rm/patience thing and let you know. First backing up the important stuff. – Joe Z Mar 17 '14 at 16:43
  • 1
    @Rmano why are symlinks relevant? The OP is attempting to rm -r the directory, everything in it (including symlinks, but not their targets) will be deleted. – terdon Mar 17 '14 at 16:44
  • It was in answer to @WildMan --- forgot the @. – Rmano Mar 17 '14 at 16:54
  • @terdon understood. I had similar issues before, though not from something generating files like this. For convenience, I archived it on my blog as How-To: “rm” command to empty a directory with huge file list. I hope it helps in some instances. – AzkerM Mar 17 '14 at 17:23
  • 2
    @AzkerM no need for xargs there! Especially not that way, it will break on any weird file names. On GNU systems, use find -delete, or -exec rm {} +. If you really want to use xargs do it like this: find -print0 | xargs -0 -I {} rm -rf. – terdon Mar 17 '14 at 17:26
  • Waiting until important things are backed up, then will try rm with plenty of time. – Joe Z Mar 17 '14 at 20:38
  • Follow-up: Today, after deleting the folder, again Ubuntu complained about running out of space. – Joe Z Mar 19 '14 at 17:02
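
For reference, here is a minimal sketch of the ARG_MAX-safe deletion patterns mentioned in the comments above. It assumes GNU find and xargs, and that ~/9fybsyiikg is an ordinary directory; the find variants are alternatives, so pick one:

     # ARG_MAX only matters when the shell itself builds the argument list,
     # e.g. rm ~/9fybsyiikg/* ; the limit can be checked with:
     getconf ARG_MAX

     # None of these build a huge argument list (pick one):
     find ~/9fybsyiikg -mindepth 1 -delete                                # find unlinks each entry itself
     find ~/9fybsyiikg -mindepth 1 -maxdepth 1 -exec rm -rf {} +          # find batches arguments for rm
     find ~/9fybsyiikg -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -rf  # xargs batches safely

     # Once the directory is empty, remove it:
     rmdir ~/9fybsyiikg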

1 Answer


The rm command will take some time; if you're not getting any errors, just let it run. If you do get errors, try some of these solutions:

  1. find

     find ~/ -maxdepth 1 -name 9fybsyiikg -exec rm -rf {} +
    
  2. rm and wait; this might take a while (yes, I know you tried it, but it might help others)

     rm -rf ~/9fybsyiikg
    
  3. You might just have too many files; try this:

     find ~/9fybsyiikg -mindepth 1 -delete && rmdir ~/9fybsyiikg
    
  4. If all else fails, use some Perl magic:

     perl -e 'use File::Path; rmtree "$ARGV[0]"' ~/9fybsyiikg
    

    Explanation

    • -e : run the script passed on the command line

    • rmtree : a function from the File::Path module that deletes whole directory trees
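
As a side note, if the rmtree one-liner fails silently on some entries, File::Path also provides remove_tree, which can report what it could not delete. This is only a sketch of that variant, assuming a Perl new enough to ship File::Path 2.x (any Perl from 5.10 onward, including the one on 12.04):

     perl -e '
         use File::Path qw(remove_tree);
         # Collect per-entry errors instead of dying or staying silent
         remove_tree($ARGV[0], { error => \my $err });
         printf "could not remove %d item(s)\n", scalar @$err if @$err;
     ' ~/9fybsyiikg

Like rmtree, remove_tree deletes the directory itself as well, so there is nothing left to rmdir afterwards.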

terdon
  • If I understood correctly and the problem is the BleachBit one (as it seems), that program creates a file in every bit of available space on the disk to overwrite it and then deletes it, for security reasons. It probably crashed in the middle and left behind millions of tiny files mapped all over the disk in that directory... so it is basically an exercise in patience ;-). It is almost impossible even to ls or find it, given the size... – Rmano Mar 17 '14 at 16:34
  • @terdon why would perl or find be faster than rm? This guy is pretty convincing: http://askubuntu.com/a/114970/334825... – Ohad Schneider Feb 24 '17 at 17:16
  • 1
    @OhadSchneider I was thinking of cases where rm was failing (I guess?). To tell you the truth, 3 years down the line, that comment doesn't make much sense to me either. I can't honestly say I remember what I was thinking of and it does seem like I was just wrong, so I'll delete. Thanks. – terdon Feb 24 '17 at 17:36
  • @terdon I can totally relate, when I read some of my posts from a few years back I think "who is that idiot" :) IMHO you should just delete options (3) and (4) above from your post, as they seem to suggest those might work in cases where rm -rf doesn't (specifically when a folder contains too many files) and I did not encounter any evidence to support that (including recent personal experience, unfortunately). – Ohad Schneider Feb 25 '17 at 13:21
  • Ah, no I wasn't that wrong ;) Both 3 and 4 nicely avoid the ARG_MAX problem. Think of them as alternatives to rm foo/*, not rm -rf foo. – terdon Feb 25 '17 at 13:28
  • @terdon sorry for the late response but AFAIK (2) doesn't have the ARG_MAX problem, as the latter comes from shell file name expansion (typically of "*", like you wrote yourself here: http://askubuntu.com/questions/435525/rm-rf-hangs-on-large-directory/435532?noredirect=1#comment567021_435525). – Ohad Schneider Mar 21 '17 at 22:45
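
To make the ARG_MAX distinction in the last few comments concrete, here is a hypothetical demonstration (the /tmp/argmax-demo path and the file count are invented for illustration). The glob form fails because the shell expands * into one enormous argument list before rm ever runs, while rm -rf receives a single argument and walks the tree itself:

     mkdir /tmp/argmax-demo && cd /tmp/argmax-demo
     seq 1 500000 | xargs touch          # xargs batches the names, so touch never overflows
     rm ./*                              # typically fails with "Argument list too long"
     cd / && rm -rf /tmp/argmax-demo     # succeeds: rm gets a single argument and recurses itself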