
I'm looking for a tool that can give me a visual representation of my ext4 fragmentation. Something similar to how Defraggler, Puran Defrag, and many others (UltraDefrag being the best) display your disk... (most good UIs display the files in the block you're hovering your mouse over)

Is there anything similar for Linux?

I want to watch my disk and see just how "unneeded" defragmentation really is.

I don't want to use e4defrag, because I'm not sure it can show me what exactly it's doing to my disk.

EDIT (2021): it would seem a similar, less popular, question has been asked 3 years before this question with suggested tools: https://unix.stackexchange.com/questions/30743/is-there-a-tool-to-visualize-a-filesystem-allocation-map-on-linux

Tcll

  • There is absolutely no need to defrag a Linux machine. – ZaxLofful Sep 16 '15 at 21:12
  • That is nonsense. The "need" for defragmentation depends on the FS type and usage pattern. In some cases defragmentation makes a huge difference. (I guess this is one of the Linux myths.) – David Balažic Sep 16 '15 at 22:05
  • @DavidBalažic - reference please? I see no need to defragment a disk/file system with 0.2% fragmentation. Please provide benchmarks showing that defragmenting ext2/3/4, XFS, or any Linux-native file system significantly affects performance. – Panther Sep 17 '15 at 15:19
  • @bodhi.zazen --> "depends on the FS type and usage pattern". In some cases it has a big effect, in others none. If that 0.2% is a 2 GB database file that is in heavy use, then it will have a big effect. If it is a few small, rarely used files, then of course not. – David Balažic Sep 17 '15 at 16:03
  • @DavidBalažic - Define "big effect" and post benchmarks. The I/O of the disk is going to be hundreds of times more limiting than a small amount of fragmentation; it takes a long time to write a 2 GB database to disk, and writing part of a 2 GB file is not going to be affected by 0.2% fragmentation - you still have to re-write the data to disk. Any modern database that is in heavy use will be in RAM, and if you are writing to disk the limiting factor is RAM, not fragmentation. – Panther Sep 17 '15 at 16:31
  • Running servers and desktops on Ubuntu at the university, around sixty of them, I have never needed defragmenting, and only on SSDs do I have TRIM enabled via cron. – Arup Roy Chowdhury Oct 22 '15 at 01:19
  • Defraggler people: don't try to run it through Wine - you can "run" it, but you will only see the virtual Wine disks, and usually read-only :) – jave.web Apr 17 '20 at 11:43
  • @ArupRoyChowdhury You shouldn't defrag SSDs anyway. SMH – Ken Sharp Dec 20 '23 at 14:06

6 Answers


The question is not if there is fragmentation. All file systems have some fragmentation.

The question is if the fragmentation is enough to affect performance.

On Linux file systems, fragmentation is typically less than 5%, often 1-2%, unless the disk is ~99% full. On a nearly full disk you can see significant fragmentation, but in that case the real problem is the full disk.

$ sudo fsck.ext2 -fn /dev/sda1
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Ubuntu_Rescue: 291595/1222992 files (0.2% non-contiguous), 1927790/4882432 blocks

So yes, there is 0.2% fragmentation, but this is far below the ~85% threshold said to affect performance.
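For scripting, the non-contiguous percentage can be pulled out of that e2fsck summary line with sed. A minimal sketch; the summary line from the output above is hard-coded here as a literal rather than captured from a live fsck run:

```shell
# Extract the non-contiguous percentage from an e2fsck summary line.
# Hard-coded for illustration; in practice you would capture the line
# from `fsck.ext2 -fn` output.
summary='Ubuntu_Rescue: 291595/1222992 files (0.2% non-contiguous), 1927790/4882432 blocks'
frag=$(printf '%s\n' "$summary" | sed -n 's/.*(\([0-9.]*\)% non-contiguous).*/\1/p')
echo "fragmentation: ${frag}%"
```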

See the blog post "Why doesn't Linux need defragmenting?".

On Windows, it is not uncommon to see fragmentation rates of 50% or higher (I have seen 200%-plus). Thus Windows needs defragmentation tools.

On Windows, the advice is to defragment at thresholds of about 85%.


So, bottom line, defragmentation is not a large enough problem on Linux to affect performance, so there are no significant defragmentation tools and you are sort of wasting your time worrying about it.

Dɑvïd
Panther
  • well I've got time to waste to clear up another issue regarding this, so thanks; also, it's always good to be a skeptic ;) – Tcll Sep 16 '15 at 15:55
  • I'm intrigued: what is meant by fragmentation of > 100%? Files are fragmented multiple times? – Doddy Sep 17 '15 at 11:46
  • @doddy apparently so, or the fragments are fragmented. I am not familiar enough with either Windows or NTFS to know how the fragmentation is measured. – Panther Sep 17 '15 at 15:15
  • FWIW - Just today (been running this system with upgrades from Fedora 18 -> several updates -> now Fedora 22): fsck.ext2 -fn /dev/mapper/fedora-root gives /dev/mapper/fedora-root: 525176/3276800 files (0.6% non-contiguous) and /dev/mapper/fedora-home: 149180/11149312 files (1.1% non-contiguous), so not much fragmentation over years and several upgrades. – Panther Oct 09 '15 at 21:24
  • I also want to add to the discussion that of course no system lasts forever, and whether HDD or SSD, every write access is wear with possible loss. A defragmentation process is maybe the most challenging process for an HDD, with all parts moving maximally (reading and writing at different sectors). So if performance is not heavily affected, please do NOT force a defrag. – zulu34sx Oct 21 '15 at 23:59
  • All these people assuming you only work with ext partitions on Linux... – jave.web Apr 14 '20 at 16:01
  • @jave.web in most cases you should be if you use Linux; ZFS is only a solution if you want software RAID, or if you use BSD with Linux, but you should NEVER use NTFS, exFAT, or other Windows-based FSs on Linux without a good reason; there is also the niche BTRFS for ReactOS or negligible performance gains. – Tcll Apr 17 '20 at 14:25
  • I've removed this as the accepted answer (yes, I know it's years later, but I've grown) because it doesn't actually answer the question; CLI is not GUI, and yes there are needs, like for example NTFS on Linux. – Tcll Apr 17 '20 at 14:32
  • @Tcll I totally agree you should use ext or similarly behaving FSs for Linux itself, and it doesn't make much sense why it wouldn't be so - that however does not in any way affect the real other disks & partitions you have to deal with which originate from e.g. Windows and other OSs, or have heavy I/O at nearly full capacity. I won't go further here, but I think we are on the same page :) – jave.web Apr 20 '20 at 11:25

Let's keep it simple...

1) If you use EXT4, there is no need to defrag unless your disk is ~90% full and under heavy I/O (delete, read, write).

2) If you find yourself with a ~90% full disk that is heavily fragmented, then your problem is (IMHO) insufficient disk space and not fragmentation. Get a larger disk!

3) If you can't get a larger disk for any valid reason, then simply copy the whole lot (or large chunks of it) to another disk, then copy it back. EXT4 writes it back contiguously, eliminating fragmentation. This can be scheduled as a cron.daily job, or via Gnome Scheduler for converts coming from Windows.
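The "copy away and back" pattern from point 3 can be sketched as below. To keep the sketch self-contained it runs on throwaway directories and uses plain cp -a; on a real system the two paths would be your nearly-full ext4 mount point and a directory on a second disk, and rsync -aHAX would be a more robust copy command:

```shell
# Demonstrate the "copy away and back" pattern on throwaway directories.
# Both paths are stand-ins for real mount points.
src=$(mktemp -d)
scratch=$(mktemp -d)
echo 'some data' > "$src/file.txt"

cp -a "$src/." "$scratch/"   # 1. copy everything away
rm -rf "${src:?}"/*          # 2. empty the original filesystem
cp -a "$scratch/." "$src/"   # 3. copy it back; ext4 re-allocates the
                             #    blocks contiguously on the rewrite
```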

BEST FIX: if you have the problem from point 2 above, get a larger disk!

DanglingPointer
  • I would still like to watch the fragmentation and take care of things when it does happen (not through the terminal like most spud-head developers prefer) ;) – Tcll Sep 17 '15 at 13:48
  • hence why I said use Gnome Scheduler for the "copy away and back" pattern – DanglingPointer Sep 17 '15 at 23:08
  • @DanglingPointer what about those NTFSs though... hmm? :) or heavy I/O? hmm? :) people who simply don't or can't afford "another disk"? hmm? :) – jave.web Apr 17 '20 at 11:47
  • @jave.web use btrfs with the autodefrag mount option in fstab and never worry about defrag again. – DanglingPointer Oct 31 '21 at 04:45
  • I can't eat apples if I only have oranges. People have NTFSs from Windows with no place nor time nor money to pour them through to another disk. – jave.web Nov 01 '21 at 17:08
  • Then perhaps stick to your Windows oranges, and good luck saving time with it on all the problems that come with Windows and NTFS that break, fail, slow down, or get hacked! See how much time you save from that when it gets crypto-locked, or you get fragmented from your inferior filesystem for lack of guts to take the challenge on how to move to a better Linux-native filesystem. This is "AskUbuntu", not "AskWindows" or "AskNTFS". – DanglingPointer Nov 02 '21 at 15:33
  • I offered you a solution that 100% solves your almost-imaginary problem (fragmentation is just not that common or easy with Linux-native file systems), and all you can express is anxiety, lack of drive (laziness), or lack of resources (HDDs and SSDs are now dirt cheap per GB). So unfortunately, I don't think any answer on this Linux distro Q&A website will help you. – DanglingPointer Nov 02 '21 at 15:37

Fragview does what I needed and more.

[Screenshot of Fragview]

As you can see if I click on an area of the map it shows which files are in that area, and how fragmented they are.

I can now use btrfs filesystem defrag should I wish to do so.

Running on Ubuntu 18.04 amd64 with a BTRFS filesystem.

Ken Sharp

  • with some further research, I found this, which seems to be older than this question: https://unix.stackexchange.com/questions/30743/is-there-a-tool-to-visualize-a-filesystem-allocation-map-on-linux – Tcll Oct 29 '21 at 13:59
  • That's btrfs and not the OP's EXT4 question. Also, for btrfs you should consider autodefrag as a mount option in fstab instead. – DanglingPointer Oct 31 '21 at 04:43
  • The filesystem is irrelevant - the app does what is asked. And I'm aware of autodefrag, which definitely doesn't answer the OP's question. – Ken Sharp Nov 01 '21 at 14:27

There is no need for defragmentation on Linux systems, which is why there are not many defrag tools available.

cl-netbox
    I'm not convinced, show me the fragmentation of a full ext4 HDD after 2 weeks of use with constant deletion and addition of files, there's bound to be fragmentation. – Tcll Sep 16 '15 at 15:21
  • btw, no h8, I've killed about 15 NTFS HDDs back when I ran WinXP64 with the work I do which involves just that, and with EXT4, the drives don't even get hot ;) but I'd bet you and all linux users that defragmentation still needs to be performed regardless, the only exception being it's already automated. – Tcll Sep 16 '15 at 15:30
    There are many explanations online - one of them explains it quite precisely: http://www.howtogeek.com/115229/htg-explains-why-linux-doesnt-need-defragmenting/ – cl-netbox Sep 16 '15 at 15:31
  • as I suspected, it's dynamic, which is why I mentioned the fragmentation of a full HDD... however, defragmentation is automated whenever it occurs, so it just reduces the need for manual defragmentation. but that doesn't mean it obliterates it entirely, there's still a small percent chance of it happening. – Tcll Sep 16 '15 at 15:45
    I posted the documentation you requested , 0.2 % fragmentation is trivial and not enough to affect performance. – Panther Sep 16 '15 at 15:49
    To me, the obvious next question is then: why is manual defragmentation still required on Windows after all these years? But that's clearly off-topic for this site. ;-) – Oliphaunt Sep 16 '15 at 20:06
  • The reason is that the old format Windows was using, NTFS, is flawed in several ways. EXT4 does not need defragmenting because the system itself manages the fragmentation by using journaling. The newest FS that Windows is using for Windows 10 and Server 2012 R2 has very similar features that make manual defragging unnecessary. – ZaxLofful Sep 16 '15 at 21:15
    Two sentences, both wrong. One defrag tool is mentioned right in the question. Another is defragfs. Some related articles: Defragmenting-Linux (linux-magazine.com), How to defrag your Linux system(howtoforge). Also see this question: http://unix.stackexchange.com/questions/75652/what-doesnt-need-defragmentation-linux-or-the-ext2-ext3-fs – David Balažic Sep 16 '15 at 22:16
  • Some more linux defrag tools: xfs_fsr, defrag (kolivas), ShAkE. – David Balažic Sep 16 '15 at 22:23
  • @Tcll, it's five years rather than two weeks, but /dev/mapper/data1-portage: 176007/2621440 files (12.3% non-contiguous) – Mark Sep 17 '15 at 01:14
  • @Mark: time doesn't matter, though I'd asked for a display of the fragmentation, not a computed estimate in text... which sectors are those files in, and how badly are they fragmented? (a GUI can easily display this much better than text)... show me an image of the fragmentation and then I'll be happy. ;) – Tcll Sep 17 '15 at 01:41
  • also, if I assume correctly, with the fact that the FS is dynamic and defragmentation happens automatically, then that means the removal of files from a full HDD with a minor amount of fragmentation should be fixed right up when there's enough space available... I'm convinced alright ;) – Tcll Sep 17 '15 at 01:44
  • I've had disk fragmentation problems on ext3/ext4 before (with almost full partitions), so this answer shouldn't be the top answer!!!! I don't remember which tools I had to use though, or whether they were any help... – Shautieh Sep 17 '15 at 16:09
  • fixed the top answer, since yes, defragmentation IS necessary depending on the FS used, although with EXT4 only because it auto-defrags; given that fragmentation still occurs and can be significant enough to cause corruption, it's still wise to remove files or defrag anyway when it gets to that level. (but the problem is knowing when it's at that level, since there's no GUI available to show you; spud-heads prefer the terminal) – Tcll Sep 17 '15 at 17:14
  • sorry, downvote, I usually don't down-vote but this is intentionally not an answer. To see answer that is still not a real answer but I did not downvote it see @DanglingPointer answer here on this page – jave.web Apr 17 '20 at 11:47
  • I actually have to update, now that I've grown up and am much more aware, @DavidBalažic the tools you reference are CLI, not GUI as was asked in the original question... yes there are tools, but there are no GUI tools. you're kinda screwed if you want informative ease of use. – Tcll Apr 17 '20 at 14:41

People seem to forget that a good modern defragger is not just a defragmenter but an optimiser as well. Different areas of a hard disk platter read at different speeds: the closer to the centre of the disk, the slower the read. A modern defragger will analyse file usage and place frequently read files towards the outer edge of the platter, while less frequently used files are moved towards the centre. Some even allow files flagged as archives to be pulled as close to the centre as possible. I have seen large files on my Linux system broken into thousands of segments.

I run defrags on my server every month. My temp storage drive, where I download my torrents, is really bad:

$ xfs_db -r /dev/md6 -c frag

actual 462546, ideal 636, fragmentation factor 99.86%
Note, this number is largely meaningless.
Files on this filesystem average 727.27 extents per file
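The "largely meaningless" note makes sense once you see how the factor is derived: it is (actual - ideal) * 100 / actual, where "ideal" is roughly one extent per file, so any filesystem averaging many extents per file sits near 100% regardless of real-world impact. The arithmetic behind the output above, sketched in awk:

```shell
# Reproduce the xfs_db numbers above from its reported extent counts.
awk 'BEGIN {
    actual = 462546   # extents actually allocated
    ideal  = 636      # extents needed if every file were contiguous
    printf "fragmentation factor %.2f%%\n", (actual - ideal) * 100 / actual
    printf "average %.2f extents per file\n", actual / ideal
}'
```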

People who claim Linux file systems don't need defragging are just regurgitating what someone else said and have never actually checked.

John
  • sadly this is terminal-based, so not a solution unfortunately, but points given because it appears extremely versatile and well worth using ;) – Tcll Mar 15 '21 at 03:05
  • also, I think you got a bit confused, the end of the disk is the most dangerous to the heads (so they say) so less frequently used files are stored at the end, while more frequently used files are stored near the center, where there's the least wear ;) – Tcll Mar 15 '21 at 03:07
  • For the partition where you torrents are, use btrfs with autodefrag mount option in fstab then forget about the problem! Alternatively run your torrent server/client in a VBOX VM and set your incomplete and complete folders as host-shared folders in the VBOX guest setup. When the torrent finishes, the VM will cut the file from the incomplete shared folder and "flush" it to the complete folder thus re-writing all the blocks contiguously with no fragmentation. This will happen even if it is writing it back to the exact same disk as VBOX forces the re-flush of the file-output-stream. – DanglingPointer Oct 31 '21 at 04:54

Currently there are no GUI utilities that offer ease of use.

If you use NTFS, exFAT, or other filesystems without any auto-defrag solution on Linux, you are stuck with cumbersome, non-intuitive CLI tools.

The current solution is to use EXT4 or ZFS, which automatically do the brunt of the work to keep your HDDs fast. Just avoid using more than 90% of your drive.

If you use an SSD, fragmentation doesn't matter much unless a file somehow gets so fragmented that the indirection starts degrading performance.

EXT4 can distinguish an HDD from an SSD and won't kill your write cycles with auto-defragmentation. Just make sure you don't use swap on an SSD ;)

Tcll