I have two SSDs in my system and three filesystems:
$ lsblk | grep -v '^loop'
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 465.8G  0 disk /home
sdb      8:16   0 238.5G  0 disk
├─sdb1   8:17   0   512M  0 part /boot/efi
└─sdb2   8:18   0   238G  0 part /var/snap/firefox/common/host-hunspell
                                 /
and their utilization is like this:
$ df -BM / /home /boot/efi
Filesystem     1M-blocks    Used Available Use% Mounted on
/dev/sdb2        238777M  34465M   192112M  16% /
/dev/sda         468357M 203036M   241459M  46% /home
/dev/sdb1           511M      7M      505M   2% /boot/efi
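(For later comparison, those Available figures come to roughly 187.6 GiB for /, 235.8 GiB for /home, and 505 MiB for /boot/efi, since 192112 MiB / 1024 ≈ 187.6 GiB and 241459 MiB / 1024 ≈ 235.8 GiB.)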
The fstrim service runs once a week, but reports some "unnatural" values:
$ journalctl | grep "fstrim.*/home"
Oct 23 12:42:02 x fstrim[23187]: /home: 249.6 GiB (267956338688 bytes) trimmed on /dev/sda
Oct 30 19:19:38 x fstrim[22436]: /home: 248.9 GiB (267243692032 bytes) trimmed on /dev/sda
Nov 06 13:35:55 x fstrim[31818]: /home: 243.7 GiB (261722529792 bytes) trimmed on /dev/sda
Nov 13 11:53:32 x fstrim[11380]: /home: 242.9 GiB (260790439936 bytes) trimmed on /dev/sda
Nov 20 11:36:39 x fstrim[8200]: /home: 257.8 GiB (276775620608 bytes) trimmed on /dev/sda
$ journalctl | grep "fstrim.*/:"
Oct 23 12:42:02 x fstrim[23187]: /: 197.1 GiB (211671953408 bytes) trimmed on /dev/sdb2
Oct 30 19:19:38 x fstrim[22436]: /: 197.5 GiB (212089090048 bytes) trimmed on /dev/sdb2
Nov 06 13:35:55 x fstrim[31818]: /: 197.5 GiB (212091011072 bytes) trimmed on /dev/sdb2
Nov 13 11:53:32 x fstrim[11380]: /: 198.9 GiB (213588897792 bytes) trimmed on /dev/sdb2
Nov 20 11:36:39 x fstrim[8200]: /: 199.1 GiB (213827371008 bytes) trimmed on /dev/sdb2
$ journalctl | grep "fstrim.*/efi:"
Oct 23 12:42:02 x fstrim[23187]: /boot/efi: 504.9 MiB (529436672 bytes) trimmed on /dev/sdb1
Oct 30 19:19:38 x fstrim[22436]: /boot/efi: 504.9 MiB (529436672 bytes) trimmed on /dev/sdb1
Nov 06 13:35:55 x fstrim[31818]: /boot/efi: 504.9 MiB (529436672 bytes) trimmed on /dev/sdb1
Nov 13 11:53:32 x fstrim[11380]: /boot/efi: 504.9 MiB (529424384 bytes) trimmed on /dev/sdb1
Nov 20 11:36:39 x fstrim[8200]: /boot/efi: 504.9 MiB (529424384 bytes) trimmed on /dev/sdb1
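Putting the last journal figure next to the current free space makes the pattern easier to see. Here is a rough way to do that for /home (a sketch assuming GNU df and journal messages formatted exactly as above, where the byte count is the fourth field of the message):
$ journalctl -o cat -u fstrim.service | grep '^/home:' | tail -n 1 | awk '{gsub(/\(/, "", $4); print $4}'
$ df -B1 --output=avail /home | tail -n 1
Every week, the trimmed size is close to the free space of the filesystem, not to any plausible amount of data I freed that week.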
My weekly usage is not uniform. For example, I did a 23.04 → 23.10 upgrade on November 8th. Taking my usage and disk utilization into account, the numbers reported by fstrim do not make sense.
Can somebody help me interpret these numbers? Can somebody run similar checks on their system and see whether the same thing happens there too? I thought the reported numbers were supposed to be the number of blocks or bytes recovered by the trim operation, but it looks like they are the number of free blocks or bytes as known by the OS. How can I see the actual number of blocks recovered?
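One experiment that might settle this (a sketch; it needs root) is to trim the same filesystem twice in a row and compare the two reports:
$ sudo fstrim -v /home
$ sudo fstrim -v /home
If the figure counted only newly discarded blocks, the second run should report (nearly) 0 bytes; if it simply reports all the free space handed to the device, the two numbers should match.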
Note that this question may be related to "How to know if my NVMe SSD needs TRIM".