
My computer (which is running Ubuntu Server 16.04) is currently using 13.4 GB out of 15.4 GB of RAM (according to htop) but I'm struggling to understand what is using that memory.

free -m reports:

              total        used        free      shared  buff/cache   available
Mem:          15733       13781        1083          22         868        1592
Swap:         71524         430       71094
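(For reference, in modern procps free computes used as roughly total - free - buff/cache. A quick sanity check on the figures above — the numbers are piped in by hand so the arithmetic is reproducible:)

```shell
# free's "used" is essentially total - free - buff/cache; with the
# figures above: 15733 - 1083 - 868 (rounding accounts for the 1 MB gap).
echo '15733 1083 868' | awk '{printf "used ~ %d MB\n", $1 - $2 - $3}'
# → used ~ 13782 MB
```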

top shows the highest-memory-using process taking 6.8% of memory and the next largest taking 0.4%.

If I use ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n, it shows the (same) highest-memory-using process as taking 1104 MB of RAM, which sounds about right compared to top.

If I sum all the values of every process reported by ps:

ps aux | awk '{sum=sum+$6}; END {print sum/1024 " MB"}'

it reports a total of 1.8 GB RAM used.

So ps reckons I'm using 1.8 GB RAM, but free and htop both reckon I'm using over 13 GB of RAM. The available column in the free output is too small to account for this difference.

What am I missing?
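(One thing worth checking: memory the kernel itself has allocated. Slab and similar kernel allocations belong to no process, so neither ps nor top will ever attribute them to anything. A quick way to see them on a standard Linux /proc layout:)

```shell
# Print kernel slab totals from /proc/meminfo (values there are in kB).
# SReclaimable can be freed under memory pressure; SUnreclaim cannot.
awk '/^(Slab|SReclaimable|SUnreclaim):/ {printf "%-14s %8.1f MB\n", $1, $2/1024}' /proc/meminfo
```

If Slab is large, slabtop will show which caches are responsible.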

Edit 2017-01-20 13:27 Z

/usr/bin/free -h reports:

              total        used        free      shared  buff/cache   available
Mem:            15G         13G        417M         22M        1.1G        1.2G
Swap:           69G        432M         69G

slabtop output:

$ sudo slabtop -s c -o | head -n 20
 Active / Total Objects (% used)    : 16552394 / 17903627 (92.5%)
 Active / Total Slabs (% used)      : 841391 / 841391 (100.0%)
 Active / Total Caches (% used)     : 109 / 155 (70.3%)
 Active / Total Size (% used)       : 9510904.12K / 9753117.86K (97.5%)
 Minimum / Average / Maximum Object : 0.01K / 0.54K / 18.56K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
1764956 1764890   0%    1.08K 120388       29   3852416K zio_cache
126780 126308   0%   16.00K  68205        2   2182560K zio_buf_16384
1797996 1797996 100%    0.85K 100920       18   1614720K dnode_t
1952240 1833842   0%    0.50K 122015       16    976120K kmalloc-512
 62255  61308   0%    8.00K  20096        4    643072K kmalloc-8192
1999648 1968319   0%    0.28K  71416       28    571328K dmu_buf_impl_t
1764892 1764892 100%    0.26K  56932       31    455456K sa_cache
2028978 1981994   0%    0.19K  96618       21    386472K dentry
 23113  23021   0%   12.00K  11557        2    369824K zio_buf_12288
694975 647514   0%    0.31K  27799       25    222392K bio-1
1660096 1592262   0%    0.12K  51878       32    207512K kmalloc-128
131376  91798   0%    1.00K   8211       16    131376K ecryptfs_inode_cache
 90888  89352   0%    1.05K   3035       30     97120K ext4_inode_cache

$ sudo slabtop -s c -o | tail -n +8 | awk '{sum=sum+$7}; END {print sum/1024 " MB"}'
11484.9 MB

$ sudo slabtop -s c -o | tail -n +8 | grep zio | awk '{sum=sum+$7}; END {print sum/1024 " MB"}'
6222.28 MB

So from what I can tell it's something to do with ZFS: the zio caches alone are taking over 6 GB of RAM, and roughly another 5 GB is used by the non-zio entries in the slabtop output.
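(If ZFS is the culprit, the ARC size can be read directly from the kstats the ZFS module exposes — on Linux usually /proc/spl/kstat/zfs/arcstats. A minimal sketch, run here against a sample line in the same format, since that file only exists where the ZFS module is loaded:)

```shell
# The arcstats "size" row holds the current ARC size in bytes (column 3).
# The sample input below stands in for /proc/spl/kstat/zfs/arcstats.
printf 'name type data\nsize 4 6509559808\n' |
awk '$1 == "size" {printf "ARC size: %.1f MB\n", $3/1048576}'
# → ARC size: 6208.0 MB
```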

DrAl

2 Answers


In my case, some memory was reserved for hugepages. Once that reserved memory is taken into account, the numbers add up.

controller-0:/home/wrsroot# grep -i huge /proc/meminfo
HugePages_Total:    1000
HugePages_Free:      488
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
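(The memory set aside can be computed from those two fields: the pool size is HugePages_Total × Hugepagesize, and even the "Free" hugepages are unavailable to ordinary allocations, which is why process-level tools never see them. A small sketch using the values shown above, piped in by hand so the arithmetic is reproducible:)

```shell
# Hugepage pool = HugePages_Total * Hugepagesize (kB); on a live system
# read /proc/meminfo instead of the sample values piped in here.
printf 'HugePages_Total:    1000\nHugepagesize:       2048 kB\n' |
awk '/^HugePages_Total:/ {n=$2} /^Hugepagesize:/ {sz=$2}
     END {printf "Hugepage pool: %.0f MB\n", n*sz/1024}'
# → Hugepage pool: 2000 MB
```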
  • That's the size of your hugepages, not the amount of memory they're taking up... I think – Kevin Jun 18 '22 at 07:12

I don't have enough reputation to comment, hence posting this as an answer.

I had a similar problem, and the root cause was undervolting the processor too aggressively. My PC is a Lenovo T440P laptop with an Intel(R) Core(TM) i7-4810MQ CPU running Debian 11. I use only the ext4 and gocryptfs filesystems.

smem -tw showed unusually high kernel dynamic memory usage.
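(For context, smem -tw breaks memory down into firmware/hardware, kernel, userspace, and free areas, which makes kernel-owned memory visible in a way ps cannot. A sketch of pulling out the kernel dynamic figure — the sample below only mimics smem's column layout with illustrative numbers; on a real system just run smem -tw:)

```shell
# Extract the "kernel dynamic memory" Used column (kB) from smem -tw-style
# output. The sample figures below are illustrative, not real measurements.
printf '%s\n' \
  'Area                           Used      Cache   Noncache' \
  'firmware/hardware                 0          0          0' \
  'kernel image                      0          0          0' \
  'kernel dynamic memory       9498744    7837036    1661708' \
  'userspace memory            2612040     509056    2102984' \
  'free memory                  882912     882912          0' |
awk '/^kernel dynamic memory/ {printf "kernel dynamic: %.1f MB\n", $4/1024}'
# → kernel dynamic: 9276.1 MB
```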