Ever since I upgraded to 15.10, fdisk -l reports 16 ram disks (/dev/ram0 ... /dev/ram15).
I'm a bit unsure what those are needed for. Is it safe to delete them? If not, how can I get rid of that fdisk output?

6 Answers
This is perfectly normal on Linux systems. The RAM disks are created in advance, in case they are ever needed. Each of them has a size of only 64 MiB, a very low value; if necessary, the size is increased automatically.
Why 16 RAM disks suddenly show up in Wily is hard to explain.
I have tested the default RAM disks on:
- CentOS 7 – No RAM disks
- Fedora 23 – No RAM disks
- Ubuntu 14.04 – No RAM disks
- Raspbian Jessie – 16 RAM disks (4MiB)
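To check how many RAM disks your own system creates and how large they are, a quick look at /dev and sysfs is enough (a minimal check; it assumes the brd driver is active so that /dev/ram0 exists):
ls /dev/ram* | wc -l        # number of RAM-disk device nodes
cat /sys/block/ram0/size    # size in 512-byte sectors (131072 sectors = 64 MiB)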
The RAM disk driver is a way to use main system memory as a block device. It is required for initrd, an initial filesystem used if you need to load modules in order to access the root filesystem (see Documentation/initrd.txt). It can also be used for a temporary filesystem for crypto work, since the contents are erased on reboot.
The RAM disk dynamically grows as more space is required. It does this by using RAM from the buffer cache. The driver marks the buffers it is using as dirty so that the VM subsystem does not try to reclaim them later.
The RAM disk supports up to 16 RAM disks by default, and can be reconfigured to support an unlimited number of RAM disks (at your own risk). Just change the configuration symbol BLK_DEV_RAM_COUNT in the Block drivers config menu and (re)build the kernel.
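You do not have to rebuild anything just to inspect the current settings; on most distributions the build configuration of the running kernel is available under /boot (the file name below is the usual convention, but it may differ on your system):
grep BLK_DEV_RAM /boot/config-$(uname -r)
# typically prints something like:
# CONFIG_BLK_DEV_RAM=y
# CONFIG_BLK_DEV_RAM_COUNT=16
# CONFIG_BLK_DEV_RAM_SIZE=65536   (size in KiB, i.e. 64 MiB)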

-
And this changed from 15.04 to 15.10? – RudiC Nov 28 '15 at 20:18
-
Note that they do not use up any memory if you never write anything to them. What seems to have changed is that the kernel previously did not list ramdisks in /proc/partitions, but now it does, so fdisk -l reports on them. – psusi Nov 28 '15 at 22:17
-
@RudiC: As you're a reputation 6 user: If this answer helped you, don't forget to click the grey ☑ at the left of this text, which means Yes, this answer is valid! ;-) – Fabby Dec 01 '15 at 14:20
-
Thanks for reminding me. The answers were enlightening and explained the situation, so thanks for those. Unfortunately, I still don't have a clue how to suppress that messy, distracting output. – RudiC Dec 02 '15 at 16:31
-
Just FYI - checked vanilla Debian Jessie, and it's the same result you got for Raspbian Jessie. – UpTheCreek Jan 10 '18 at 13:56
No idea why fdisk is suddenly reporting /dev/ram.
You can, however, tell fdisk to report only specific devices:
fdisk -l /dev/sd*
will list only the real drives.
Alternatively you could also use parted and lsblk.
Here is the parted output for one drive:
Model: ATA Samsung SSD 840 (scsi)
Disk /dev/sda: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End     Size    Type      File system  Flags
 1      2096kB  120GB   120GB   extended               boot
 7      2097kB  26.2GB  26.2GB  logical   ext4
 5      26.2GB  36.7GB  10.5GB  logical   ext4
 6      36.7GB  47.2GB  10.5GB  logical   ext4
Corresponding lsblk output
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 111.8G  0 disk
├─sda1     8:1    0     1K  0 part
├─sda5     8:5    0   9.8G  0 part /mnt/Links
├─sda6     8:6    0   9.8G  0 part
└─sda7     8:7    0  24.4G  0 part /
-
Note that on some boxes (ACPI versions?) physical devices are *hd, not *sd. – cat Dec 25 '15 at 13:30
-
And NVMe devices are /dev/nvme, and MMC devices (some Chromebooks use these) are /dev/mmcblk – 0xLogN Apr 15 '21 at 02:39
I know this thread is old, but I came across it only recently.
After installing Slackware 14.2 I got the same 16 RAM disks in the output of fdisk -l. I investigated a little further and found that in the 'util-linux' package, of which fdisk (among others) is part, the selection of what fdisk considers a block device changed substantially. In util-linux version 2.21 this decision is based on the reported disk geometry, while in the current version 2.27 the output of /proc/partitions is parsed.
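You can see where fdisk now gets its list: the RAM devices appear in /proc/partitions alongside the real drives (illustrative output; major number 1 belongs to the RAM-disk driver and the sizes are in 1 KiB blocks):
grep ram /proc/partitions
#    1        0      65536 ram0
#    1        1      65536 ram1
#    ...
#    1       15      65536 ram15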
According to my searches on the internet, the ramdisks have been in Linux since kernel 2.4; fdisk just did not show them. Since I am annoyed by a listing full of "disks" that are not real disks, I made a patch for fdisk:
diff -Nur util-linux-2.27.1_ori/disk-utils/fdisk-list.c util-linux-2.27.1_fdisk-no-ram-disks/disk-utils/fdisk-list.c
--- util-linux-2.27.1_ori/disk-utils/fdisk-list.c 2015-10-06 08:59:51.572589724 +0200
+++ util-linux-2.27.1_fdisk-no-ram-disks/disk-utils/fdisk-list.c 2016-08-16 15:55:14.840952091 +0200
@@ -312,6 +312,10 @@
if (devno <= 0)
continue;
+ /* dont list RAM disks */
+ if (strstr(line, "ram") && devno >= 256)
+ continue;
+
if (sysfs_devno_is_lvm_private(devno) ||
sysfs_devno_is_wholedisk(devno) <= 0)
continue;
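If you want to try the patch, the usual util-linux build steps apply; roughly (a sketch, assuming the patch is saved next to the source tree as fdisk-no-ram-disks.patch and you are building 2.27.1):
tar xf util-linux-2.27.1.tar.xz
cd util-linux-2.27.1
patch -p1 < ../fdisk-no-ram-disks.patch
./configure
make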
Maybe this helps some others...

The post by Johannes is correct. The RAM disks have been in the kernel for a long time; it is the behavior of fdisk that changed. Instead of patching fdisk, I wrote a simple perl script (5 lines of code, 6 comment lines) to handle the issue. I put it into ~/bin/fdisk-l, and now I just remember not to put a space between fdisk and -l.
#! /usr/bin/perl -w
# Run fdisk -l and filter out the 16 /dev/ram devices.
# Sun Mar 5 16:13:45 2017. Jeff Norden, jeff(at)math.tntech.edu
$_=`sudo fdisk -l`;  # include sudo so we don't have to be root
# weed out ram disks. The seemingly contradictory s (single) and m (multiline)
# flags allow "." to match "\n" and "^" to match at all beginning-of-lines.
s|^Disk /dev/ram.*?\n\n\n||smg;
# Do better than blank lines separating devices. Handle odd cases when there
# are more than two blank lines between devices or none at the end.
$hrule= '='x60 . "\n";
s/(\n\n\n+)|(\n+$)/\n$hrule/g;
print($hrule, $_);
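To use it, save the script as ~/bin/fdisk-l, make it executable, and run it (this assumes ~/bin is on your PATH):
chmod +x ~/bin/fdisk-l
fdisk-l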
As of April 2017, the ram disks no longer appear by default with the current Ubuntu kernel, so this issue is resolved. See: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1593293

I've been bugged by this for a few years now. Jeff Norden made a cool perl script, but it loses the colors of fdisk, so I added them back and named the result lsdisk: I added --color=always and removed the ^, which broke the regex because of the newly emitted color codes:
#! /usr/bin/perl -w
# Run fdisk -l and filter out the 16 /dev/ram devices.
# Sun Mar 5 16:13:45 2017. Jeff Norden, jeff(at)math.tntech.edu
$_=`sudo fdisk -l --color=always`;  # include sudo so we don't have to be root
# weed out ram disks. The seemingly contradictory s (single) and m (multiline)
# flags allow "." to match "\n" and "^" to match at all beginning-of-lines.
s|Disk /dev/ram.*?\n\n\n||smg;
# Do better than blank lines separating devices. Handle odd cases when there
# are more than two blank lines between devices or none at the end.
$hrule= '='x60 . "\n";
s/(\n\n\n+)|(\n+$)/\n$hrule/g;
print($hrule, $_);
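Installation is the same idea as above: save it as, say, ~/bin/lsdisk, make it executable, and run it (again assuming ~/bin is on your PATH):
chmod +x ~/bin/lsdisk
lsdisk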

This behaviour is governed by kernel options that you can only change by recompiling a custom kernel. You can change the size of the ram* devices with the kernel boot parameter ramdisk_size (set via GRUB), but not the count. That is of little use, because every ramdisk will grow to whatever size you set: if you want an 8 GB ramdisk (which I do, see below), you get 16 instances of 8 GB each, even if you have lots of memory. I don't know whether this is harmless if you don't use most of them, but I'm reluctant to brick my system if it isn't.
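For reference, setting that size is just a kernel command-line change; a sketch for a Debian/Ubuntu-style GRUB setup (the value is in KiB, so 8388608 gives an 8 GiB ramdisk):
# in /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ramdisk_size=8388608"
# then regenerate the GRUB config and reboot
sudo update-grub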
I want to use an 8 GB /dev/ram device mirrored with an 8 GB hard disk partition, for the specific purpose of putting a hot disk area on it. My application will automatically write the blocks out to regular storage based on the free space, so it doesn't matter that it's small.
With write-behind under mdadm, this should have the effect of making writes blazingly fast if they are bursty, with the HDD side of the mirror catching up when things are quieter to provide at least some data protection. I have used this setup with Solaris, but it doesn't seem to be possible with Linux as it comes out of the box.
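For what it's worth, mdadm does have the pieces for this kind of asymmetric mirror; a rough sketch, assuming /dev/ram0 is large enough and /dev/sdb1 is the 8 GB HDD partition (both device names are placeholders):
# RAID1 with the HDD side marked write-mostly; write-behind needs a write-intent bitmap
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --bitmap=internal --write-behind=4096 \
    /dev/ram0 --write-mostly /dev/sdb1
This builds the array, but as noted below, it will not reassemble cleanly at boot because the RAM-disk side is empty again after a reboot.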
Since RAM is orders of magnitude faster than SSD, this should be a win, but I can't try it. As others have noticed, if you build a RAID1 with tmpfs, it won't reassemble at boot because the step that initialises tmpfs is far too late in the boot process--at mountall. Your mds are well and truly built by then, so it fails, and you have to rebuild it manually.
OTOH /dev/ram* devices would be perfect for this--if you could configure them. They are the very first thing that gets set up, and ram0 is the initial / filesystem.

-
This is a good answer. Please remove the "enhancement request" however, as it is not appropriate for an answer. – Feb 02 '17 at 08:40