What is a quick and easy way to make a file that is, say, 2 GB in size?
3 Answers
40
The zero-fill method (modified here to use 1 MiB blocks so dd doesn't buffer the whole file in memory) took 17 seconds to create a 10 GB file on an SSD, and made Ubuntu's graphical interface unresponsive while it ran.
$ time sh -c 'dd if=/dev/zero iflag=count_bytes count=10G bs=1M of=large; sync'
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 17.2003 s, 624 MB/s
real 0m17.642s
user 0m0.008s
sys 0m9.404s
$ du -B 1 --apparent-size large
10737418240 large
$ du -B 1 large
10737422336 large
fallocate creates large files instantly by directly manipulating the file's allocated disk space:
$ time sh -c 'fallocate -l 10G large; sync'
real 0m0.038s
user 0m0.000s
sys 0m0.016s
$ du -B 1 --apparent-size large
10737418240 large
$ du -B 1 large
10737422336 large
truncate also works instantly, and creates sparse files which don't use up actual disk space until data is written later on:
$ time sh -c 'truncate -s 10G large; sync'
real 0m0.014s
user 0m0.000s
sys 0m0.004s
$ du -B 1 --apparent-size large
10737418240 large
$ du -B 1 large
0 large
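The sparseness is easy to verify at a small scale. A minimal sketch (the file name sparse_demo is just an illustration; the same idea scales to 2G or 10G):

```shell
# Create a 1 MiB sparse file: apparent size is set, but no data blocks
# are written, so allocated space stays at (typically) zero.
truncate -s 1M sparse_demo
du -B 1 --apparent-size sparse_demo   # apparent size: 1048576
du -B 1 sparse_demo                   # allocated: typically 0
rm sparse_demo
```

Reading the unwritten regions of such a file returns zeros; blocks are only allocated once data is actually written to them.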

ændrük
22
An easy way would be to use the dd
command to write a file full of zeros.
dd if=/dev/zero of=outputFile bs=2G count=1
- if = input file
- of = output file
- bs = block size (read and write this many bytes at a time)
Use G in the size argument if you want binary (1024*1024*1024-byte) gigabytes, or GB if you want decimal (1000*1000*1000-byte) gigabytes.
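Note that bs=2G asks dd to buffer the entire 2 GiB in a single read, which can be a problem on a memory-constrained machine. A sketch of a gentler variant (outputFile is just the placeholder name from above):

```shell
# Same 2 GiB of zeros, but written as 2048 blocks of 1 MiB each,
# so dd only ever buffers about 1 MiB at a time.
dd if=/dev/zero of=outputFile bs=1M count=2048
```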

Ward Muylaert

MikeVB
I will just add that if you don't want all zeros, you can choose if=/dev/random – Denwerko Jun 10 '11 at 06:28
Using /dev/random will take an awful lot of time. Use /dev/urandom in that case (it's non-blocking, but not guaranteed to have the same level of randomness). Drawing 2 GB from either one will almost certainly completely exhaust your system's entropy, so don't do anything cryptographic for a while afterwards. – user Jun 10 '11 at 08:39
1
See ftp://ftp.fsf.hu/testfiles/maketestfiles.sh, or use dd with seek, where seek is the size of the file you want in bytes minus 1:
dd if=/dev/zero of=filename.big bs=1 count=1 seek=1048575 # 1 MByte
dd if=/dev/zero of=filename.big bs=1 count=1 seek=10485759 # 10 MByte
dd if=/dev/zero of=filename.big bs=1 count=1 seek=104857599 # 100 MByte
dd if=/dev/zero of=filename.big bs=1 count=1 seek=1073741823 # 1024 MByte
dd if=/dev/zero of=filename.big bs=1 count=1 seek=42949672959 # 40960 MByte
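The offsets above are precomputed; for an arbitrary target size, shell arithmetic can supply seek = size - 1. A sketch for the 2 GB case from the question:

```shell
# Writing one byte at offset (size - 1) yields a sparse file of exactly
# `size` bytes; $(( )) is POSIX shell arithmetic.
size=$((2 * 1024 * 1024 * 1024))   # 2 GiB
dd if=/dev/zero of=filename.big bs=1 count=1 seek=$((size - 1))
```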

LanceBaynes
dd ... bs=2G count=1 reads 2 GB into memory (in one read(2) call). If you've got memory pressure, that's probably not the way to go. More, smaller blocks may be faster if it means less paging. – claymation Apr 04 '17 at 05:34