I have two SSDs in my laptop:
- Crucial MX300 750GB --> /dev/sda
- SanDisk SSD Plus 240GB --> /dev/sdb
Their read performance on Linux and Windows looks like this:
Crucial MX300 --> same on both OSs
sudo hdparm -tT /dev/sda # Crucial
Timing cached reads: 13700 MB in 2.00 seconds = 6854.30 MB/sec
Timing buffered disk reads: 1440 MB in 3.00 seconds = 479.58 MB/sec
SanDisk Plus --> way faster on Windows!
sudo hdparm -tT /dev/sdb # SanDisk
Timing cached reads: 7668 MB in 2.00 seconds = 3834.92 MB/sec
Timing buffered disk reads: 798 MB in 3.00 seconds = 265.78 MB/sec # TOO LOW !!
The sequential read performance of the SanDisk on Linux is about half of its performance on Windows!
My question, of course, is: why, and can it be fixed? Is this due to the SanDisk SSD Plus being handled as a SCSI drive?
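To see how the kernel actually attaches each drive, something along these lines can be used (a sketch, assuming the device names above; the sata_spd attribute may not exist on every system):
lsblk -d -o NAME,TRAN,ROTA,MODEL /dev/sda /dev/sdb  # transport, rotational flag and model per disk
cat /sys/class/ata_link/link*/sata_spd  # negotiated SATA link speed per ATA link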
From syslog:
~$ grep SDSSD /var/log/syslog
systemd[1]: Found device SanDisk_SDSSDA240G
kernel: [ 2.152138] ata2.00: ATA-9: SanDisk SDSSDA240G, Z32070RL, max UDMA/133
kernel: [ 2.174689] scsi 1:0:0:0: Direct-Access ATA SanDisk SDSSDA24 70RL PQ: 0 ANSI: 5
smartd[1035]: Device: /dev/sdb [SAT], SanDisk SDSSDA240G, S/N:162783441004, WWN:5-001b44-4a404e4f0, FW:Z32070RL, 240 GB
smartd[1035]: Device: /dev/sdb [SAT], state read from /var/lib/smartmontools/smartd.SanDisk_SDSSDA240G-162783441004.ata.state
smartd[1035]: Device: /dev/sdb [SAT], state written to /var/lib/smartmontools/smartd.SanDisk_SDSSDA240G-162783441004.ata.state
Compare this to the Crucial MX300, which performs almost the same on Linux as on Windows:
~$ grep MX300 /var/log/syslog
systemd[1]: Found device Crucial_CT750MX300SSD1
kernel: [ 1.775520] ata1.00: ATA-10: Crucial_CT750MX300SSD1, M0CR050, max UDMA/133
smartd[1035]: Device: /dev/sda [SAT], Crucial_CT750MX300SSD1, S/N:16251486AC40, WWN:5-00a075-11486ac40, FW:M0CR050, 750 GB
smartd[1035]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.Crucial_CT750MX300SSD1-16251486AC40.ata.state
smartd[1035]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.Crucial_CT750MX300SSD1-16251486AC40.ata.state
Any help is very welcome!
Edit:
The difference that hdparm is showing on Linux is very real. I created two identical directories, one on each of the two drives, each containing about 25 GB of files (36,395 files), and ran the exact same hashdeep checksum-creation script on both directories (the script just creates an MD5 checksum for every file in the test directory and stores all the checksums in one single file; a sketch of what it does is shown after the timings below). These are the results:
test-sandisk# time create-file-integrity-md5sums.sh .
real 1m49.000s
user 1m24.868s
sys 0m15.808s
test-mx300# time create-file-integrity-md5sums.sh .
real 0m54.180s
user 1m4.628s
sys 0m11.640s
The same test with a single 7 GB file:
test-sandisk# time create-file-integrity-md5sums.sh .
real 0m26.986s
user 0m19.168s
sys 0m3.232s
test-mx300# time create-file-integrity-md5sums.sh .
real 0m17.285s
user 0m16.248s
sys 0m1.368s
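For reference, here is a minimal sketch of what such a script could look like (the actual create-file-integrity-md5sums.sh may differ):
#!/bin/sh
# Recursively compute an MD5 checksum for every file under the given
# directory (default: current dir) and store all checksums in one file.
hashdeep -c md5 -r "${1:-.}" > md5sums.hashdeep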
Edit 2:
The partitions are optimally aligned, and the only difference in /sys/block/$disk/queue is discard_zeroes_data (1 on the Crucial, 0 on the SanDisk). File system and mount options used: ext4 (rw,nosuid,nodev,relatime,data=ordered,uhelper=udisks2)
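One way to compare the two queue directories (a sketch, assuming /dev/sda and /dev/sdb as above; it prints only the settings that differ):
for f in /sys/block/sda/queue/*; do
  n=$(basename "$f")
  a=$(cat /sys/block/sda/queue/"$n" 2>/dev/null)
  b=$(cat /sys/block/sdb/queue/"$n" 2>/dev/null)
  [ "$a" != "$b" ] && echo "$n: sda=$a sdb=$b"
done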
dmesg | grep -i sata | grep 'link up'
[ 1.936764] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 2.304548] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Comments:
There are a lot of variables here, but you may be running into differences on alignment or enabled features. You can check cache types and other statistics by looking in /sys/block/$device_name/queue/. Unfortunately I am using NVMe and cannot provide examples, but you can use cat on most of the files to see how the disk is configured. – gdahlm May 24 '17 at 01:31
Do sudo parted /dev/$disk_partition align-check minimal 1 and sudo parted /dev/$disk_partition align-check optimal 1 return "1 aligned"? Does dmesg | grep -i sata | grep 'link up' reveal both are operating on a SATA III channel (6 Gbps)? – WinEunuuchs2Unix May 25 '17 at 23:52