
I have the issue that sometimes when I burn an iso image to a CD-R with:

sudo wodim -v driveropts=burnfree -data dev=/dev/scd0 input.iso

And then read it back out again with:

sudo dd if=/dev/cdrom of=output.iso
dd: reading `/dev/cdrom': Input/output error
...

I end up with two iso images that are not identical; namely, output.iso is missing 2048 bytes at the end. However, when I mount the iso image and the CD-R and compare the actual files at the mountpoints, they are identical.
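
A quick way to check this (assuming both images are in the current directory) is something like:

# sizes, first differing byte, and checksums
ls -l input.iso output.iso
cmp input.iso output.iso
md5sum input.iso output.iso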

Is that expected behavior, or is the data actually burned incorrectly? And if it's expected, how can I verify that the burn process was successful?

The reason I ask in the first place is that the behavior seems to be reproducible: certain iso images come out 2048 bytes short even on repeated burns, but all the burned CD-Rs are identical to each other.

Also, what is the reason behind the:

dd: reading `/dev/cdrom': Input/output error

Since it always happens, I assume it is normal, but what is the technical reason behind it? I assume CDs don't allow the device to report the data size directly, so dd reads until it hits the end the hard way.

Edit: User karol on superuser.com mentioned that both the size issue and the read error are the result of wodim using -tao (the default) instead of -dao mode. I haven't been able to test it yet, but it sounds like the most plausible explanation so far.
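
For reference, the same burn command with -dao would presumably look like this (untested so far):

sudo wodim -v -dao driveropts=burnfree -data dev=/dev/scd0 input.iso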

2 Answers


Indeed, it's probably padding. Check the file sizes: is output.iso slightly larger?

Look at the very end of output.iso:

dd if=output.iso bs=1 skip=658562000 count=1071 | hexdump -C

I'm guessing zeros?

You can try running ls -l input.iso to get its exact size, then:

dd if=output.iso bs=1 count=<INPUT.ISO SIZE> | md5sum

Note that this will be pretty slow since you're reading one byte at a time. If the size is evenly divisible by a larger integer, substitute that integer for the 1 in bs=1 and divide the count by the same number. Even 2 bytes at a time will be much faster!
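
For example, assuming GNU stat and that input.iso's size is a multiple of 2048 bytes (the usual case for an iso image), a much faster sketch would be:

# SIZE is input.iso's size in bytes (assumed to be a multiple of 2048)
SIZE=$(stat -c %s input.iso)
dd if=output.iso bs=2048 count=$((SIZE / 2048)) | md5sum
md5sum input.iso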

As to your second question, the Input/output error happens when dd hits the end of the device. Nothing to worry about.
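
If you'd rather not rely on hitting that error, one possible approach (assuming the disc contains a plain ISO9660 filesystem and util-linux's isosize is available) is to read exactly the number of sectors the filesystem header reports:

# number of 2048-byte sectors according to the ISO9660 header
BLOCKS=$(sudo isosize -d 2048 /dev/cdrom)
sudo dd if=/dev/cdrom of=output.iso bs=2048 count=$BLOCKS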

  • Actually output.iso is slightly smaller, not larger (by 2048 bytes it seems), so this isn't just dd adding some padding on the rip, but wodim missing some bytes on the write. – Grumbel Nov 14 '10 at 05:21
  • @Grumbel: 2048 is exactly the amount of space available for data in a mode-1 CD sector. Makes me think input.iso was padded unnecessarily. If you compare the images incrementally, where do they start differing? – Nicholas Knight Nov 14 '10 at 05:29
  • The images are identical, except for the 2048 bytes missing at the end. I don't have a large sample of CD-Rs with that issue at hand, however; many CD-Rs come out fine. – Grumbel Nov 14 '10 at 05:42
  • dd does not have error checking so not recommended. Related: http://unix.stackexchange.com/q/311365/16920 – Léo Léopold Hertz 준영 Sep 21 '16 at 17:29

This problem might be related to your use of dd. Try adding iflag=direct when you use dd to read the disc, i.e.:

sudo dd if=/dev/cdrom of=output.iso iflag=direct

That tells dd to use O_DIRECT for its I/O, bypassing the kernel block layer. (Normally the block layer reads in 4 KB chunks even if the calling program requests less. Maybe that is the cause of the error for discs with an odd number of sectors, i.e. a size that isn't a multiple of 4 KB?)
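
A possible refinement (just a sketch): O_DIRECT reads generally have to be aligned to the device's sector size, so explicitly matching the 2048-byte CD sector size seems safest:

sudo dd if=/dev/cdrom of=output.iso bs=2048 iflag=direct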