1

Historically, I could boot an ISO with either the iso_scan method or loopback.cfg:

loopback loop $isofile
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=${isofile} verbose
initrd (loop)/casper/initrd*

Or

loopback loop "$isofile"
set root=(loop)
configfile /boot/grub/loopback.cfg
loopback --delete loop

The 20iso_scan script reports that it cannot mount /dev/sr0 and cannot find (hdx,gpt)/...iso, but the ISO is apparently there; otherwise we could not have loaded the initrd and vmlinuz at all.

The second method tries to use the loopback.cfg that comes from the CD itself. However, it yields: /init: line 49: can't open /dev/sr0: No medium found.
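
For reference, the fuller loopback.cfg convention, as I understand it from the Ubuntu ISOs and the GRUB ISO-booting guides, also sets and exports an iso_path variable that the entries inside the ISO's own loopback.cfg refer to:

loopback loop "$isofile"
set root=(loop)
set iso_path="$isofile"      # loopback.cfg entries typically use ${iso_path} in iso-scan/filename=
export iso_path
configfile (loop)/boot/grub/loopback.cfg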

Any idea what the correct way to do this is nowadays?

Shouldn't there be a way to give the squashfs file path directly, since I already loop-mount the ISO in GRUB? Why does it need to scan for the ISO and mount it again?

Update

Now I understand more about the live CD boot procedure:

  • After we call linux and initrd, the kernel takes ownership. Thus all the loopback devices created inside GRUB are invisible to the kernel.
  • The kernel reinitializes everything as if it were seeing the hardware for the first time.
  • The squashfs mount happens inside the kernel environment. At that point the kernel can see the actual ISO file on the real disk, so it can mount the ISO and then mount the squashfs from it, roughly as sketched below.
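
Conceptually, the iso-scan step inside the initramfs does something like the following (a rough sketch of my understanding, not the actual casper code; the device globs, mount points, and ISO name are illustrative):

ISO_FILENAME=/ISO/some.iso        # the value of iso-scan/filename=, a plain path
mkdir -p /isofrom /cdrom
for dev in /dev/sd* /dev/nvme*; do
    mount -o ro "$dev" /isofrom 2>/dev/null || continue
    if [ -f "/isofrom$ISO_FILENAME" ]; then
        # found the ISO: loop-mount it, then casper mounts the squashfs inside it
        mount -o loop,ro "/isofrom$ISO_FILENAME" /cdrom
        break
    fi
    umount /isofrom
done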

However, the bootloader should still be able to give the kernel a RAM disk via the initrd command. Ideally a bootloader would create a RAM disk from the squashfs (loopback does not create a RAM disk); then the kernel could mount it as a device. Or the bootloader could inject the entire ISO into the initramfs. Some tools can already partially do that: iPXE's initrd.magic module, for example. GRUB so far does not have this feature, nor any plan to provide one.

Wang
  • 635

2 Answers

2

I just used this. It relies on a data partition labelled data_nvme on an NVMe drive, with a folder /ISO. Getting the path correct is often a major issue; I now prefer to use labels rather than UUIDs or device names to find the partition.

menuentry "Ubuntu 22.10 Kinetic amd64" {
    set isofile="/ISO/kubuntu-22.10-desktop-amd64.iso"
    insmod part_gpt
    #rmmod tpm
    search --set=root --label data_nvme --hint hd0,gpt5
    loopback loop (${root})$isofile
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile toram
    initrd (loop)/casper/initrd
}
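
To spell out the path issue: iso-scan/filename must be a plain path relative to the root of whichever partition holds the ISO. A GRUB device prefix like (hd0,gpt5) is only meaningful to GRUB itself; casper's iso-scan mounts the partitions on its own and looks the path up there, so it does not understand GRUB device names. Roughly (device and path from the entry above, shown for illustration):

# GRUB side: device names, or a root set via search, are fine here
loopback loop (hd0,gpt5)/ISO/kubuntu-22.10-desktop-amd64.iso
# kernel/casper side: plain path only, looked up relative to each partition's root
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/ISO/kubuntu-22.10-desktop-amd64.iso toram
initrd (loop)/casper/initrd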

Having the ISO already downloaded, the / partition already defined, and using toram with an NVMe drive let me do a full install in less than 5 minutes. Installs to USB3 flash drives still take over 40 minutes.

oldfred
  • 12,100
  • The toram option is a nice trick when we have more than 16 GB of memory. However, the real problem here was that the path I used was wrong. iso-scan/filename is for the casper script in the kernel environment, but I gave it the GRUB path with the (dev,part) prefix. Once I stripped that, it progressed further, though it still reported the file as broken. I figured out that casper mounts every partition it sees and then searches for the file; by accident I had another partition containing a broken ISO with the same path name. After removing that, it works. If you can emphasize this part in your answer, I shall accept it. – Wang Oct 24 '22 at 13:31
  • And it is very unfortunate that loopback.cfg no longer works; with the iso-scan way we have to mount the ISO three times: first in GRUB, second in the Linux kernel, third inside casper. – Wang Oct 24 '22 at 13:32
0

I think this may be a duplicate of: 20.04 booting .iso from GRUB menu

However, there are alternatives to the standard answer.

If you create a small FAT32 partition on your hard drive, say about 5 to 10 GB, you can extract the ISO to it and let UEFI boot Ubuntu without needing to mess with GRUB.
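
For example, assuming the FAT32 partition is /dev/sdX1 and 7z is available (both illustrative; any archive tool that can open an ISO works), the extraction could look roughly like this:

sudo mkdir -p /mnt/fatboot
sudo mount /dev/sdX1 /mnt/fatboot                          # the small FAT32 partition
sudo 7z x kubuntu-22.10-desktop-amd64.iso -o/mnt/fatboot   # copies the ISO contents, including EFI/BOOT
sudo umount /mnt/fatboot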

You may need to choose the boot option by pressing F9, F10, F12, or whatever key combination your computer uses to select the boot disk.

C.S.Cameron
  • 19,519
  • I do have a 64 GB spare partition for this. However, each disk can only have exactly one ESP, and I do not want to mess up my boot, so I have to handle this in a smarter way. On the other hand, it might be interesting if the ISO's EFI files could be contained inside a subdirectory of the UEFI partition; then, if the firmware supports booting a selected EFI application, it might have a chance to work if we modify the ISO's grub.cfg correctly. The whole procedure is rather annoying, and surprisingly not all recent machines support booting from a specific EFI file ... – Wang Oct 24 '22 at 13:18
  • On my external SSD, I did create a FAT32 partition of 6 GB, moved the boot flag to that partition, extracted the ISO and booted it. I moved the boot, esp flags back to the ESP, but had to unmount the internal drive's ESP to allow mounting of the external drive's ESP, so Ubiquity would install grub to it. Posted a workaround to manually unmount & mount the correct ESP during install, #55 or (#23 & #26): https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1396379 Many other workarounds in the bug report. – oldfred Oct 24 '22 at 14:39
  • I am trying to understand why booting an ISO file from a UEFI drive is any better than booting an ISO extracted onto the drive. – C.S.Cameron Oct 24 '22 at 14:52
  • I do not have just one ISO; I have many ISOs. So even if I had an external SSD just for this purpose, all the ISOs' extracted contents would compete for EFI/BOOT, which won't work at all, as most ISOs obey the UEFI spec and put the real bootloader in EFI/BOOT/ for removable media (which a CD is). Moreover, I do not want to use an external device; I want to put the ISOs on my internal partition, and you can't expect anyone to overwrite their ESP with ISO content. We also do not want to keep changing partition flags. It needs to work stably and survive GRUB updates. – Wang Oct 24 '22 at 20:48
  • Also, I do not think the issue you posted is a bug. This behaviour is expected per the UEFI spec: supporting multiple ESPs is neither required nor recommended. – Wang Oct 24 '22 at 20:53
  • Did you read the link I posted above, "20.04 booting .iso from GRUB menu", about booting ISOs located on the internal Ubuntu drive? – C.S.Cameron Oct 26 '22 at 06:04