
Background Problems

I have a Dedicated Server (Ubuntu 22.04.1 LTS) with five Public IPs as below:

  1. xxx.xxx.51.20 (Main IP)
  2. xxx.xxx.198.104
  3. xxx.xxx.198.105
  4. xxx.xxx.198.106
  5. xxx.xxx.198.107

I want to host several KVM VMs on this server and assign some of them a public IP from the host; in other words, imagine creating several Virtual Private Servers (VPS), each with its own public IP.

If I am not mistaken, I need to create a bridge network. I have already done that and have a br0 bridge with all the public IPs assigned to it.

Currently, here is the host's network configuration:

cat /etc/netplan/50-cloud-init.yaml:

network:
    version: 2
    renderer: networkd
    ethernets:
        eno1:
            dhcp4: false
            dhcp6: false
            match:
                macaddress: xx:xx:xx:2a:19:d0
            set-name: eno1
    bridges:
        br0:
            interfaces: [eno1]
            addresses:
            - xxx.xxx.51.20/32
            - xxx.xxx.198.104/32
            - xxx.xxx.198.105/32
            - xxx.xxx.198.106/32
            - xxx.xxx.198.107/32
            routes:
            - to: default
              via: xxx.xxx.51.1
              metric: 100
              on-link: true
            mtu: 1500
            nameservers:
                addresses: [8.8.8.8]
            parameters:
                stp: true
                forward-delay: 4
            dhcp4: no
            dhcp6: no

With this configuration, all of the IPs point to the host, and the host can be reached on each of them.

Here is the host's ip a output:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether xx:xx:xx:2a:19:d0 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether xx:xx:xx:2a:19:d1 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:59:36:78 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.51.20/32 scope global br0
       valid_lft forever preferred_lft forever
    inet xxx.xxx.198.104/32 scope global br0
       valid_lft forever preferred_lft forever
    inet xxx.xxx.198.105/32 scope global br0
       valid_lft forever preferred_lft forever
    inet xxx.xxx.198.106/32 scope global br0
       valid_lft forever preferred_lft forever
    inet xxx.xxx.198.107/32 scope global br0
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:xxxx:3678/64 scope link 
       valid_lft forever preferred_lft forever
5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:e2:3e:ea brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether xx:xx:xx:ff:4c:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
9: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
    link/ether xx:xx:xx:6b:01:05 brd ff:ff:ff:ff:ff:ff
    inet6 xxxx::xxxx:ff:fe6b:105/64 scope link 
       valid_lft forever preferred_lft forever
10: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
    link/ether xx:xx:xx:16:07:56 brd ff:ff:ff:ff:ff:ff
    inet6 xxxx::xxxx:ff:fe16:756/64 scope link 
       valid_lft forever preferred_lft forever

Then I created a KVM VM with two network interfaces attached (one on br0 and the other on the NAT-based bridge).


And on the VM, I configured netplan like this: cat /etc/netplan/00-installer-config.yaml:

network:
    version: 2
    ethernets:
        enp1s0:
            addresses:
            - xxx.xxx.198.104/32
            routes:
            - to: default
              via: xxx.xxx.198.1
              metric: 100
              on-link: true
            nameservers:
                addresses:
                - 1.1.1.1
                - 1.1.0.0
                - 8.8.8.8
                - 8.8.4.4
                search: []
        enp7s0:
            dhcp4: true
            dhcp6: true
            match:
                macaddress: xx:xx:xx:16:07:56

Here the VM uses enp1s0 for the static public IP (xxx.xxx.198.104) and enp7s0 for the NAT network from the host (192.168.122.xxx).

Running ip a inside the VM shows that it gets the correct IPs.

The problems:

  1. When I try to SSH directly from my laptop to the VM's public IP (xxx.xxx.198.104), it seems I am still connected to the host, not to the VM.
  2. If I disconnect the NAT network (enp7s0) on the VM and only use the enp1s0 interface with the public IP, the VM cannot reach the internet.

Is there something I missed?

Update 1

I contacted the DC provider; they have documentation for adding a public IP to a VM here: https://docs.ovh.com/gb/en/dedicated/network-bridging/ but it is only applicable if Proxmox is used as the host. The idea is to create a bridge network with the specified MAC address attached to it, then apply the public IP from inside the VM.

How can I do that on Ubuntu?

Update 2

I managed to create the bridge network with my public IPs on the host side with the following commands:

sudo ip link add name test-bridge link eth0 type macvlan
sudo ip link set dev test-bridge address MAC_ADDRESS
sudo ip link set test-bridge up
sudo ip addr add ADDITIONAL_IP/32 dev test-bridge

I repeated this four times to add all of my public IPs to the host.

Now my host configuration is as follows:

cat /etc/netplan/50-cloud-init.yaml

network:
    version: 2
    ethernets:
        eno1:
            dhcp4: true
            match:
                macaddress: xx:xx:xx:2a:19:d0
            set-name: eno1

And ip a:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether xx:xx:xx:2a:19:d0 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
    inet xxx.xxx.51.20/24 metric 100 brd xx.xx.51.255 scope global dynamic eno1
       valid_lft 84658sec preferred_lft 84658sec
    inet6 xxxx::xxxx:xxxx:fe2a:19d0/64 scope link 
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether xx:xx:xx:2a:19:d1 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:e2:3e:ea brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether xx:xx:xx:e1:2b:ce brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: vmbr1@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:79:26:12 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.104/32 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:2612/64 scope link 
       valid_lft forever preferred_lft forever
7: vmbr2@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:f3:e2:85 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.105/32 scope global vmbr2
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:e285/64 scope link 
       valid_lft forever preferred_lft forever
8: vmbr3@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:48:a8:c9 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.106/32 scope global vmbr3
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:a8c9/64 scope link 
       valid_lft forever preferred_lft forever
9: vmbr4@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:eb:29:a1 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.107/32 scope global vmbr4
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:29a1/64 scope link 
       valid_lft forever preferred_lft forever

I tested pinging all of the IP addresses from my laptop, and it works.

On the VM side, I edited the network configuration like this:

sudo virsh edit vm_name and look for the network interfaces:

    <interface type='network'>
      <mac address='xx:xx:xx:16:07:56'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='xx:xx:xx:79:26:12'/>
      <source bridge='vmbr1'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

The problem now is that I cannot start the VM:

$ sudo virsh start lab
error: Failed to start domain 'vm_name'
error: Unable to add bridge vmbr1 port vnet8: Operation not supported

Is there something I missed again?

Update 3

I found out that the sudo ip link add ... commands only work temporarily; the configuration is lost after a server reboot.

Please guide me towards the proper host-side and VM-side configuration.

Thanks

Update 4

I read the Proxmox network configuration reference here (https://pve.proxmox.com/wiki/Network_Configuration) and tried to implement the routed configuration on the host.

So I installed the ifupdown package and created the configuration below:

auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
    address xxx.xxx.51.20/24
    gateway xxx.xxx.51.254
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp

auto vmbr0
iface vmbr0 inet static
    address xxx.xxx.198.104/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address xxx.xxx.198.105/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet static
    address xxx.xxx.198.106/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

auto vmbr3
iface vmbr3 inet static
    address xxx.xxx.198.107/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

I also disabled systemd-networkd, as suggested here (https://askubuntu.com/a/1052023), with the following commands:

sudo systemctl unmask networking
sudo systemctl enable networking
sudo systemctl restart networking
sudo journalctl -xeu networking.service
sudo systemctl stop systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online
sudo systemctl disable systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online
sudo systemctl mask systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online

With this configuration, from the VM's perspective it does see two IPs (one private and one public).

However, I ran into more problems:

  1. sudo systemctl status networking always reports failed, with a log like this:
$ sudo systemctl status networking.service
× networking.service - Raise network interfaces
     Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2022-12-13 18:51:27 UTC; 7min ago
       Docs: man:interfaces(5)
   Main PID: 1000 (code=exited, status=1/FAILURE)
        CPU: 631ms

Dec 13 18:51:25 hostname ifup[1054]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:25 hostname ifup[1066]: RTNETLINK answers: File exists
Dec 13 18:51:25 hostname ifup[1000]: ifup: failed to bring up eno1
Dec 13 18:51:26 hostname ifup[1132]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:26 hostname ifup[1215]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname ifup[1298]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname ifup[1381]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 18:51:27 hostname systemd[1]: networking.service: Failed with result 'exit-code'.
Dec 13 18:51:27 hostname systemd[1]: Failed to start Raise network interfaces.

and journalctl:

$ sudo journalctl -xeu networking.service
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 13 18:49:08 hostname systemd[1]: networking.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit networking.service has entered the 'failed' state with result 'exit-code'.
Dec 13 18:49:08 hostname systemd[1]: Failed to start Raise network interfaces.
░░ Subject: A start job for unit networking.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit networking.service has finished with a failure.
░░ 
░░ The job identifier is 3407 and the job result is failed.
-- Boot 50161a44ec43452692ce64fca20cce9d --
Dec 13 18:51:25 hostname systemd[1]: Starting Raise network interfaces...
░░ Subject: A start job for unit networking.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit networking.service has begun execution.
░░ 
░░ The job identifier is 49.
Dec 13 18:51:25 hostname ifup[1054]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:25 hostname ifup[1066]: RTNETLINK answers: File exists
Dec 13 18:51:25 hostname ifup[1000]: ifup: failed to bring up eno1
Dec 13 18:51:26 hostname ifup[1132]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:26 hostname ifup[1215]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname ifup[1298]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname ifup[1381]: /etc/network/if-up.d/resolved: 12: mystatedir: not found
Dec 13 18:51:27 hostname systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ An ExecStart= process belonging to unit networking.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 13 18:51:27 hostname systemd[1]: networking.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ The unit networking.service has entered the 'failed' state with result 'exit-code'.
Dec 13 18:51:27 hostname systemd[1]: Failed to start Raise network interfaces.
░░ Subject: A start job for unit networking.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░ 
░░ A start job for unit networking.service has finished with a failure.
░░ 
░░ The job identifier is 49 and the job result is failed.

ip a output on the host:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether xx:xx:xx:2a:19:d0 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
    inet xxx.xxx.51.20/24 brd xx.xxx.51.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:xxxx:19d0/64 scope link 
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether xx:xx:xx:2a:19:d1 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
4: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:b3:96:06 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.104/24 brd xxx.xxx.198.255 scope global vmbr0
       valid_lft forever preferred_lft forever
5: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:a8:1a:49 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.105/24 brd xxx.xxx.198.255 scope global vmbr1
       valid_lft forever preferred_lft forever
6: vmbr2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:d8:82:25 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.106/24 brd xxx.xxx.198.255 scope global vmbr2
       valid_lft forever preferred_lft forever
7: vmbr3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:4d:aa:31 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.198.107/24 brd xxx.xxx.198.255 scope global vmbr3
       valid_lft forever preferred_lft forever
8: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:e2:3e:ea brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
9: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:8c:f2:63:f5 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
  2. Although the VM sees two interfaces (private and public IP), I can't reach the VM's public IP from my laptop; it seems the host does not forward the connection to the VM.

  3. If all of the above problems are caused by the networking service failing to start, is there any fix for this?

  4. If netplan / systemd-networkd is the proper way to configure networking on Ubuntu 22.04, what is the correct netplan configuration for bridging in a case like this?

$ cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
    address xxx.xxx.51.20/24
    gateway xxx.xxx.51.254
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp

auto vmbr0
iface vmbr0 inet static
    address xxx.xxx.198.104/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address xxx.xxx.198.105/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet static
    address xxx.xxx.198.106/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

auto vmbr3
iface vmbr3 inet static
    address xxx.xxx.198.107/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0


2 Answers


The command ip link add ... adds an interface. If you add the equivalent configuration to netplan, it will be persistent. Example:

ip link add xxx0 type  bridge
ip addr add 192.168.1.20/32 dev xxx0 

In netplan this looks like:

network:
  version: 2
  ethernets:
    enp2s0:
      critical: true
      dhcp4: false
  bridges:
    xxx0:
      addresses:
        - 192.168.1.20/26
      interfaces:
        - enp2s0
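
To create the bridge from that netplan configuration without rebooting, something like this should do (a quick sketch; xxx0 is the example name from above):

$ sudo netplan generate
$ sudo netplan try            # applies the config and rolls back automatically if you lose connectivity
$ ip -br addr show dev xxx0   # confirm the bridge exists and carries the address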

Additional Comment:

I do not know your use case, but I'm new to Linux with only 25 years of experience and I would never expose a VM via a bridged interface. If you only want to expose individual services, you may find better (and safer) solutions using KVM with a private bridge and iptables port forwarding of individual ports. Alternatively, you may look at Docker and get rid of the VMs altogether (highly recommended).

dummyuser
  • Hi, my use case is that I want to create virtual private servers with KVM on my dedicated server, so I need each VM to have a dedicated public IP from the host. In this case, Docker is not feasible because some software needs to be installed inside a VM/VPS or dedicated server (e.g. cPanel). If you have a better approach for this case, I would be happy to learn. Thanks – khgasd652k Dec 10 '22 at 15:51
  • This sounds like you want to expose dedicated services on the VM and do not want to use Docker. My setup would be a standard KVM VM with a private bridge defined in the netplan of the host, fixed IPs defined in the netplan of the guest, and some additional iptables rules. Example: you have a private bridge with the IP range 192.168.1.1/24, the first VM has the IP 192.168.1.10, and you want to access port 443 via your public IP xxx.xxx.198.104; you need a persistent iptables rule like iptables -t nat -A PREROUTING -d xxx.xxx.198.104/32 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.1.10 – dummyuser Dec 10 '22 at 16:27
  • With this setup you have a real VM which is reachable on a single port from the internet. You can install any software in the VM. You will need some iptables MASQUERADE rules for outgoing traffic. – dummyuser Dec 10 '22 at 16:30
  • I am sorry for my lack of understanding, but with your configuration, doesn't that mean every connection to the host's port 443 will be redirected to the first VM, or is it only applied to the first public IP (xxx.xxx.198.104) on port 443? If it's the latter, is it possible for me to redirect every connection to the first public IP (xxx.xxx.198.104), so all incoming connections to that IP are redirected to the first VM? – khgasd652k Dec 10 '22 at 16:45
  • All incoming connections to the first public IP on port 443 will be redirected to the first VM. In detail: all connections to xxx.xxx.198.104/32 match this rule. Second part, redirecting all connections heading to xxx.xxx.198.104 to the first VM: if you use the rule without the --dport 443 option, this should work for TCP (UDP has to be handled separately), but I have never used this. – dummyuser Dec 10 '22 at 17:18
  • Hi @dummyuser, using your method means that from the VM's perspective, it only knows it has an IP address from 192.168.1.1/24. What I want to achieve is that the VM also knows its public IP when I run the ip a command. Is there any additional configuration for that? – khgasd652k Dec 13 '22 at 09:29
  • not that I'm aware of. – dummyuser Dec 13 '22 at 11:26
  • Hi @dummyuser, sorry for pinging you again; I have updated the question with my current situation. I tried to follow the Proxmox configuration, but it uses the legacy ifupdown and networking services. Somehow my configuration fails to start the networking daemon; I am afraid it's because both netplan and the networking service are present, even though I have already disabled netplan and systemd-networkd. Do you have any idea what's wrong with the networking daemon? Or could you perhaps provide the proper netplan configuration that resembles the legacy Proxmox configuration? Thank you – khgasd652k Dec 13 '22 at 19:20

After a few weeks of trial and error, I found a working solution for this case.

To understand how the bridge should be set up, we first need to know which bridge types we can use. Normally there are three types, as described here:

  1. Default Bridge
  2. Routed Bridge
  3. Masqueraded / NAT-based Bridge

A full explanation of each one is below:

1. Default Bridge

A default bridge means that the VMs work as if they were attached directly to the physical network.


In my case, after carefully cross-checking the legacy ifupdown configuration and the ip link add commands, I found that in the OVH dedicated server environment (not the Eco range), we are allowed to use a default bridge with a pre-defined (virtual) MAC address.

If you use legacy ifupdown:

The host network configuration looks like this:

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address xxx.xxx.51.20/24
    gateway xxx.xxx.51.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

Then run:

sudo systemctl restart networking

As you can see, the dedicated IP address for the host is included in the bridge configuration.

And the VM/client configuration is the same as documented here.
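
For reference, the guest is attached to that bridge in the libvirt domain XML roughly like this (a minimal sketch; the MAC address is a placeholder for the virtual MAC your provider generated for the failover IP):

<!-- excerpt from `sudo virsh edit vm_name`: attach the guest NIC to vmbr0 -->
<interface type='bridge'>
  <source bridge='vmbr0'/>
  <!-- placeholder: use the provider-assigned virtual MAC for this failover IP -->
  <mac address='xx:xx:xx:xx:xx:xx'/>
  <model type='virtio'/>
</interface>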

If you use netplan

The host network configuration looks like this:

$ sudo cat /etc/netplan/50-cloud-init.yaml 
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    renderer: networkd
    ethernets:
        eno1:
            dhcp4: false
            dhcp6: false
    bridges:
        brname:
            addresses: [ xxx.xxx.51.20/24 ]
            interfaces: [ eno1 ]
            routes:
            - to: default
              via: xxx.xxx.51.1
              metric: 100
              on-link: true
            mtu: 1500
            nameservers:
                addresses: [ 213.186.33.99 ]
            parameters:
                stp: true
                forward-delay: 4
            dhcp4: no
            dhcp6: no

Then run:

sudo netplan generate
sudo netplan --debug apply

And the VM/client configuration is the same as documented here.
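
On the guest side, the idea is the same as in the provider documentation: the VM carries the additional IP as a /32 and points its default route at the host's gateway with on-link: true. A minimal sketch, assuming the gateway xxx.xxx.51.1 from the host configuration above and a guest interface named enp1s0 (both assumptions; use the values from your provider's documentation):

# guest: /etc/netplan/00-installer-config.yaml (sketch)
network:
    version: 2
    ethernets:
        enp1s0:
            addresses:
            - xxx.xxx.198.104/32
            routes:
            - to: default
              via: xxx.xxx.51.1
              on-link: true
            nameservers:
                addresses: [ 213.186.33.99 ]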

2. Routed Bridge

Most hosting providers do not support the default bridge network. For security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface.

Some providers allow you to register additional MACs through their management interface. This avoids the problem, but can be clumsy to configure because you need to register a MAC for each of your VMs. You can avoid the problem by “routing” all traffic via a single interface. This makes sure that all network packets use the same MAC address.


If you use legacy ifupdown:

The host network configuration looks like this:

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
    address xxx.xxx.51.20/24
    gateway xxx.xxx.51.1
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
    address xxx.xxx.198.104/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

And the VM/client configuration is the same as documented here.

If you use netplan

Because, at the time of writing, netplan does not yet support macvlan interfaces, we need a workaround and have to configure systemd-networkd directly:

  • We need to create a few files under /etc/systemd/network, so navigate to that directory:
$ cd /etc/systemd/network
  • Define the macvlan for the bridge (replace cat with your editor of choice):
$ cat 00-vmvlan0.netdev 
[NetDev]
Name=vmvlan0
Kind=macvlan
# Optional MAC address, or other options
MACAddress=xx:xx:xx:xx:26:12

[MACVLAN]
Mode=bridge

$ cat 00-vmvlan0.network
[Match]
Name=vmvlan0

[Network]
Address=xxx.xxx.198.104/32
IPForward=yes
ConfigureWithoutCarrier=yes

  • Define the bridge:
$ cat 00-vmbr0.netdev 
[NetDev]
Name=vmbr0
Kind=bridge

$ cat 00-vmbr0.network
[Match]
Name=vmbr0

[Network]
MACVLAN=vmvlan0

  • Link the bridge, the macvlan, and the physical interface:
$ cat 10-netplan-eno1.link 
[Match]
MACAddress=xx:xx:xx:xx:19:d0

[Link]
Name=eno1
WakeOnLan=off

$ cat 10-netplan-eno1.network
[Match]
MACAddress=xx:xx:xx:xx:19:d0
Name=eno1

[Network]
DHCP=ipv4
LinkLocalAddressing=ipv6
MACVLAN=vmvlan0

[DHCP]
RouteMetric=100
UseMTU=true

  • And we can keep using the default netplan configuration:
$ sudo cat /etc/netplan/50-cloud-init.yaml
network:
    version: 2
    ethernets:
        eno1:
            dhcp4: true
            match:
                macaddress: xx:xx:xx:2a:19:d0
            set-name: eno1
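
Once these files are in place, systemd-networkd has to pick them up; a quick way to apply and check (a sketch, using the names defined above):

$ sudo networkctl reload
$ networkctl status vmvlan0
$ ip -d link show vmvlan0    # should report "macvlan mode bridge" on top of eno1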

And the VM/client configuration is the same as documented here.

3. Masqueraded / NAT-based Bridge

The default network (virbr0) that comes with a libvirt installation is a NAT-based bridge, so it works out of the box.

In this case, what we need to do is make sure the host knows that it has more than one public IP attached. So:

$ cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eno1:
            dhcp4: true
            addresses:
            - xxx.xxx.51.20/32
            - xxx.xxx.198.104/32
            - xxx.xxx.198.105/32
            - xxx.xxx.198.106/32
            - xxx.xxx.198.107/32
            match:
                macaddress: a4:bf:01:2a:19:d0
            set-name: eno1

Then, assuming the VM already has a static private IP (e.g. 192.168.1.10), we only need to forward connections for the public IP to that address, as described in @dummyuser's answer (note that the DNAT rule belongs in the nat table's PREROUTING chain):

iptables -t nat -A PREROUTING -d xxx.xxx.198.104/32 -p tcp -j DNAT --to-destination 192.168.1.10
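
For completeness, a consolidated sketch of the host-side rules this NAT approach needs. The guest IP 192.168.1.10 and the iptables-persistent package are taken from the discussion above or are assumptions; libvirt's own firewall rules on the NAT bridge may also need adjusting for your setup:

# 1) Make sure the host forwards IPv4 (libvirt usually enables this already)
sudo sysctl -w net.ipv4.ip_forward=1

# 2) Inbound: send everything arriving for the additional public IP to the VM
sudo iptables -t nat -A PREROUTING -d xxx.xxx.198.104/32 -j DNAT --to-destination 192.168.1.10

# 3) Let the forwarded traffic pass the FORWARD chain
sudo iptables -A FORWARD -d 192.168.1.10 -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT

# 4) Outbound: let the VM's own traffic leave with its public IP (1:1 SNAT
#    instead of the plain MASQUERADE that would use the host's main IP)
sudo iptables -t nat -A POSTROUTING -s 192.168.1.10 -j SNAT --to-source xxx.xxx.198.104

# 5) Persist the rules across reboots (one common option on Ubuntu)
sudo apt install iptables-persistent
sudo netfilter-persistent save

# To expose only a single service instead (as suggested in the comments), restrict the DNAT, e.g.:
# sudo iptables -t nat -A PREROUTING -d xxx.xxx.198.104/32 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10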