Please help me connect port 80 inside a VM app to the public URL

Hello Balena fam,

So I’ve been trying to get this working for a month now, and although I’ve gained a lot of ground, I’m stuck at the final, most crucial bit.

I’m running an Ubuntu VM inside an Ubuntu container that hosts a libvirt stack.

When I run a quick Python webserver on port 80 inside the Ubuntu container to test, I can successfully see the webpage via the public URL.
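For reference, the quick test server is just Python’s built-in one (assuming python3 is available in the container/VM):

```shell
# serve the current directory on port 80 (ports below 1024 need root)
python3 -m http.server 80 --bind 0.0.0.0
```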

If I run the same Python webserver test inside the VM instead, I can successfully curl it from the container, but I cannot see the content on the public URL.

default via 192.168.0.1 dev eth0  metric 100
10.114.101.0/24 dev balena0 scope link  src 10.114.101.1
10.114.102.0/24 dev resin-dns scope link  src 10.114.102.1
10.114.104.0/25 dev supervisor0 scope link  src 10.114.104.1
52.4.252.97 dev resin-vpn scope link  src 10.245.88.200
172.17.0.0/16 dev br-86ecad319963 scope link  src 172.17.0.1
172.18.0.0/16 dev br-ed8e094b12ae scope link  src 172.18.0.1
172.19.0.0/16 dev br-a2fcbd972d90 scope link  src 172.19.0.1
192.168.0.0/24 dev eth0 scope link  src 192.168.0.100  metric 100
root@ed6f959:~#

^ from the host OS.

root@vm-mgt-b:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth1
10.114.101.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.114.101.1    0.0.0.0         255.255.255.255 UH    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth1
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
root@vm-mgt-b:~#

^^ from the container

root@vm-mgt-b:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:4f:92:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:03:38:83 brd ff:ff:ff:ff:ff:ff
66: eth0@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:0a:72:65:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.114.101.2/24 brd 10.114.101.255 scope global eth0
       valid_lft forever preferred_lft forever
68: eth1@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth1
       valid_lft forever preferred_lft forever

^^ also container

m@wpvm1:~$ ip r
default via 192.168.122.1 dev enp1s0 proto dhcp src 192.168.122.17 metric 100 
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.17 metric 100 
192.168.122.1 dev enp1s0 proto dhcp scope link src 192.168.122.17 metric 100 
m@wpvm1:~$ 

^^ from the VM.
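In case it matters, this is the kind of forwarding I assume is missing in the container, since libvirt’s default network NATs the VM behind virbr0. Sketch only, untested; 192.168.122.17 is just the VM’s current DHCP lease from the `ip r` output above, so it may change:

```shell
# in the container: forward its port 80 to the VM's webserver
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 192.168.122.17:80
iptables -t nat -A POSTROUTING -p tcp -d 192.168.122.17 --dport 80 -j MASQUERADE
```

I’ve also seen `socat TCP-LISTEN:80,fork,reuseaddr TCP:192.168.122.17:80` suggested as a simpler user-space alternative, if that’s a better fit.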

Can someone please help me understand where I’m going wrong?

The container is running with network_mode: bridge enabled.

Any help would be greatly appreciated.

Kindly,

Puc

PS: I’m willing to change up anything, and would ideally keep things as minimal as possible, but I’ve already tried so many different things.

Hi pucasso!
If I understand correctly, what you are doing should be possible. We have many examples of the public device URL pointing to a container’s webserver. If that webserver is running in a VM, it should then just be a matter of networking configuration in the VM. Just to clarify: can you curl the VM’s webserver from a different container? That would confirm the VM itself is properly set up.
Could you share your docker-compose.yml file?
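In the meantime, a minimal balena docker-compose.yml for this kind of setup usually looks something like the sketch below (the service name and build path are placeholders, not your actual config):

```yaml
version: '2.1'
services:
  vm-host:
    build: ./vm-host
    privileged: true        # libvirt/KVM generally needs extended privileges
    network_mode: bridge
    ports:
      - "80:80"             # publish the webserver so the public device URL can reach it
```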