
MACVLAN Setup #53

Open

RobHofmann opened this issue Mar 8, 2022 · 5 comments

@RobHofmann

Hi! First of all: thanks for this container. It seems to be really amazing.

I've been fiddling around with this container and it seems to work fine whenever you use --net=container:vpn. However, in my real setup I use multiple Docker hosts which, for various reasons, have containers that use a macvlan network setup. In short, I have something like this:

docker network create -d macvlan --subnet=192.168.0.0/19 --ip-range=192.168.7.0/25 \
  --gateway=192.168.0.1 --aux-address 'host=192.168.7.127' -o parent=ens160 eth1macvlan

docker network create -d macvlan --subnet=192.168.0.0/19 --ip-range=192.168.7.128/25 \
  --gateway=192.168.7.13 -o parent=ens192 eth1macvlanvpn

docker run --net=eth1macvlan -d --ip=192.168.7.13 --cap-add=NET_ADMIN \
  --device /dev/net/tun --name="vpn" -e VPN_CONFIG_FILE="myconfigfile.ovpn" \
  -v "/path/to/my/vpn/data":/data/vpn ghcr.io/wfg/openvpn-client

Then after that I want to spin up my containers on the eth1macvlanvpn network, whose gateway (192.168.7.13) is the vpn container, like this:

docker run --rm -it --network=eth1macvlanvpn alpine /bin/sh

This should be routed through the VPN container, but the VPN container doesn't seem to accept traffic from this IP range (my assumption). If I, at this point, create another container using the --net=container:vpn flag, it works fine: traffic from that container is routed through the VPN container.
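For what it's worth, one quick sanity check from inside that alpine shell is sketched here (assuming the busybox ip and wget applets are available and an external IP echo service such as ifconfig.me is reachable):

    ip route                      # default route should point at 192.168.7.13, the vpn container
    wget -qO- http://ifconfig.me  # should print the VPN exit IP once forwarding works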

Is anything needed to whitelist additional incoming IPs?

@wfg
Owner

wfg commented Mar 8, 2022

To troubleshoot, I would see if it works with the kill switch disabled.

If that doesn't fix it, try fiddling with the SUBNETS variable. My first guess there would be to add 192.168.0.0/19.
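For illustration, the vpn container re-run with that variable set might look like this (a sketch based on the original run command above; the /19 value is only a first guess):

    docker run --net=eth1macvlan -d --ip=192.168.7.13 --cap-add=NET_ADMIN \
      --device /dev/net/tun --name="vpn" -e VPN_CONFIG_FILE="myconfigfile.ovpn" \
      -e SUBNETS="192.168.0.0/19" \
      -v "/path/to/my/vpn/data":/data/vpn ghcr.io/wfg/openvpn-client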

@RobHofmann
Author

Adding -e KILL_SWITCH=off allows me to route through the container correctly.

I don't fully understand what is happening, but is there a way to keep KILL_SWITCH enabled and make it work?

@RobHofmann
Author

Additional information:
Working without the kill switch:

Chain INPUT (policy ACCEPT 1785 packets, 1220K bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 115 packets, 6900 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 1363  185K ACCEPT     all  --  eth0   tun0    0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
 1687 1121K ACCEPT     all  --  tun0   eth0    0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT 1545 packets, 276K bytes)
 pkts bytes target     prot opt in     out     source               destination  

Not working with the kill switch:

Chain INPUT (policy DROP 1 packets, 104 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   10  6118 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    2   123 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
    2   456 ACCEPT     all  --  *      *       192.168.0.0/21       0.0.0.0/0           
    0     0 ACCEPT     all  --  tun0   *       0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy DROP 8 packets, 480 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  eth0   tun0    0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  tun0   eth0    0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    3   237 ACCEPT     all  --  *      lo      0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            192.168.0.0/21      
   12  2325 ACCEPT     udp  --  *      eth0    0.0.0.0/0            81.17.29.2           udp dpt:1194
    0     0 ACCEPT     udp  --  *      eth0    0.0.0.0/0            31.7.57.242          udp dpt:1194
    0     0 ACCEPT     all  --  *      tun0    0.0.0.0/0            0.0.0.0/0    
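One thing worth noting in the failing dump: traffic routed through the container traverses the FORWARD chain, whose policy is DROP and whose only eth0-to-tun0 rule requires state RELATED,ESTABLISHED, so new connections from the macvlan subnet end up as the 8 dropped packets shown above. A hypothetical rule that would admit them, run inside the vpn container (the /19 source is an assumption taken from the network setup earlier in the thread; the image's kill switch script may manage these rules itself):

    docker exec vpn iptables -I FORWARD -i eth0 -o tun0 -s 192.168.0.0/19 -j ACCEPT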

@wfg
Owner

wfg commented Mar 9, 2022

Hmm... Those chains look fine to me. I would expect this line:

    2   456 ACCEPT     all  --  *      *       192.168.0.0/21       0.0.0.0/0           

would be enough to allow the traffic in.

Can you add the output of ip r for both cases as well?
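For reference, that output can be captured with something like this (assuming the ip utility is present in both images):

    docker exec vpn ip r
    docker run --rm --network=eth1macvlanvpn alpine ip r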

@RobHofmann
Author

I had a similar issue trying out another container, and got a fix/workaround here:
qdm12/gluetun#738
