I think you already have a kill-switch (of sorts) in place with the two-WireGuard-container setup, since your clients lose internet access (except to the local network, since there’s a separate route for that on the WireGuard “server” container) if any of the following happens:
- The “client” container is spun down
- The WireGuard interface inside the “client” container is spun down (you can try this out by execing `wg-quick down wg0` inside the container)
- The interface is up but the VPN connection itself is down (try changing the endpoint IP to a random one instead of the correct one provided by your VPN service provider)
I can’t be 100% sure, because I’m not a networking expert, but this seems like enough of a “kill-switch” to me (a quick way to test it is sketched below). I’m not sure what you mean by leveraging the restart. One of the things I found annoying about the Gluetun approach is that I would have to restart every container that depends on its network stack whenever Gluetun itself got restarted or updated.
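If you want to sanity-check the kill-switch yourself, something along these lines should work (a rough sketch; `wireguard-client` is a placeholder for whatever your “client” container is actually called, and it assumes a peer device is already connected through the “server”):

```sh
# From the connected peer device, confirm traffic exits through the VPN:
curl https://ifconfig.me        # should print your VPN provider's exit IP

# Take the tunnel down inside the "client" container:
docker exec wireguard-client wg-quick down wg0

# From the peer device again:
curl https://ifconfig.me        # should now hang/fail instead of leaking your real IP
```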
But anyway, I went ahead and messed around on a VPS with the WireGuard+Gluetun approach and got it working. I’m using the latest versions of the linuxserver.io WireGuard container and Gluetun at the time of writing. There are two things missing from the Gluetun firewall configuration you posted:
- A `MASQUERADE` rule on the tunnel, meaning the `tun0` interface.
- Gluetun is configured to drop all `FORWARD` packets (filter table) by default. You’ll have to change that chain policy to `ACCEPT`. Again, I’m not a networking expert, so I’m not sure whether this compromises the kill-switch in any way that’s relevant to the desired setup/behavior. You could potentially set a more restrictive rule that only allows traffic coming in from `<wireguard_container_IP>` (see the sketch right after this list), but I’ll leave that up to you. You’ll also need to figure out the best way to persist the rules through container restarts (more on that below).
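For reference, here’s roughly what that more restrictive variant could look like (untested sketch; it keeps the default `DROP` policy on `FORWARD` and only forwards traffic for the “server” container, whose address `172.22.0.5` comes from the compose file below):

```sh
# Inside the gluetun container. The linuxserver.io WireGuard container
# masquerades its peers behind its own eth0 address by default, so forwarded
# packets should arrive with source 172.22.0.5.
iptables -t filter -A FORWARD -i eth0 -s 172.22.0.5 -j ACCEPT
iptables -t filter -A FORWARD -o eth0 -d 172.22.0.5 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```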
First, here’s the docker compose setup I used:
```yaml
networks:
  wghomenet:
    name: wghomenet
    ipam:
      config:
        - subnet: 172.22.0.0/24
          gateway: 172.22.0.1

services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
    volumes:
      - ./config:/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=<your stuff here>
      - VPN_TYPE=wireguard
      # - WIREGUARD_PRIVATE_KEY=<your stuff here>
      # - WIREGUARD_PRESHARED_KEY=<your stuff here>
      # - WIREGUARD_ADDRESSES=<your stuff here>
      # - SERVER_COUNTRIES=<your stuff here>
      # Timezone for accurate log times
      - TZ=<your stuff here>
      # Server list updater
      # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
      - UPDATER_PERIOD=24h
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    networks:
      wghomenet:
        ipv4_address: 172.22.0.101

  wireguard-server:
    image: lscr.io/linuxserver/wireguard
    container_name: wireguard-server
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1001
      - TZ=<your stuff here>
      - INTERNAL_SUBNET=10.13.13.0
      - PEERS=chromebook
    volumes:
      - ./config/wg-server:/config
      - /lib/modules:/lib/modules # optional
    restart: always
    ports:
      - 51820:51820/udp
    networks:
      wghomenet:
        ipv4_address: 172.22.0.5
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
```
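For completeness: after bringing the stack up, Gluetun logs the public IP it ends up with, which is a quick way to confirm the tunnel actually came up before touching any firewall rules:

```sh
docker compose up -d
docker logs -f gluetun   # wait for the line reporting the tunnel's public IP
```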
You already have your “server” container properly configured. Now for Gluetun:
First, I exec into the container: `docker exec -it gluetun sh`.
Then I set the `MASQUERADE` rule on the tunnel: `iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE`.
And finally, I change the `FORWARD` chain policy in the filter table to `ACCEPT`: `iptables -t filter -P FORWARD ACCEPT`.
Note on the last command: in my case I used `iptables-legacy` because all the rules were already defined there (`iptables` warns you when that’s the case), but your container’s version may vary. I saw different behavior on the testing container I spun up on the VPS compared to the one running on my homelab.
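As for persisting the rules through container restarts (mentioned above), the simplest approach I can think of is a small host-side script that re-applies them whenever the container is recreated, run manually, from a systemd unit, or from whatever supervises your compose stack. A sketch matching the commands above (`apply-gluetun-rules.sh` is a hypothetical helper; check which iptables backend actually holds your rules first):

```sh
#!/bin/sh
# apply-gluetun-rules.sh -- re-apply the firewall tweaks after gluetun restarts.
# Add the MASQUERADE rule only if it isn't there yet (-C checks for existence):
docker exec gluetun iptables -t nat -C POSTROUTING -o tun+ -j MASQUERADE 2>/dev/null \
  || docker exec gluetun iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
# Setting a chain policy is idempotent; swap in iptables-legacy if needed:
docker exec gluetun iptables-legacy -t filter -P FORWARD ACCEPT
```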
Good luck, and let me know if you run into any issues!
EDIT: The rules look like this afterwards:
Output of `iptables-legacy -vL -t filter`:

```
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
10710 788K ACCEPT all -- lo any anywhere anywhere
16698 14M ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
1 40 ACCEPT all -- eth0 any anywhere 172.22.0.0/24
# note the ACCEPT policy here
Chain FORWARD (policy ACCEPT 3593 packets, 1681K bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
10710 788K ACCEPT all -- any lo anywhere anywhere
13394 1518K ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- any eth0 dac4b9c06987 172.22.0.0/24
1 176 ACCEPT udp -- any eth0 anywhere connected-by.global-layer.com udp dpt:1637
916 55072 ACCEPT all -- any tun0 anywhere anywhere
```
And the output of `iptables -vL -t nat`:

```
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_OUTPUT all -- any any anywhere 127.0.0.11
# note the MASQUERADE rule here
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_POSTROUTING all -- any any anywhere 127.0.0.11
312 18936 MASQUERADE all -- any tun+ anywhere anywhere
Chain DOCKER_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- any any anywhere 127.0.0.11 tcp dpt:domain to:127.0.0.11:39905
0 0 DNAT udp -- any any anywhere 127.0.0.11 udp dpt:domain to:127.0.0.11:56734
Chain DOCKER_POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT tcp -- any any 127.0.0.11 anywhere tcp spt:39905 to::53
0 0 SNAT udp -- any any 127.0.0.11 anywhere udp spt:56734 to::53
```