2018-01-28 15:24:53 -0500 EST

Selectively VPN Services with Linux Network Namespaces

My little home server does two things, mainly: it resolves DNS queries for my LAN (with Unbound) and gives me remote SSH access.

After the recent FCC repeal of net neutrality regulations, I signed up for a VPN service as a small protest. Naturally, I’d like to make it more difficult for Comcast to harvest my DNS queries by sending the DNS resolver traffic over the VPN. But if I run OpenVPN, I can’t SSH in for remote access. I need to run some services over the VPN, but leave others out.

Unfortunately, routing doesn’t work that way. There’s no easy way to tell the OS: route most traffic out the default route, but send DNS traffic out the VPN. Routing operates on IP addresses, not ports.

My first thought was to use a virtual machine or LXC container. That would work, but it’s a relatively heavy solution that requires additional ongoing administration (OS updates in the container, etc.).

Then I remembered Linux’s network namespaces. The kernel can provide isolation that gives a process its own network interfaces and its own routing table. Perfect!

My server already has a bridge named br0, which will help connect the isolated network namespace to the outside:

$ ip addr sh br0
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:07:25:9d:05:36 brd ff:ff:ff:ff:ff:ff
    inet brd scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::f64d:30ff:fe6f:129c/64 scope link
       valid_lft forever preferred_lft forever

Creation of a new network namespace is simple:

# ip netns add myns

Start any process in the namespace using the exec subcommand of ip-netns, even a Bash shell:

# ip netns exec myns bash

This isolated namespace does not share the interfaces or routing table of the host. Any process executed in the namespace can’t get out to the network. To run Unbound, our DNS resolver, in the namespace, we’ll need a connection to the host’s bridge.

Linux offers “veth” devices that operate in pairs, like virtual patch cables, to connect software bridges or network namespaces. We create the pair of devices, assign one end to the myns network namespace, and plug the other end into the host’s br0 bridge. Here’s a complete shell script to do so:

set -uf
ns=myns   # name of the network namespace; with set -u, this must be defined

# Create a private network namespace for OpenVPN and things that need to
# use the VPN, like Unbound.
ip netns add "$ns"
ip netns exec "$ns" ip li set dev lo up
ip link add veth0 type veth peer name veth1
ip link set veth1 netns "$ns"
ip li set up dev veth0
ip netns exec "$ns" ip addr add dev veth1
ip netns exec "$ns" ip li set up dev veth1
ip netns exec "$ns" ip ro add default via dev veth1
ip link set dev veth0 master br0

After that, the namespace’s veth1 interface should be visible on our LAN, and a ping run from inside the myns namespace should reach the outside:

# ip netns exec myns ping -c3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=60 time=39.0 ms
64 bytes from icmp_seq=2 ttl=60 time=43.4 ms
64 bytes from icmp_seq=3 ttl=60 time=42.2 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 39.013/41.583/43.440/1.891 ms

Assuming we’ve already installed and configured OpenVPN and Unbound, modify their systemd unit files to put them in the new network namespace. Both unit files need only two minor changes. Call the shell script to set up the network namespace using systemd’s ExecStartPre. (It’s not a problem if multiple services call this script, or if it runs again when the namespace has already been set up.) The other change is to ExecStart: prefix the existing command with ip netns exec.
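That idempotency can be had with a small guard at the top of the setup script. This is a sketch of my own, not the article’s exact script; it only assumes the myns namespace name used above:

```shell
#!/bin/sh
# Sketch: guard the namespace setup so repeated ExecStartPre calls are
# harmless. If the namespace already exists, there is nothing to do.
ns=myns

netns_exists() {
    # 'ip netns list' needs no privileges; match the name as a whole word.
    ip netns list 2>/dev/null | grep -qw "$1"
}

if netns_exists "$ns"; then
    echo "netns $ns already exists; skipping setup"
else
    echo "netns $ns not found; running setup"
    # ... the veth/bridge commands from the script above would go here ...
fi
```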

$ cat /etc/systemd/system/vpn.service
…
ExecStart=-/bin/ip netns exec myns /usr/sbin/openvpn --config /etc/openvpn/ovpn_udp/
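Put together, the OpenVPN unit might look roughly like this. This is a sketch under assumptions: the setup-script path /usr/local/bin/vpn-netns.sh is hypothetical, and only the ExecStartPre/ExecStart lines come from the article:

```ini
[Unit]
Description=OpenVPN inside the myns network namespace
After=network-online.target

[Service]
# Hypothetical path to the namespace setup script shown earlier.
ExecStartPre=/usr/local/bin/vpn-netns.sh
ExecStart=/bin/ip netns exec myns /usr/sbin/openvpn --config /etc/openvpn/ovpn_udp/
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The Unbound unit gets the same treatment: the same ExecStartPre line, and ip netns exec myns prefixed to its ExecStart command.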
That’s it. Unbound and OpenVPN run in the myns namespace. Everything else, including remote SSH access, happens outside the VPN. To later run another command over the VPN, do ip netns exec myns mycommand.
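For ad-hoc use, a tiny wrapper saves some typing. The vpnexec name is my own, not from the article, and the DRYRUN switch exists only so the example can be shown without root (namespace exec normally requires it):

```shell
#!/bin/sh
# vpnexec: run a command inside the myns namespace (hypothetical helper).
# With DRYRUN=1, print the command instead of executing it.
vpnexec() {
    if [ "${DRYRUN:-0}" = "1" ]; then
        echo ip netns exec myns "$@"
    else
        ip netns exec myns "$@"
    fi
}

# Example: check the public IP as seen from inside the VPN.
DRYRUN=1 vpnexec curl -s https://icanhazip.com
# prints: ip netns exec myns curl -s https://icanhazip.com
```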

#vpn #dns #linux #systemd
