EXOTIC SILICON
“IPSEC here and routing domains there, it's all action in our dual stacked tunnel!”
Running a VPN between two OpenBSD hosts
The OpenBSD base system contains all of the tools that we need to configure and run an IPSEC VPN between two hosts.
Let's take a look at configuring IKED, IPv6, routing domains, NDP and more to bring the benefits of the datacenter to our own desktop.
Introduction
Today we'll see how to set up a VPN between two OpenBSD hosts, with the following design goals:
  • Implemented using functionality included in the base system
  • Routing of outbound connections via the VPN
  • Tunneling IPv6 over IPv4 to bring IPv6 connectivity to an IPv4-only host
  • Putting the tunnelled traffic in a separate routing domain
  • Providing inbound connectivity to hosts with a non-public IP
  • Use of IKEv2
Design goals for our VPN
The VPN server, (or responder in IKE parlance), used during the preparation of this article was implemented using a virtual machine with a single static IPv4 address and a /64 block of IPv6 addresses.
The local host only requires IPv4 connectivity. The IPSEC tunnel can just as easily be run over IPv6 and in fact this would usually be our preference, however since one of the goals of this article is specifically to show how to bring IPv6 connectivity to a host that lacks it, the examples assume that the tunnel will be configured between two IPv4 endpoints.
Specific examples presented in this article are based on OpenBSD 7.5-release, but should be portable to other versions with minimal changes.
Hostnames and IPs
In the examples, we'll be using the following hostnames and IP ranges:
Responder, (IKED server):
  Hostname: responder.lan
  Public IPv4, (static): 203.0.113.123
  Public IPv6, (static): 2001:db8:1234:1234::/64

Initiator, (IKED client):
  Hostname: initiator.lan
  Public/NAT'ed IPv4, (dynamic) [2]: 198.51.100.1/32
  Private IPv4 range, (tunnel): 192.168.69.0/24
  Tunnelled IPv6 range [1]: 2001:db8:1234:1234:69::/80
Footnotes:
  1. The tunnelled IPv6 range is a /80 subnet of the /64 allocated to the server.
  2. The ISP assigned dynamic IP won't be needed during configuration, but might appear in log files and diagnostic output.
Key generation and iked authentication
Since we'll be running the IPSEC tunnel between two OpenBSD machines we can use modern EC key pairs and avoid the need to use older RSA keys, (which may be required when connecting to IKED with other less capable IKEv2 clients).
The simplest approach to getting authentication working is just to copy the pre-generated 256-bit public keys from one host to the other:
initiator.lan # scp -p /etc/iked/local.pub root@responder.lan:/etc/iked/pubkeys/fqdn/initiator.lan
responder.lan # scp -p /etc/iked/local.pub root@initiator.lan:/etc/iked/pubkeys/fqdn/responder.lan
Copying the public keys between the two machines.
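If there is any doubt as to whether the key pair has already been generated, (it's normally created automatically at boot), the standard locations can be checked on each host first:
# ls -l /etc/iked/local.pub /etc/iked/private/local.key
Checking for the automatically generated key pair in its default location.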
More sophisticated authentication methods using certificates are also available.
For further discussion about key algorithms and the use of certificates for authentication with IKED, see TLS keys and certificates, part of our reckless guide to OpenBSD series.
Configuration of the server
Packet forwarding
Forwarding of both IPv4 and IPv6 traffic needs to be enabled if it's not already.
The following options can be added to /etc/sysctl.conf to enable forwarding at boot time:
net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1
Enable forwarding of IPv4 and IPv6 packets
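The same settings can also be applied to the running system immediately, without waiting for a reboot:
# sysctl net.inet.ip.forwarding=1
# sysctl net.inet6.ip6.forwarding=1
Enable forwarding on the running system.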
Configuration of the server
iked.conf
The main configuration file for IKED is /etc/iked.conf. On responder.lan it needs to look something like this:
ikev2 passive esp from any to dynamic \
        local 203.0.113.123 peer any \
        srcid responder.lan dstid initiator.lan \
        ecdsa384 \
        config address 2001:db8:1234:1234:69::1/128 \
        config address 192.168.69.1/32
IKED configuration file on the responder
Here we're describing a flow that will accept packets from any internet host, (from any), to the automatically configured address at the other end of the tunnel, (to dynamic).
The tunnel endpoint on the server is fixed, (local 203.0.113.123), and the client is allowed to connect from any address, (peer any).
Source and destination IDs are set, (srcid responder.lan dstid initiator.lan), which will need to match those used in the configuration on the client, (although obviously inverted).
If you are using certificates for authentication, the ID used as srcid here should be present in the certificate's SAN field.
Note that in diagnostic output IKED identifies certificates it has loaded and validated by their common name field, but the value there is not used for authentication.
We're configuring IKED to assign single fixed addresses to initiator.lan for both IPv4 and IPv6, (config address 2001:db8:1234:1234:69::1/128 config address 192.168.69.1/32).
Although IKED is capable of assigning an address dynamically from a pool, doing so for the IPv6 address assignment makes it more cumbersome to ensure that an appropriate entry is added to the NDP address table, (see below).
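If a dynamic pool were nevertheless preferred for the IPv4 side, the fixed /32 assignment would typically be replaced with a prefix, something like the following, (shown only for comparison, we're not using it here):
config address 192.168.69.0/24
Hypothetical pool-based IPv4 address assignment.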
Configuration of the server
Firewall rules and NAT
To establish the tunnel with IKED we need to open udp port 500.
pass in on vio0 inet proto udp to 203.0.113.123 port isakmp
Key exchange uses UDP port 500 by default
We also allow esp packets, which can either be directly over IP or encapsulated in udp using port 4500.
pass in on vio0 inet proto udp to 203.0.113.123 port ipsec-nat-t
pass in on vio0 inet proto esp
If you are sure that you will either always or never require UDP encapsulation, the unneeded rule can be omitted.
Outbound IPv4 traffic will have NAT applied, (specifically one to many NAT, or IP masquerading):
pass out on vio0 from 192.168.69.0/24 nat-to 203.0.113.123
Re-write the addresses from the 192.168.69.0/24 subnet to the public IP 203.0.113.123.
Outbound IPv6 traffic can be passed directly without NAT:
pass out on vio0 from 2001:db8:1234:1234:69::/80
IPv6 traffic is passed directly, (routed, not translated).
Allow inbound traffic from initiator.lan over enc0:
pass in on enc0 from 2001:db8:1234:1234:69::/80
pass in on enc0 from 192.168.69.0/24
Decrypted packets leaving the IPSEC tunnel appear via the enc0 interface.
Configuration of the server
Routing and neighbour discovery
In this configuration, outbound IPv4 packets from initiator.lan have their source address re-written to be the global IPv4 address of the responder. This makes inbound routing fairly straightforward, as it places no additional requirements on the configuration of the upstream network equipment.
On the other hand, with IPv6 we have a /64 subnet and packets sent to the 2001:db8:1234:1234:69::/80 subnet need to be routed over the IPSEC tunnel.
If we have a way to configure the upstream network equipment to route the 2001:db8:1234:1234:69::/80 subnet via responder.lan then this would usually be the preferred way of setting things up.
However this is not always possible, especially with low-end vps services which expect the whole IPv6 allocation to be used directly by a single host.
IPv6 uses the neighbour discovery protocol, (NDP), to direct network frames to the correct physical device.
Upstream network equipment that expects to communicate with the global IPv6 address assigned to initiator.lan without further routing will typically send a neighbour solicitation via ICMP and await a neighbour advertisement response before sending the actual data.
When an IPv6 address is directly configured on a network interface, neighbour advertisements are automatically handled by the kernel.
However, in the configuration described in the previous section, responder.lan needs to answer solicitations for IPs in the /80 subnet, which are not directly configured on the interface itself but which will instead be forwarded over the IPSEC tunnel.
Without specific further configuration, packets from upstream destined for initiator.lan won't be sent to the responder.
To fix this problem, we can manually add an entry to the NDP mapping table on responder.lan and set the proxy flag. This will cause responder.lan to answer the neighbour solicitations on behalf of initiator.lan, and allow user data to flow.
This is known as proxy NDP, and is analogous to proxy ARP in IPv4 networking.
The required entry can be added using /usr/sbin/ndp:
# ndp -s 2001:db8:1234:1234:69::1 xx:xx:xx:xx:xx:xx proxy
Replace xx:xx:xx:xx:xx:xx with the MAC address of the network interface that will receive the incoming packets.
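Static NDP entries added with ndp don't persist across a reboot. One way to re-create the entry automatically at boot, (assuming here that vio0 is the interface facing the upstream network), is to add a command line to the relevant hostname.if(5) file, (/etc/rc.local would work just as well):
!/usr/sbin/ndp -s 2001:db8:1234:1234:69::1 xx:xx:xx:xx:xx:xx proxy
A line such as this in /etc/hostname.vio0 re-adds the proxy NDP entry at boot.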
Fun fact!
Extracting the MAC address from ifconfig output
When configuring things manually it's easy enough to find the MAC address for an interface by looking at the output of ifconfig, but in case you want to do it in a script one of the following pieces of shell magic can be used to extract it:
# ifconfig rge0 | grep "^.lladdr" | cut -d ' ' -f 2
# ifconfig rge0 | sed -n s/^.lladdr\ //p
Configuration of the server
Proxy ARP
Of course, the issue described above is not exclusive to IPv6.
If we actually had a block of global IPv4 addresses to route via the tunnel and were therefore not applying NAT, then we would need to add ARP entries for the required IPv4 addresses in the same way:
# arp -s 203.0.113.124 xx:xx:xx:xx:xx:xx permanent pub
Note the slightly different syntax compared with ndp, s/proxy/permanent pub/
Fun fact!
NAT with IPv6
For completeness, it's worth mentioning that we could also have avoided the NDP proxying issue with IPv6 by applying one to many NAT to the outbound IPv6 traffic.
Assuming that a global IPv6 address 2001:db8:1234:1234::1 is actively configured on the responder, a typical rule using the fd69::/80 subnet, (taken from the fd00::/8 range reserved for unique local addresses), might look like this:
pass out on vio0 from fd69::/80 nat-to 2001:db8:1234:1234::1
At this point the responder is essentially ready for use and we can turn our attention to configuring the initiator.
Configuration of the client
Virtual interfaces
To start with, we'll create two new virtual network interfaces on initiator.lan and put both of them in routing domain 1:
# ifconfig vether1 create rdomain 1
# ifconfig enc1 create rdomain 1
  • The vether device will be configured with the IPs that are assigned by IKED.
  • The enc device will make the tunnel traffic visible after decryption, and won't itself be assigned an IP address.
Technically there is no need to create a separate enc1 device, as we could just as easily use the enc0 interface which exists by default. However, from an administrative viewpoint it's convenient to have the device number match the routing domain that the traffic is in.
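Interfaces created manually with ifconfig also won't survive a reboot. A minimal sketch of hostname.if(5) files to re-create them in routing domain 1 at boot, (assuming that nothing else needs to be configured on these interfaces), might be:
initiator.lan # echo "rdomain 1 up" > /etc/hostname.vether1
initiator.lan # echo "rdomain 1 up" > /etc/hostname.enc1
Create hostname.if files so that the interfaces are re-created at boot.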
Configuration of the client
iked.conf
The configuration file for IKED on initiator.lan is effectively the opposite of that on responder.lan:
ikev2 active esp rdomain 1 \
        from dynamic to any \
        peer responder.lan \
        srcid initiator.lan dstid responder.lan \
        ecdsa384 \
        request address any \
        iface vether1 tap enc1
IKED configuration file on the initiator
This time we're describing a flow that will accept packets from the address assigned to us by the responder, (from dynamic), to any internet host, (to any).
The local tunnel endpoint is not specified here, (no local argument), as it is just our ISP assigned dynamic IP. The remote tunnel endpoint is specified though, as responder.lan, (peer responder.lan).
Source and destination IDs are inverted compared to the configuration on the responder, (srcid initiator.lan dstid responder.lan).
The request address any option configures IKED to automatically set up the assigned address on the network adaptor specified with the iface option, in this case vether1.
Finally, we ensure that inbound traffic from the tunnel appears on its own dedicated enc interface with tap enc1.
Configuration of the client
Firewall rules
No special firewall configuration is necessary on the initiator except to allow outbound traffic on enc1, unless we want to allow inbound traffic from internet hosts to listening server processes.
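As a sketch, assuming a restrictive block-by-default ruleset, the relevant additions to pf.conf on initiator.lan might look something like this:
pass out on enc1
# example only, needed if accepting inbound connections to a local webserver over the VPN
pass in on enc1 inet6 proto tcp to 2001:db8:1234:1234:69::1 port www
Example pf rules on the initiator; the inbound rule is only required when running servers over the VPN.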
Testing the setup
For production use, IKED would usually be configured to start at boot time via an entry in /etc/rc.conf.local.
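The required entry can be added on each machine with rcctl(8):
# rcctl enable iked
Enable IKED at boot time.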
However for testing purposes we can start IKED in debug mode on each machine at the console:
initiator.lan # iked -d
responder.lan # iked -d
Start IKED in debug mode, with diagnostic output sent to the console.
If authentication and negotiation of the tunnel completes successfully, we should see output similar to the following on the responder:
spi=0x7a1aa8f7cb2687fe: recv IKE_SA_INIT req 0 peer 198.51.100.1:17153 local 203.0.113.123:500, 518 bytes, policy 'policy1'
spi=0x7a1aa8f7cb2687fe: send IKE_SA_INIT res 0 peer 198.51.100.1:17153 local 203.0.113.123:500, 235 bytes
spi=0x7a1aa8f7cb2687fe: recv IKE_AUTH req 1 peer 198.51.100.1:17183 local 203.0.113.123:4500, 619 bytes, policy 'policy1'
spi=0x7a1aa8f7cb2687fe: assigned address 192.168.69.1 to FQDN/initiator.lan
spi=0x7a1aa8f7cb2687fe: assigned address 2001:db8:1234:1234:69::1 to FQDN/initiator.lan
spi=0x7a1aa8f7cb2687fe: send IKE_AUTH res 1 peer 198.51.100.1:17183 local 203.0.113.123:4500, 533 bytes, NAT-T
spi=0x7a1aa8f7cb2687fe: ikev2_childsa_enable: loaded SPIs: 0x6bc4d865, 0x167382e3 (enc aes-128-gcm esn)
spi=0x7a1aa8f7cb2687fe: ikev2_childsa_enable: loaded flows: ESP-0.0.0.0/0=192.168.69.1/32(0), ESP-::/0=2001:db8:1234:1234:69::1/128(0)
spi=0x7a1aa8f7cb2687fe: established peer 198.51.100.1:17183[FQDN/initiator.lan] local 203.0.113.123:4500[FQDN/responder.lan] assigned 192.168.69.1 assigned 2001:db8:1234:1234:69::1 policy 'policy1' as responder (enc aes-128-gcm group curve25519 prf hmac-sha2-256)
Debug output from IKED on responder.lan
And on the initiator:
ikev2_init_ike_sa: initiating "policy1"
spi=0x7a1aa8f7cb2687fe: send IKE_SA_INIT req 0 peer 203.0.113.123:500 local 0.0.0.0:500, xxx bytes
spi=0x7a1aa8f7cb2687fe: recv IKE_SA_INIT res 0 peer 203.0.113.123:500 local 10.183.149.104:500, xxx bytes, policy 'policy1'
spi=0x7a1aa8f7cb2687fe: send IKE_AUTH req 1 peer 203.0.113.123:4500 local 10.183.149.104:4500, xxx bytes, NAT-T
spi=0x7a1aa8f7cb2687fe: recv IKE_AUTH res 1 peer 203.0.113.123:4500 local 10.183.149.104:4500, xxx bytes, policy 'policy1'
spi=0x7a1aa8f7cb2687fe: ikev2_ike_auth_recv: obtained lease: 192.168.69.1
spi=0x7a1aa8f7cb2687fe: ikev2_ike_auth_recv: obtained lease: 2001:db8:1234:1234:69::1
spi=0x7a1aa8f7cb2687fe: ikev2_childsa_enable: loaded SPIs: 0x6bc4d865, 0x167382e3 (enc aes-128-gcm esn)
spi=0x7a1aa8f7cb2687fe: ikev2_childsa_enable: loaded flows: ESP-192.168.69.1/32=0.0.0.0/0(0), ESP-2001:db8:1234:1234:69::1/128=::/0(0)
spi=0x7a1aa8f7cb2687fe: established peer 203.0.113.123:4500[FQDN/responder.lan] local 10.183.149.104:4500[FQDN/initiator.lan] policy 'policy1' as initiator (enc aes-128-gcm group curve25519 prf hmac-sha2-256)
Debug output from IKED on initiator.lan
Now we can check that the routing table in routing domain 1 does indeed route all traffic via the IPSEC tunnel:
initiator.lan # route -n -T 1 show
Routing tables

Internet:
Destination                        Gateway                            Flags   Refs  Use  Mtu  Prio  Iface
default                            192.168.69.1                       UGS        0    0    -     6  vether1
192.168.69.1                       fe:e1:ba:d2:66:29                  UHLhl      1    2    -     1  vether1
192.168.69.1/32                    192.168.69.1                       UCn        0    0    -     4  vether1

Internet6:
Destination                        Gateway                            Flags   Refs  Use  Mtu  Prio  Iface
default                            2001:db8:1234:1234:69::1           UGS        0    0    -     6  vether1
2001:db8:1234:1234:69::1           fe:e1:ba:d2:66:29                  UHLhl      1    2    -     1  vether1
fe80::%vether1/64                  fe80::fce1:baff:fed2:6629%vether1  UCn        0    0    -     4  vether1
fe80::fce1:baff:fed2:6629%vether1  fe:e1:ba:d2:66:29                  UHLl       0    0    -     1  vether1
ff01::%vether1/32                  fe80::fce1:baff:fed2:6629%vether1  Um         0    2    -     4  vether1
ff02::%vether1/32                  fe80::fce1:baff:fed2:6629%vether1  Um         0    5    -     4  vether1
Routing table in rdomain 1
Caveat!
Nameserver accessibility
It's important to note that the configuration defined in /etc/resolv.conf is common between routing domains.
If the nameservers listed in this file for regular usage won't accept connections via the VPN, then other arrangements for name resolution will need to be made.
One typical situation in which this might happen is using an ISP hosted nameserver for regular use which blocks connections from IPs which are not on that ISP's network.
If you want to continue using such a resolver for regular use outside of the VPN rather than change resolv.conf to point to one which is accessible from behind the VPN as well, there are at least two possible ways to do that.
One workaround, if your regular non-VPN connectivity is IPv4 only, is simply to add an additional IPv6 nameserver. This works, but it's a rather blunt approach which may result in delays while attempts to reach the IPv4 resolver time out.
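For example, assuming a hypothetical resolver at 2001:db8:ffff::53 which is reachable over the VPN, resolv.conf would simply gain an extra line:
nameserver 2001:db8:ffff::53 # hypothetical VPN-reachable resolver
An additional IPv6 nameserver entry in /etc/resolv.conf.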
Alternatively, it's possible to use the rdr-to feature of pf to redirect requests that are sent to the ISP resolver elsewhere. This can be implemented on a per-routing-domain basis.
If the ISP resolver is at IPv4 address aaa.bbb.ccc.ddd, and we have another resolver at IPv4 address eee.fff.ggg.hhh which is accessible via the VPN, then we can add a pass rule on the enc1 interface with a line in pf.conf such as this:
pass out on enc1 proto udp to aaa.bbb.ccc.ddd rdr-to eee.fff.ggg.hhh
Client programs can be run in the alternative routing domain in order to access the internet via the VPN:
initiator.lan # route -T 1 exec ping6 -c 2 exoticsilicon.com
PING www.exoticsilicon.com (2a03:6000:6f64:639::8): 56 data bytes
64 bytes from 2a03:6000:6f64:639::8: icmp_seq=0 hlim=60 time=40.055 ms
64 bytes from 2a03:6000:6f64:639::8: icmp_seq=1 hlim=60 time=38.403 ms
--- www.exoticsilicon.com ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 38.403/39.229/40.055/0.826 ms
ICMP echo to a remote host over the VPN
It's also possible to start a shell running in routing domain 1. This allows us to invoke other commands without special parameters and still have their network access routed via the VPN.
initiator.lan # route -T 1 exec /bin/sh
Start a shell in routing domain 1
Caveat!
IPSEC protocol overhead, MTU, and fragmentation
Since the IPSEC protocol encapsulates regular IP packets within encrypted IP packets, there is obviously an overhead in terms of packet length for each inner unencrypted packet that is transported across the tunnel.
As a result of this, the size of the largest packet that we can send through the tunnel without fragmentation is lower than it would be sending data directly across the same higher-level link that the tunnel itself runs across.
To reduce excessive fragmentation, as well as sporadic communication problems on links that don't handle fragmentation properly in the first place, we can lower the MTU from its default value.
Typically, setting an MTU of 1400 on the client's vether interface is sufficient to avoid problems.
initiator.lan # ifconfig vether1 mtu 1400
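If the hostname.if(5) approach sketched earlier is being used to create vether1 at boot, the reduced MTU can be included in that file instead:
initiator.lan # echo "rdomain 1 mtu 1400 up" > /etc/hostname.vether1
Set the lower MTU persistently via the hostname.if file.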
Running servers on the initiator
Server processes can also be run in routing domain 1 on initiator.lan and configured to listen on its global IPv6 address.
Servers configured in this way can accept inbound connections from arbitrary hosts on the wider internet.
For example, to run an instance of httpd in routing domain 1 with an alternative configuration file we might use:
initiator.lan # route -T 1 exec /usr/sbin/httpd -f /etc/httpd.conf.rd1
Start httpd in routing domain 1
The alternative configuration file for running httpd in rdomain 1 doesn't require any special entries, other than a listen directive on the assigned IP address.
server "rd1_example.lan" {
	listen on 2001:db8:1234:1234:69::1 port 80
	directory auto index
	root "/htdocs_rd1"
}
No special configuration of httpd is necessary, except for listening on the VPN provided IP address.
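Assuming that this is the only instance of httpd on the machine, it can also be started automatically at boot in routing domain 1 by setting the daemon flags and rtable with rcctl(8):
initiator.lan # rcctl enable httpd
initiator.lan # rcctl set httpd flags -f /etc/httpd.conf.rd1
initiator.lan # rcctl set httpd rtable 1
Configure httpd to start at boot in routing domain 1.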
Summary
Today we've seen how to configure a dual-stacked IPSEC tunnel between two OpenBSD hosts using the modern IKEv2 protocol and ECDSA keys.
One of the features of this set up is that it allows us to bring IPv6 connectivity to machines which are otherwise behind an IPv4-only internet connection.
We've also seen how to place the VPN traffic in a separate routing domain on the connecting client, so that use of the VPN can easily be switched in and out for different client programs.
Need help?
The setup described above is a starting point for building more complex systems. This can include multiple clients, the use of certificates instead of keys for authentication, and handling connections from IKEv2 implementations other than iked.
If you need assistance for enterprise VPN projects, an overview of commercial services available from Exotic Silicon can be found on our commercial services pages.