“Inbound SMTP without a static IP”
Tunnelling SMTP over wireguard for traditional mail delivery from a VM
Did our earlier SMTP over IPSEC article blow your mind?
Do you prefer wireguard to IPSEC?
But you still want inbound SMTP over a dynamic IP?
We hear your pain again!
But no worries, because we're here with an update!
In this guide, Jay updates his original article and shows us how to set up a tunnel using wireguard between a local machine running OpenBSD and a remote VM hosted at OpenBSD Amsterdam. The idea behind this is to allow inbound SMTP connections from the VM to the local machine, for real, ‘traditional’, SMTP-based mail delivery all the way to your workstation, banishing tedious POP mailboxes and messy IMAP sessions to the bitbucket forever.
As we mentioned in our previous publication about running SMTP over an IPSEC tunnel, running your own SMTP server to collect and relay mail for your domain has some nice advantages over using a third party or ISP mail server.
Not yet running your own SMTP?
Fair enough, it's easy to just claim that running our own SMTP has numerous advantages, but what exactly are they?
Well, aside from becoming independent of large corporate email service providers, (which may in itself be enough of a reason for some users), some of the key technical advantages are:
  • Support for the latest standards, including IPv6, and TLSv1.3
  • Speed, flexibility, and local control of security settings
  • Dropping known junk connections before receiving junk message bodies
  • Advanced filtering based on connecting IP and remote user behaviour
  • Free choice of platform and SMTP software
  • Ability to set up custom configurations
If that's not enough, it's also an invaluable learning experience.
However, reliable mail delivery via SMTP requires a machine connected 24 hours per day to a high quality internet connection with guaranteed uptime, as well as a publicly accessible static IP address which is not listed on any widely used blacklists. For this, and other reasons, running your own SMTP server on a typical residential broadband connection is usually not a good idea. Since virtual servers are now available at low cost, running your own SMTP server in a VM makes a lot of sense, as the prerequisites we've just mentioned are much easier to obtain.
Setting up an SMTP server in a VM is not particularly difficult, although first time users might encounter a few difficulties along the way. Outbound mail doesn't really pose a problem, as you can easily connect to your own remote SMTP server from any IP to submit mail for relaying.
Inbound mail via SMTP, without a static IP?
Yes, it can be done!
The real problem, and the main issue we're going to solve here today, is how to collect inbound mail that is waiting for you on the VM.
If you've been used to using a desktop or mobile client program to access an IMAP or POP mailbox at your ISP, or even accessed all of your mail via a web-based interface, then you might be satisfied with setting up an IMAP or webmail server on the VM. At least initially.
However, many advanced and experienced users of BSD or Linux machines, as well as IT professionals, will desire a real SMTP connection all the way to their own workstation. This is, of course, impossible if that workstation doesn't have a fixed IP, since the VM will have no way to reliably initiate an inbound connection to it when mail arrives for onward delivery.
Although an SMTP connection can be simulated to a degree using programs such as fetchmail to download mail from a POP mailbox and then submit it into the local mail delivery system, it just isn't the same as having a setup with a real SMTP connection at every step.
If you have a static IP address on your broadband connection, then configuring the SMTP server running in your VM to deliver mail to an SMTP server running locally will usually work, (unless your ISP filters or blocks inbound connections). Even if the local IP is listed on public blacklists or your connection isn't solid 24 hours per day, mail delivery from the VM will be possible, as you are in full control of both machines and therefore able to configure them arbitrarily.
However there is still a need for a static IP address which allows inbound connections, and this remains a barrier for many users who simply cannot obtain a static IP at a reasonable cost.
Proposed solution - a tunnel using wireguard
The general solution to this whole problem is to set up some kind of IP tunneling between the local machine which is using a dynamic IP and the remote VM which has a static IP.
Once established by the local machine, the tunnel stays open and packets can flow back from the VM to the local machine at any time, even establishing new inbound connections. If the outer tunnel connection goes down, it will be quickly re-established once connectivity is restored, and data can once again flow across it in both directions. This process should work even if the dynamic IP changes. In fact, it can be set up to fall back to using a completely different internet connection such as a 4G cellular link as a backup in place of the main broadband link, and the tunnel should be promptly re-established without much difficulty.
Why not just use POP3 from the VM?
(and feed it into the local mailspool?)
Well, of course you can do. Programs such as fetchmail allow exactly this, and many desktop clients even have POP and IMAP functionality built right in. So you can indeed download your mail from a remote server, (either your own server, or somebody else's), and save it in your local mailbox, completely bypassing your local system's mailspooling service.
But then you can't, (easily), use any of the features of the system's own mail spooler, either. You'll have created a non-standard system which, whilst it might work, is arguably more cumbersome than it needs to be.
On top of that, many of the popular POP and IMAP clients have frustrating arbitrary limitations. For example, whilst there might be an option to automatically delete collected mail from the server once it has been collected via POP, often no such option is available for IMAP based connections. There seems to be an assumption that the server is the final canonical resting place for received mail, and that via IMAP we are only supposed to be making local copies of it.
Connecting via POP is tedious, because whereas an IMAP session can be left open for an extended period of time, (hours), and supports pushing of new mail received during that time, POP does not. To simulate this using POP, we need to poll the server at short intervals, perhaps as short as 60 seconds. This is, quite simply, messy.
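To make the comparison concrete, polling via POP with fetchmail typically looks something like the following sketch of a .fetchmailrc; the hostname and credentials here are placeholders, not part of our example setup:

```
# ~/.fetchmailrc (sketch): wake up every 60 seconds and pull new mail via
# POP3, approximating push delivery by polling -- the messiness described above
set daemon 60
poll pop.example.com protocol pop3
    user "test" password "secret" fetchall
```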
So whilst mail collection via POP or IMAP might be convenient for a new and inexperienced user who just wants to get things up and running as quickly as possible, it has limitations.
It's also quite useless for anybody who wants to learn about and get experience of how SMTP mail delivery really works.
The whole process of bringing up and maintaining the tunnel should be transparent to the SMTP server, which simply delivers to the assigned IP address regardless of the fact that it is actually a private IP address rather than a publicly accessible one. If mail delivery is attempted whilst the tunnel is down, the VM just sees a network error, and the mail is held for the next delivery attempt.
In this guide, we'll be setting up a tunnel using the wireguard protocol. This will be between two machines, both running OpenBSD 7.0: a local machine, and a VM hosted at OpenBSD Amsterdam. The principles should be easily applied to other hosting services and even other operating systems, but the specific examples given here are primarily aimed at users of OpenBSD.
Everything we need is included in a full installation of the OpenBSD base packages. That's to say, nothing whatsoever from the ports tree is necessary to set up SMTP delivery over a wireguard tunnel.
Of course, SMTP isn't the only useful protocol that we can tunnel from the remote VM using wireguard, and many of the principles that we're explaining here are equally applicable to other protocols.
A quick introduction to wireguard
Since many readers will not be particularly familiar with wireguard, a short introduction seems useful.
More information can be found in the manual page for wg(4) on an OpenBSD machine.
Wireguard is a UDP-based, IP-tunneling protocol that can be used to set up peer-to-peer virtual private networks, (VPNs), between two or more peers.
Compared with alternative VPN protocols such as IPSEC, wireguard requires very little in the way of configuration and as such has a lower barrier to entry for users who do not already have experience with setting up VPNs.
Support for wireguard in OpenBSD is in the form of a kernel driver, so there are no associated user-space daemons to run. We simply create and configure the wg interface directly using ifconfig.
Why use wireguard?
(Can't we use IPSEC instead?)
If you prefer to use IPSEC in place of wireguard, please see the original version of this guide for more information about SMTP over IPSEC tunnels.
Note that wireguard networks always operate in tunnel mode, in other words we assign separate and unique IP addresses to the peers to use as the endpoints for their encrypted communications.
Readers who have used other VPN technologies that operate in transport mode, where IP data is transparently encrypted and decrypted at the endpoints but routed using the original IP headers will need to be aware of this difference.
Wireguard supports both IPv4 and IPv6, and can happily tunnel one protocol over the other where necessary or desired.
Whilst it can support moderately complex networking setups with multiple peers connecting to a single server, the simple point-to-point link we will be looking at today is a particularly good example of the benefits of the wireguard protocol in terms of quick setup, low maintenance, and easy administration.
Block diagram of the proposed solution
Private IPv6 block for the VPN: 2001:db8:c1da::1/48
Private IPv6 block for the LAN: 2001:db8:6969::1/48

namorar (remote VM, connected directly to the internet)
  Public IPv6 block: 2001:db8:1234:1234::1/64
    2001:db8:1234:1234::c155  namorar.example.com  (Main IPv6 connectivity)
    2001:db8:1234:1234::b17e  smtp.example.com     (IPv6 outbound mail to internet)
    2001:db8:1234:1234::6969  mx1.example.com      (IPv6 inbound mail from internet)
    Static IPv4               smtp.example.com     (IPv4 outbound mail to internet)
    Static IPv4               mx2.example.com      (IPv4 inbound mail from internet)
  VPN endpoint:
    2001:db8:c1da::2          vpn1.example.com

carinho (local machine, connected directly to the internet)
  Dynamic IPv4
  Dynamic IPv6: 2001:db8:dd::dd
  VPN endpoint:
    2001:db8:c1da::1          vpn0.example.com
  LAN address:
    2001:db8:6969::1          carinho, carinho.lan

mimando (local workstation, connected to carinho over the LAN)
  LAN address:
    2001:db8:6969::2          mimando, mimando.lan
The layout above shows the desired arrangement of our three machines, two physical machines locally and a remote VM. The hostnames corresponding to the assigned IP addresses are also given.
Note that only the five entries for namorar's public addresses need to be included in the public DNS. All of the other hostnames can simply be included in the local hosts file, as they are only used locally.
Hostnames in this example:
namorar is our VM at OpenBSD Amsterdam.
carinho is a machine on our local network with internet access.
mimando is our workstation.
Configuring virtual network interfaces for the tunnel endpoints
With the preamble out of the way, we can now concentrate on configuring the new setup.
Since the IPs for the VPN endpoints are going to be bound to virtual wg devices rather than physical network adaptors, the first step is to create these new wg devices. Eventually we will do this automatically on each boot by creating a configuration file /etc/hostname.wg0, but for the initial setup we'll use ifconfig interactively because when we generate and set each private key, we'll need to obtain the corresponding public key in order to configure the other peer.
Wireguard lacks an automatic mechanism for key exchange, so exchanging the keys between the two peers is a manual process. Although this does help to keep the core protocol simple, and isn't much of an issue when establishing short-lived ad-hoc encrypted tunnels between hosts, this approach does have a few consequences for us.
Since the public key installed on the remote peer is derived from the private key set on the local peer, we need to ensure that we set the same private key each time we initialize the interface, (in other words, on each boot). This means that we can't just directly use the syntax given in the examples section of the wg(4) manual page, which uses a shell back-tick escape to supply a random value directly to the wgkey parameter of ifconfig, because with this simple approach we have no way to repeatedly set the same value.
It also means that the keys that we generate will probably end up being in use for an extended period of time. This is not a problem in itself, but we should be aware of it.
As well as all this, we'll also choose and manually configure a specific UDP port to use for our wireguard traffic on both peers. A fixed, known port is obviously necessary on the remote server, as it needs to accept the initial inbound connection from our local machine which is on a dynamic IP. Whilst we could leave the local machine to choose a random port number, by setting a known value we can then easily restrict access to the wireguard protocol port on the server so that only packets originating from the known port number are processed by the wireguard code.
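The restriction described above could be expressed as a pf rule on the VM. The following is only a hedged sketch: the "egress" interface group is an assumption, and the rule's placement depends on the rest of your ruleset:

```
# /etc/pf.conf fragment (sketch): with both peers pinned to port 6900, the VM
# can ignore wireguard packets that do not originate from the agreed source port
pass in on egress inet6 proto udp from any port 6900 to (egress) port 6900
```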
carinho# openssl rand -base64 32 > /root/KEY_PRIVATE
carinho# ifconfig wg0 create wgport 6900 wgkey `cat /root/KEY_PRIVATE`
carinho# ifconfig wg0 | grep wgpubkey | cut -d ' ' -f 2 > /root/KEY_PUBLIC
carinho# chmod 600 /root/KEY_PRIVATE /root/KEY_PUBLIC
carinho# scp -p /root/KEY_PUBLIC root@namorar:/root/KEY_PUBLIC_REMOTE
namorar# openssl rand -base64 32 > /root/KEY_PRIVATE
namorar# ifconfig wg0 create wgport 6900 wgkey `cat /root/KEY_PRIVATE`
namorar# ifconfig wg0 | grep wgpubkey | cut -d ' ' -f 2 > /root/KEY_PUBLIC
namorar# chmod 600 /root/KEY_PRIVATE /root/KEY_PUBLIC
namorar# scp -p /root/KEY_PUBLIC root@carinho:/root/KEY_PUBLIC_REMOTE
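The `grep | cut` pipeline used above simply isolates the second space-separated field of ifconfig's wgpubkey line. We can simulate it on a literal sample line, without needing a real wg interface (the key value here is a made-up placeholder, not a real wireguard key):

```shell
# ifconfig indents the wgpubkey line with a tab, so when splitting on spaces,
# field 1 is "<tab>wgpubkey" and field 2 is the key itself.
line="$(printf '\twgpubkey dGhpcyBpcyBqdXN0IGEgcGxhY2Vob2xkZXIga2V5IQ=')"
echo "$line" | grep wgpubkey | cut -d ' ' -f 2
```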
Style notes
To make this guide easier to follow, we've styled the console text output according to which of the three machines it relates to:
carinho# This is our local internet-connected machine, or LAN mailserver
namorar# This is our remote VM, or internet facing mailserver
mimando# This is our local workstation, which may or may not have internet connectivity
# This is generic output that is not specific to one machine
At this point, we've generated the local public and private key-pair on each machine, and stored those keys in /root/KEY_PUBLIC and /root/KEY_PRIVATE respectively. We've also copied the public key to the remote machine as /root/KEY_PUBLIC_REMOTE. Since all of these files have been created in /root, we don't need to concern ourselves with the fact that they were momentarily world-readable between creation and running the chmod command, since the /root directory has 0700 access permissions anyway.
We also have the wg0 device partially configured on each host.
The next step is to configure each wg device with the other peer's public key, as well as the address range that we will allow it to use to communicate with us. These can be set with another invocation of the ifconfig command:
carinho# ifconfig wg0 wgpeer `cat /root/KEY_PUBLIC_REMOTE` wgendpoint namorar.example.com 6900 wgaip 2001:db8:c1da::2
namorar# ifconfig wg0 wgpeer `cat /root/KEY_PUBLIC_REMOTE` wgaip 2001:db8:c1da::1
Note that we only set the wgendpoint option on the local peer. We can't set it on the remote VM because the connecting machine has a dynamic IP. This means that the remote VM cannot establish an inbound SMTP connection to the local peer until the local peer has first brought up the tunnel and informed the remote VM of its current dynamic address.
Finally, we just need to set the local peer IP address for each wg interface. This is again done using ifconfig in the same way as an address is set for a physical interface.
carinho# ifconfig wg0 inet6 vpn0.example.com
namorar# ifconfig wg0 inet6 vpn1.example.com
At this point, the wireguard interfaces on both peers are configured. Now we can move on to testing the connection, and making the configuration more permanent so that it will survive over re-boots.
Note: In testing, we have occasionally observed an ifconfig process become stuck in the wg_ifq state when a wireguard virtual interface is destroyed, which has required a forced re-boot to terminate it.
Firewalling the tunnelled traffic
(To avoid opening various security holes)
Be aware that by default, a wireguard tunnel will allow TCP, UDP, and ICMP traffic to flow from any port, to any port between the configured peers' endpoint IPs. This is quite different to IPSEC, where flows can be restricted to specific protocols and ports within the tunneling protocol itself.
This unrestricted forwarding may be a particular issue for programs which arbitrarily bind to a certain port on all interfaces, (such as the default configuration of SSH), as this would obviously include any newly created wg interfaces.
When using wireguard, firewall rules can be applied to the wg interface to filter the permitted traffic and achieve a similar effect to creating restricted flows in an IPSEC setup.
It is strongly suggested to configure firewalls to deny all traffic by default, then only allow the desired traffic through, to minimise the possibility of access to local services being unintentionally granted to a remote peer connecting via a wireguard tunnel.
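As a hedged sketch of this approach on the local peer, assuming that only SMTP from the VM's tunnel address should be allowed in over the tunnel:

```
# /etc/pf.conf fragment (sketch) for carinho: deny everything arriving on the
# tunnel by default, then allow only SMTP from the VM's tunnel endpoint
block return in on wg0
pass in on wg0 inet6 proto tcp from 2001:db8:c1da::2 to (wg0) port smtp
```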
Testing connectivity over the tunnel
We can bring the tunnel up and do an initial test of connectivity via it simply by pinging the remote VM from the local peer:
carinho# ping6 -c 1 vpn1.example.com
PING vpn1.example.com (2001:db8:c1da::2): 56 data bytes
64 bytes from 2001:db8:c1da::2: icmp_seq=0 hlim=64 time=0.730 ms
--- vpn1.example.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.730/0.730/0.730/0.000 ms
Here we can see that the ping request was successfully sent, and a ping reply was received.
Monitoring the wg0 interface, as well as the physical interface over which the tunnel is running, we can observe the actual traffic:
carinho# tcpdump -n -i wg0
07:41:33.300761 2001:db8:c1da::1 > 2001:db8:c1da::2: icmp6: echo request
07:41:33.303411 2001:db8:c1da::2 > 2001:db8:c1da::1: icmp6: echo reply
Traffic on the wireguard virtual interface
carinho# tcpdump -n -i if0
07:41:33.301011 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0xfa469361
07:41:33.302577 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] response from 0x6599d6b7 to 0xfa469361
07:41:33.302832 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] data length 112 to 0x6599d6b7 nonce 0
07:41:33.303263 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] keepalive to 0xfa469361 nonce 0
07:41:33.303402 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] data length 112 to 0xfa469361 nonce 1
Traffic on the physical interface
Of course, we can also ping the local peer from the remote VM:
namorar# ping6 -c 1 vpn0.example.com
PING vpn0.example.com (2001:db8:c1da::1): 56 data bytes
64 bytes from 2001:db8:c1da::1: icmp_seq=0 hlim=64 time=0.785 ms
--- vpn0.example.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 0.785/0.785/0.785/0.000 ms
However, this obviously only works once the local peer has sent at least one packet first, to inform the remote VM of its IP address.
If we tried to ping the local peer first, we'd see a "Destination address required" error:
namorar# ping6 -c 1 vpn0.example.com
PING vpn0.example.com (2001:db8:c1da::1): 56 data bytes
ping6: sendmsg: Destination address required
ping: wrote vpn0.example.com 64 chars, ret=-1
--- vpn0.example.com ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss
Sending data before the tunnel has been brought up
Generating new keypairs semi-automatically
If we wanted to change the keys on a regular or at least semi-frequent basis, we could use a simple shell script to write new values to the files /root/KEY_PRIVATE and /root/KEY_PUBLIC, and copy the public key to the remote peer ready for manual insertion in its /etc/hostname.wg0 file.
#!/bin/sh
# Sample script to create and exchange new keypairs for wireguard devices
ifconfig wg9 destroy 2>/dev/null   # clean up any leftover wg9 interface
openssl rand -base64 32 > /root/KEY_PRIVATE
ifconfig wg9 create wgkey `cat /root/KEY_PRIVATE`
ifconfig wg9 | grep wgpubkey | cut -d ' ' -f 2 > /root/KEY_PUBLIC
chmod 600 /root/KEY_PRIVATE /root/KEY_PUBLIC
ifconfig wg9 destroy
In this case, since wg devices don't have to be created in sequential numerical order, it's useful to make the script use a highly numbered interface to reduce the risk of a clash with an already configured one.
Although wireguard is technically a stateless protocol, in our particular use-case there is a certain sense of the tunnel having been brought ‘up’, or actively being on-line, and it helps to think in these terms.
There are several reasons for this. Firstly, due to the dynamic IP of the local peer, the initial connection will always have to be made from local peer to the remote server, as the remote VM has no way of knowing what the current dynamic IP is when it changes. Additionally, it is very likely that the local peer is accessing at least the IPv4 internet from behind a NAT firewall at the ISP.
This means that even if the same dynamic IP address remains in use for days or even weeks, it's very likely that with no traffic flowing across it, a stateful firewall at the ISP will expire any state that was created by an outbound packet from the local peer after a few minutes. This in turn would prevent the remote VM from sending wireguard packets, even though it knows the destination IP, as those packets will no longer successfully traverse the ISP's NAT.
The issue with NAT can obviously be mitigated to some extent by using IPv6, since most ISPs that provide IPv6 connectivity, (and shame on those who do not!), will provide a real /64 subnet which is not subject to NAT. However, the /64 address block will in many cases still be dynamically allocated, and there is also no guarantee that inbound connections will be completely unfiltered. As a result, when connecting via residential or low-end business ISPs, even using IPv6 is not guaranteed to resolve this issue.
Additionally, when the dynamic IP changes, we would ideally like to recognise this situation and try to bring the tunnel up again automatically as soon as possible. Otherwise, a change in IP address and subsequent loss of connectivity via the wireguard tunnel will result in inbound mail being queued on the remote VM until the local peer happens to send data out over the link, which will most likely be when it has outbound mail to send.
The solution to this problem is to ensure that at least some data is sent periodically over the wireguard tunnel, which should be enough to prompt any NAT devices at the ISP to keep the link open.
carinho# ifconfig wg0 wgpeer `cat /root/KEY_PUBLIC_REMOTE` wgpka 10
namorar# ifconfig wg0 wgpeer `cat /root/KEY_PUBLIC_REMOTE` wgpka 10
The ability to send keep-alive packets at regular intervals is built in to the wireguard driver, although it is disabled by default. We can enable it by setting the required interval in seconds using the wgpka option to ifconfig.
With this option set, we can observe keepalive packets being sent on the physical interface:
07:55:05.580987 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0xcc7658fd
07:55:05.582596 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] response from 0x1a8fb576 to 0xcc7658fd
07:55:05.582847 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] keepalive to 0x1a8fb576 nonce 0
07:55:05.583282 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] keepalive to 0xcc7658fd nonce 0
07:55:15.574392 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] keepalive to 0xcc7658fd nonce 1
07:55:25.568244 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] keepalive to 0x1a8fb576 nonce 1
07:55:35.564557 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] keepalive to 0xcc7658fd nonce 2
07:55:45.558613 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] keepalive to 0x1a8fb576 nonce 2
07:55:55.554651 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] keepalive to 0xcc7658fd nonce 3
07:56:05.548979 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] keepalive to 0x1a8fb576 nonce 3
07:56:15.544802 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] keepalive to 0xcc7658fd nonce 4
07:56:25.539347 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] keepalive to 0x1a8fb576 nonce 4
Keepalive packets being sent across the physical interfaces
Note that since the keepalive packets are a kind of out-of-band communication, they don't generate any traffic on the wg interface itself.
But it doesn't work for me!
If there is no connectivity via the tunnel, in other words the ping test fails, we can usually quickly diagnose the problem using tcpdump.
A common cause of failure might be a firewall blocking access to UDP port 6900 on the remote VM.
In this case, all we would see on the wg interface on the local peer is a single attempt to send the echo request packet, but the physical interface would show repeated attempts to bring up the wireguard tunnel, which were responded to with ICMP port unreachable messages:
07:22:43.223961 2001:db8:c1da::1 > 2001:db8:c1da::2: icmp6: echo request
ICMP echo request packet being sent out over the wg interface
07:22:43.614211 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0x52502bb7
07:22:43.614712 2001:db8:1234:1234::c155 > 2001:db8:dd::dd: icmp6: 2001:db8:ffff:1000::2 udp port 6900 unreachable
07:22:48.814304 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0xce928477
07:22:48.814895 2001:db8:1234:1234::c155 > 2001:db8:dd::dd: icmp6: 2001:db8:ffff:1000::2 udp port 6900 unreachable
07:22:53.964395 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0x2279de96
07:22:53.964885 2001:db8:1234:1234::c155 > 2001:db8:dd::dd: icmp6: 2001:db8:ffff:1000::2 udp port 6900 unreachable
07:22:59.274488 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0x86231b0a
07:22:59.275081 2001:db8:1234:1234::c155 > 2001:db8:dd::dd: icmp6: 2001:db8:ffff:1000::2 udp port 6900 unreachable
07:23:04.604587 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0xa45db3b9
07:23:04.605079 2001:db8:1234:1234::c155 > 2001:db8:dd::dd: icmp6: 2001:db8:ffff:1000::2 udp port 6900 unreachable
07:23:09.914680 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0xf424b90c
07:23:09.915173 2001:db8:1234:1234::c155 > 2001:db8:dd::dd: icmp6: 2001:db8:ffff:1000::2 udp port 6900 unreachable
07:23:15.004771 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0xdb06d1db
07:23:15.005243 2001:db8:1234:1234::c155 > 2001:db8:dd::dd: icmp6: 2001:db8:ffff:1000::2 udp port 6900 unreachable
07:23:20.264864 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] initiation from 0x85729bfc
07:23:20.265360 2001:db8:1234:1234::c155 > 2001:db8:dd::dd: icmp6: 2001:db8:ffff:1000::2 udp port 6900 unreachable
Repeated attempts to bring up the wireguard link over the physical interfaces
The effect is similar, but subtly different, if the wireguard link is actually working fine, but ICMP traffic is being blocked on the wg interface itself.
This can easily happen if one, (or both), of the peers has an initial ‘block return’ statement in its firewall ruleset, in other words the firewall is being run in deny-by-default mode. In this case, even if port 6900 is opened to UDP traffic on the physical interface, without any explicit rules for traffic over the wg interface, the ping test will fail.
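In a deny-by-default ruleset, a rule along the following lines, (a sketch only, to be adapted to your own pf.conf), would allow the ping test itself to succeed:

```
# /etc/pf.conf fragment (sketch): allow ICMPv6 echo requests in on the tunnel;
# pf's default stateful filtering then permits the matching replies back out
pass in on wg0 inet6 proto icmp6 icmp6-type echoreq
```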
For example, if the local peer has blocked ICMP echo requests, and a ping is issued on the remote VM, we would get output similar to the following:
namorar# ping6 vpn0.example.com
PING vpn0.example.com (2001:db8:c1da::1): 56 data bytes
--- vpn0.example.com ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss
ICMP ECHO requests are blocked on the local peer
However in this case, examining the traffic on the physical interface suggests that the wireguard link is indeed working:
16:26:31.946532 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] keepalive to 0xa5a32c54 nonce 4
16:26:33.827258 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] data length 112 to 0x2aa6069e nonce 4
16:26:33.827805 2001:db8:dd::dd.6900 > 2001:db8:1234:1234::c155.6900: [wg] data length 160 to 0xa5a32c54 nonce 5
16:26:43.826473 2001:db8:1234:1234::c155.6900 > 2001:db8:dd::dd.6900: [wg] keepalive to 0x2aa6069e nonce 5
The wireguard tunnel is successfully up and running
Looking at the wg interface itself, though, shows that although the echo request packets are being sent, we are receiving back port unreachable messages instead of the expected echo replies:
16:26:33.827205 2001:db8:c1da::2 > 2001:db8:c1da::1: icmp6: echo request
16:26:33.827826 2001:db8:c1da::1 > 2001:db8:c1da::2: icmp6: 2001:db8:c1da::1 protocol 58 port 21191 unreachable
The firewall on the local peer is blocking ICMP echo requests in on its wg interface
Making the configuration persistent
It would obviously be useful to have the wireguard virtual interface configured automatically on each boot. This can be done by creating a configuration file /etc/hostname.wg0 containing the desired parameters.
The parser for the hostname files does not interpret backtick characters in the way that the shell does, so we can't directly reference the files we created in /root/KEY_PRIVATE and /root/KEY_PUBLIC from /etc/hostname.wg0.
Instead, we can write a script to create /etc/hostname.wg0 based on the content of these files, and simply re-run this generating script if we ever change the keys.
On the remote VM, we don't set a value for wgendpoint
echo "wgport 6900 wgkey `cat /root/KEY_PRIVATE`" > /etc/hostname.wg0
echo "wgpeer `cat /root/KEY_PUBLIC_REMOTE` wgaip 2001:db8:c1da::1 wgpka 15" >> /etc/hostname.wg0
echo "inet6 vpn1.example.com" >> /etc/hostname.wg0
On the local peer, we set wgendpoint to the address of the remote VM
echo "wgport 6900 wgkey `cat /root/KEY_PRIVATE`" > /etc/hostname.wg0
echo "wgpeer `cat /root/KEY_PUBLIC_REMOTE` wgendpoint namorar.example.com 6900 wgaip 2001:db8:c1da::2 wgpka 15" >> /etc/hostname.wg0
echo "inet6 vpn0.example.com" >> /etc/hostname.wg0
With this hostname.wg0 file in place, the wg virtual interface will be created and correctly configured automatically at boot.
Enabling keepalives with the wgpka option ensures that the link is brought up immediately, allowing the remote VM to make the inbound connection it needs to deliver any mail that has been queued on it since the link last went down.
Configuring smtpd
Now that our wireguard tunnel is, (hopefully), working correctly, we can move on to configuring smtpd.
Since our smtp connections will be encrypted by the wireguard protocol, we could simply run the plain text smtp protocol over the VPN and still prevent third parties from intercepting the data. An smtp over TLS configuration is more complicated, so if you want to start by configuring plain text smtp over the tunnel just to see it all working, fair enough.
However, there are several reasons to prefer not leaving this as a production setup, and to set up a proper smtp over TLS configuration instead. Firstly, if the opportunity to obtain a publicly accessible static IP for the local machine becomes available in the future and therefore using a wireguard tunnel becomes unnecessary, it's convenient to have a TLS-based setup already fully tested and working so that the switch can be made immediately. Secondly, future mis-configurations or even as yet undiscovered bugs in the wireguard kernel code could theoretically result in smtp traffic being sent over the internet in such a way that the plaintext is recoverable. If our SMTP session uses TLS within the tunnel, the chances of this happening are greatly reduced.
Configuring smtpd on the remote VM
The following configuration file will allow the relay of outbound email to other internet hosts via our remote VM, namorar.example.com, from any user as long as the sending machine, our local mail router carinho.lan, presents any valid TLS certificate and uses the credentials stored in /etc/mail/secrets.
No user-level authentication is done. Of course, the only way to connect to the IP address that listens for these submissions is via the wireguard tunnel, so further machine-level authentication will also have been done by this point, in the form of verifying the wireguard keys.
Example configuration file for smtpd on namorar.example.com
# Aliases to expand for local mail delivery only, E.G. automatic messages from daemons.
table aliases file:/etc/mail/aliases
# Permitted sending IPs. Should be smtp.example.com for IPv6, and our only IPv4 address.
table ips_out { 2001:db8:1234:1234::b17e, 2001:db8:1234:1234::b17e, 2001:db8:1234:1234::b17e, 2001:db8:1234:1234::b17e, 2001:db8:1234:1234::b17e, }
# Users that accept inbound email
table valid_users_example_com { test, postmaster }
# Credentials for authentication with carinho.lan over the VPN
table outbound_auth file:/etc/mail/secrets
# Define pki names, certificate and key paths
# mx1.example.com and mx2.example.com are presented to external clients connecting to us to send mail to us
pki mx1.example.com cert "/etc/ssl/private/mx1.example.com.crt"
pki mx1.example.com key "/etc/ssl/private/mx1.example.com.key"
pki mx2.example.com cert "/etc/ssl/private/mx2.example.com.crt"
pki mx2.example.com key "/etc/ssl/private/mx2.example.com.key"
# vpn1.example.com is presented to carinho.lan as our identity when we connect to deliver inbound mail from the internet
# This is what carinho.lan knows the remote end of the VPN as, according to its hosts file
pki vpn1.example.com cert "/etc/ssl/private/vpn1.example.com.ssc"
pki vpn1.example.com key "/etc/ssl/private/vpn1.example.com.key"
# smtp.example.com is presented to carinho.lan as our identity when carinho.lan connects to us to send mail to the internet
# carinho.lan is actually connecting on the IP that is listed in its hosts file as vpn1.example.com
pki smtp.example.com cert "/etc/ssl/private/smtp.example.com.crt"
pki smtp.example.com key "/etc/ssl/private/smtp.example.com.key"
# Set queue expiry time to 20 days
queue ttl 20d
# Listen on various interfaces. Listen on socket is also done by default.
# Outbound mail from carinho.lan via the VPN
# Present the hostname smtp.example.com, even though we are really listening on the VPN IP address for vpn1.example.com
# This avoids the VPN details appearing in Received: headers on outbound email. The next relaying step onwards will use
# smtp.example.com as the source IP, and this makes the headers look consistent. However, it does mean that we need to
# present the certificate for smtp.example.com rather than the certificate for vpn1.example.com.
listen on 2001:db8:c1da::2 pki smtp.example.com tls-require verify hostname smtp.example.com mask-src auth <outbound_auth>
# Inbound external mail from the internet via IPv4. DSN disabled to avoid showing details of the internal network.
listen on pki mx2.example.com hostname mx2.example.com no-dsn
# Inbound external mail from the internet via IPv6. DSN disabled to avoid showing details of the internal network.
listen on 2001:db8:1234:1234::6969 pki mx1.example.com hostname mx1.example.com no-dsn
# Delivery actions
# Local mail, E.G. from daemons
action "local_mail" maildir "%{user.directory}/mailspools/in/" alias <aliases>
# Inbound mail from the internet
action "relay_in" relay pki vpn1.example.com host smtp://vpn0.example.com
# Outbound mail from carinho.lan via the VPN
action "relay_out" relay helo smtp.example.com src <ips_out>
# Match rules
# Allow pure local mail for delivery via maildir
match from local for local action "local_mail"
# Inbound mail for example.com to relay via carinho.lan
match from any for domain "example.com" rcpt-to <valid_users_example_com> action "relay_in"
# Outbound mail from carinho.lan for any destination is accepted for relay by us as smtp.example.com
match from src 2001:db8:c1da::1 for any action "relay_out"
We specify our IPv6 address multiple times in ips_out, because otherwise smtpd will frequently fall back to using IPv4 when talking to a dual-stacked host.
Since there is no obvious way to force a preference for IPv6 in these cases, specifying the IPv6 address multiple times manually at least helps to increase the frequency with which IPv6 is used.
We set the queue ttl to a much larger value than the default of four days, to avoid mail being permanently deleted if an outage prevents it being relayed from the remote VM to the local mail server. Unlike an IMAP or POP server, which will usually keep stored mail indefinitely because it is considered to have reached its final destination and therefore been delivered, an SMTP server considers its queued mail to be ‘in transit’. If none of the specified relays can be contacted before the queue ttl expires, the mail will be bounced back to the sender.
With the configuration above, the bounce mails will intentionally fail, with the result that the incoming message is simply deleted without informing the sender. The failed bounce mails will appear in /var/log/maillog with a message such as the following:
Nov 7 16:38:02 namorar smtpd[8493]: warn: PermFail injecting failure report on message 52866920 to <sender@other.host.example.com> for 1 envelope: 550 Invalid recipient: <sender@other.host.example.com>
Warning messages generated as per the configuration of bounce warn-interval will also intentionally fail with a similar error.
If the link between the local smtp server on the lan and the remote VM is not up 24 hours per day, this behaviour of not sending bounce messages is probably desirable. The reason they fail is simply that we don't have a match rule allowing locally generated mail to be relayed to other internet hosts.
If we really did want such bounce messages to be sent to external hosts, this could be done with a single line in smtpd.conf:
match from local for any action "relay_out"
In this case, we would probably also want to check that bounce warn-interval is set to a reasonable value based on the expected level of connectivity between the two peers, to avoid sending excessive warning messages just because the VPN link is down at certain times of the day.
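Whenever smtpd.conf has been edited, it's worth letting smtpd parse the file before restarting the daemon. These commands run on the VM itself, so they are shown here as a sketch:

```shell
# smtpd -n parses /etc/mail/smtpd.conf and reports any syntax errors
# without actually starting the daemon:
#   smtpd -n
# If the configuration is valid, restart the running daemon:
#   rcctl restart smtpd
```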
Creating the credentials file
The credentials file on the remote VM, namorar.example.com, simply consists of a username and an encrypted password:
namorar# cat /etc/mail/secrets
mailuser $2b$10$TMCTeP.xVdU8YLGJkyJrTOz6VU0.xWRXFvCp90vGfNGFmvBDLC9P6
Credentials file
The encrypted password string used above can be generated using smtpctl:
namorar# smtpctl encrypt
Generating the encrypted password string
The same file on the local mail relay, carinho.lan, contains the username and unencrypted password, prefixed by an arbitrary label, which in this case is the same as the username:
carinho# cat /etc/mail/secrets
mailuser mailuser:foobar
This time the password is in plain text
The credentials file is stored in a world-readable directory, so it is important to create it with appropriate permissions.
Setting the shell's umask to 027 before creating /etc/mail/secrets will ensure that the file is created with mode 0640.
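The effect of the umask can be seen with a throwaway file; the real /etc/mail/secrets is created in exactly the same way:

```shell
# Demonstration with a scratch file; /etc/mail/secrets gets the same treatment
umask 027
rm -f /tmp/secrets.demo
touch /tmp/secrets.demo
ls -l /tmp/secrets.demo
# The permissions column shows -rw-r-----, i.e. mode 0640:
# the owner can read and write, the group can read, others get nothing
```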
Creating the keys and certificates
The remote VM has four identities that we need to create keys and certificates for:
Remote VM identities
  • smtp.example.com
  • mx1.example.com
  • mx2.example.com
  • vpn1.example.com
For correct operation sending mail to and receiving mail from other internet hosts, the first three certificates should be real TLS certificates signed by a real certificate authority. If you're using a free certificate issuing service, that likely means that you will need to renew them every three months.
The certificate for vpn1.example.com will only be used between the local server and the remote VM, so we can simply create a self-signed certificate with a long validity period of ten years, and install it in the root certificate bundle on the local peer.
Once again, TLS key generation is covered in much more detail in our guide to encryption keys and TLS certificates, so refer there for more information.
namorar# cd /etc/ssl/private
namorar# openssl ecparam -out ec-secp384r1.pem -name secp384r1
namorar# openssl genpkey -paramfile ec-secp384r1.pem -out /etc/ssl/private/vpn1.example.com.key
Creating the encryption keys
The generated key will be written with permissions 0644, so we change this to 0640. Note that since it is created in a directory that is neither world nor group readable, we don't need to concern ourselves with the key file itself being momentarily world-readable between its creation and the change of permissions.
namorar# chmod 640 /etc/ssl/private/vpn1.example.com.key
Setting suitable permissions
Now we need to create a self-signed certificate using the key we've just generated.
Traditionally, certificates for a single hostname supplied that hostname in the Common Name, or CN field. Modern systems often now require the Subject Alternative Name or SAN field of the certificate to be populated with the hostname too. This is easily done by passing some extra options to openssl, which can be stored in a very small configuration file:
namorar# echo "subjectAltName=DNS:vpn1.example.com" > vpn1.example.com.ext
Creating a configuration file to ensure that the certificate's SAN field is populated
Next, we generate a certificate signing request:
namorar# openssl req -key vpn1.example.com.key -new -out vpn1.example.com.csr
Generating a signing request
This command will prompt for various pieces of information interactively, which we can either fill in or leave blank as desired. The only really important field is the hostname, which in this case should be vpn1.example.com.
If a password is set in the password field, it will need to be entered every time the server is restarted. To avoid this requirement, and allow the server to be restarted automatically, the password field can be left blank.
With the certificate signing request, we can now generate the actual self-signed certificate:
namorar# openssl x509 -sha256 -req -days 3650 -in vpn1.example.com.csr -signkey vpn1.example.com.key -extfile vpn1.example.com.ext -out vpn1.example.com.ssc
Generating the actual certificate
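The whole sequence can be rehearsed in a scratch directory before touching /etc/ssl/private. The file names below are throwaway stand-ins for the real ones, and -subj is used instead of answering the interactive prompts:

```shell
# Rehearsal of the key/CSR/certificate steps in a scratch directory
mkdir -p /tmp/certdemo && cd /tmp/certdemo
openssl ecparam -name secp384r1 -out ec-secp384r1.pem
openssl genpkey -paramfile ec-secp384r1.pem -out vpn1.key
echo "subjectAltName=DNS:vpn1.example.com" > vpn1.ext
openssl req -key vpn1.key -new -subj "/CN=vpn1.example.com" -out vpn1.csr
openssl x509 -sha256 -req -days 3650 -in vpn1.csr -signkey vpn1.key \
    -extfile vpn1.ext -out vpn1.ssc
# A self-signed certificate verifies against itself, exactly as a client
# with it installed in its CA bundle would see it:
openssl verify -CAfile vpn1.ssc vpn1.ssc
```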
To check the specifications of the certificate that we just created, we can display it in text format:
# openssl x509 -text -noout -in vpn1.example.com.ssc
    Version: 3 (0x2)
    Serial Number:
    Signature Algorithm: ecdsa-with-SHA256
    Issuer: CN=vpn1.example.com
    Not Before: Nov 23 16:25:52 2021 GMT
    Not After : Nov 21 16:25:52 2031 GMT
    Subject: CN=vpn1.example.com
    Subject Public Key Info:
        Public Key Algorithm: id-ecPublicKey
        Public-Key: (384 bit)
        ASN1 OID: secp384r1
    X509v3 extensions:
        X509v3 Subject Alternative Name:
            DNS:vpn1.example.com
    Signature Algorithm: ecdsa-with-SHA256
Displaying the new certificate in text format
If we intend to use this self-signed certificate as-is, which is perfectly reasonable for vpn1.example.com, we'll need to add the certificate to /etc/ssl/cert.pem on every machine that we want to accept it as valid:
# cat vpn1.example.com.ssc
# cat vpn1.example.com.ssc >> /etc/ssl/cert.pem
Adding our self-signed certificate to the local machine's certificate bundle
It would be good practice, although not actually necessary, to add the textual form of the self-signed certificate to /etc/ssl/cert.pem too, following the style of the existing entries in that file.
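Conveniently, openssl x509 -text without -noout prints the human-readable dump followed by the PEM block itself, so one command produces a bundle-style entry. Here is a self-contained demonstration using a throwaway key and certificate, not the real pair:

```shell
# Self-contained demonstration in /tmp; the real command would read
# vpn1.example.com.ssc and append its output to /etc/ssl/cert.pem
cd /tmp
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -nodes \
    -keyout demo.key -subj "/CN=demo.example.com" -days 10 -out demo.ssc
# Without -noout, the -text option prints the textual dump followed by the
# PEM-encoded certificate:
openssl x509 -text -in demo.ssc > bundle-entry.pem
grep -c "BEGIN CERTIFICATE" bundle-entry.pem
# prints 1 - the PEM block is present exactly once, after the textual dump
```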
This whole process of key and certificate generation can be repeated to generate key/certificate pairs for the other identities, smtp.example.com, mx1.example.com, and mx2.example.com, if we're happy to use self-signed certificates for these too.
However, for maximum compatibility and interoperability with other internet mailservers, these identities will ideally use globally recognised CA-signed certificates usable on the internet at large. Such certificates can be obtained from various CAs without charge for the certificate itself, using tools that are included in the OpenBSD base installation, specifically acme-client and httpd.
Details on how to do this are in the above linked guide.
Configuring smtpd on the local mailserver
The configuration of the local mailserver is slightly simpler:
# Example configuration file for smtpd on carinho.example.com
# Set queue expiry time to 20 days
queue ttl 20d
# Define pki names, certificate and key paths
pki carinho.lan key "/etc/ssl/private/carinho.lan.key"
pki carinho.lan cert "/etc/ssl/private/carinho.lan.ssc"
pki vpn0.example.com key "/etc/ssl/private/vpn0.example.com.key"
pki vpn0.example.com cert "/etc/ssl/private/vpn0.example.com.ssc"
# Aliases to expand for local mail delivery only, E.G. automatic messages from daemons.
table aliases file:/etc/mail/aliases
# Credentials for authentication with carinho.lan over the VPN
table secrets file:/etc/mail/secrets
# Listen on various interfaces. Listen on socket is also done by default.
listen on 2001:db8:6969::1 tls-require verify pki carinho.lan mask-src
listen on 2001:db8:c1da::1 pki vpn0.example.com tls-require verify
# Local mail, E.G. from daemons
action "local_mail" maildir "%{user.directory}/mailspools/in/" alias <aliases>
# The local hosts file contains an entry for smtp.example.com directing it to vpn1.example.com
# Therefore, the following rule really connects to vpn1.example.com, the VPN address, and not to the address of smtp.example.com listed in the public DNS.
# This is done to avoid details of the VPN appearing in Received: headers.
action "outbound_example" relay tls pki vpn0.example.com host smtp+tls://mailuser@smtp.example.com auth <secrets>
action "forward_to_mimando" relay pki carinho.lan tls host smtp://mimando.lan
# Allow pure local mail for delivery via maildir
match from local for local action "local_mail"
match from local for domain "mimando.lan" action "forward_to_mimando"
match from src "[2001:db8:6969::2]" for local action "local_mail"
match from src "[2001:db8:6969::2]" for any mail-from regex ".*@example.com$" action "outbound_example"
match from any for domain "example.com" action "forward_to_mimando"
match from local for domain "example.com" action "forward_to_mimando"
Typical local mailserver configuration
The relevant keys and certificates can be created using the process described above for the remote VM. We only need to use self-signed certificates on the local server, as it is only speaking SMTP to our other machines and not to other hosts on the internet at large.
The hosts file on the local server should contain an entry for smtp.example.com with the IP address for vpn1.example.com:
carinho# grep smtp.example.com /etc/hosts
2001:db8:c1da::2 smtp.example.com
Adding the hostname for the remote end of the vpn to the local machine's hosts file
This is simply to avoid details of the VPN appearing in the Received: headers of outbound email.
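With the hosts entry in place, the TLS side of the submission path can be checked from carinho.lan with openssl's built-in client. Since this connects over the tunnel, it only makes sense on the machine itself, so it is shown here as a sketch:

```shell
# From carinho.lan: connect to what it believes is smtp.example.com
# (really the VPN address), negotiate STARTTLS, and check the certificate:
#   openssl s_client -connect smtp.example.com:25 -starttls smtp \
#       -verify_hostname smtp.example.com
# The output should show the smtp.example.com certificate and report
# successful verification if the CA-signed certificate checks out.
```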
If you only have one local machine...
If the local mailserver is in fact also your local workstation, where you want incoming mail to be finally delivered to, then you can simply change the match lines to:
match from local for local action "local_mail"
match from local for any mail-from regex ".*@example.com$" action "outbound_example"
match from any for domain "example.com" action "mail_from_example"
and add another delivery action:
action "mail_from_example" maildir "%{user.directory}/mailspools/in/" alias <aliases_example>
as well as create an appropriate aliases table, mapping usernames at example.com to corresponding local usernames that should receive the mail.
At this point, you should be finished, and both inbound and outbound email delivery using SMTP over the wireguard tunnel should be working.
Configuring smtpd on the local workstation
The configuration of smtpd on the local workstation is very straightforward:
table aliases file:/etc/mail/aliases
table aliases_example file:/etc/mail/aliases_example
pki mimando.lan key "/etc/ssl/private/mimando.lan.key"
pki mimando.lan cert "/etc/ssl/private/mimando.lan.ssc"
listen on 2001:db8:6969::2 tls-require verify pki mimando.lan
action "local_mail" maildir "%{user.directory}/maildir/in/" alias <aliases>
action "local_mail_example" maildir "%{user.directory}/maildir/in/" alias <aliases_example>
action "outbound" relay host smtp://carinho.lan pki mimando.lan
match for local action "local_mail"
match for any action "outbound"
match from src "[2001:db8:6969::1]" for local action "local_mail"
match from src "[2001:db8:6969::1]" for domain "example.com" action "local_mail_example"
Example configuration file for smtpd on mimando.example.com
The /etc/mail/aliases_example file just needs to contain mappings between users at the example.com domain, and local users on mimando.lan.
For example, if there is a single local user with username jay, who should receive mail for test@example.com, and postmaster@example.com, we would create a simple aliases_example file with just two lines:
mimando# cat /etc/mail/aliases_example
test jay
postmaster jay
Mail for test@example.com and postmaster@example.com will be delivered to the local user jay
At this point, we should be done!
Let's enjoy real SMTP mail delivery without the expense of an internet connection with a static IP address!
Tweaking smtpd for SMTP connections that are not available 24/7
By default, smtpd waits an increasing amount of time between successive delivery attempts. This is reasonable behaviour when delivering to most SMTP servers, as publicly accessible SMTP servers on the internet are generally expected to have good network connectivity and good uptime. If you intend to keep your wireguard tunnel open 24 hours a day, and your connectivity is reliable enough to do this, you might not need to change smtpd's scheduling behaviour at all.
However, if your wireguard tunnel is likely to be more intermittent, it's useful to increase the frequency of delivery attempts so that queued mail comes through the tunnel promptly.
Intermittent connectivity could be due to reasons as simple as only bringing the link up during business hours, or the general unreliability of home broadband connections.
To increase the frequency of delivery attempts, we can edit the file /usr/src/usr.sbin/smtpd/scheduler_ramqueue.c, then re-compile and restart smtpd:
# cd /usr/src/usr.sbin/smtpd
# vi scheduler_ramqueue.c
# make clean
# make obj
# make
# make install
# /etc/rc.d/smtpd restart
The changes you will probably want to make are to the define for BACKOFF_TRANSFER, which defaults to 400 seconds, and the function scheduler_backoff(), removing one of the multiplications by step from the returned value, so that re-delivery attempts are made at constant rather than increasing intervals.
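As a concrete illustration, a patch along the following lines shortens the initial retry interval and flattens the backoff curve. The exact default value and function body shown here are assumptions based on the source at the time of writing, so treat this as a sketch to be checked against your own source tree rather than a ready-made patch:

```diff
--- scheduler_ramqueue.c.orig
+++ scheduler_ramqueue.c
-#define BACKOFF_TRANSFER	400
+#define BACKOFF_TRANSFER	60
 
 static time_t
 scheduler_backoff(time_t t0, time_t base, uint32_t step)
 {
-	return (t0 + base * step * step);
+	/* linear rather than quadratic backoff between delivery attempts */
+	return (t0 + base * step);
 }
```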
Final notes and observations
We tested the previous incarnation of this setup, (using an IPSEC tunnel), between a local server and a VM at OpenBSD Amsterdam to allow mail delivery over inbound SMTP connections for over a year, and at least for us it has proven reliable, handling hundreds of emails per day. Initial tests using wireguard suggest that it should be at least as reliable as the IPSEC tunnel method, but as of the time of writing this updated article, our tests are still ongoing.
One of the advantages of this particular setup, is that the workstation, mimando.lan in our example, does not need direct internet access in order to send and receive internet email, as it only submits to and receives from carinho.lan.