FAQ: Add OpenVPN section (authored by Ondřej Zajíček)
And also reformat the document
### Which OS distributions are supported?
We test BIRD on CentOS 7, Debian 8, Debian 9, FreeBSD 11, OpenBSD 5.8 and NetBSD 7. Sometimes we also run FreeBSD 10, CentOS 6 and Debian 7, and it seems to work well there, too.
Limited support is possible for Android (7.1) in the Termux app; we don't test it much.
We never tested BIRD on Mac OS X. If it works, let us know; if not, send a patch.
We currently don't support Windows. There were [some efforts to manage it](http://trubka.network.cz/pipermail/bird-users/2018-May/012342.html) but no patch has been sent yet.
### How does BIRD handle IPv6?
In version 1, BIRD has an unusual way to handle IPv6. BIRD can be compiled to support either IPv4 or IPv6. Distributions have different packages for the two versions of BIRD. To route both protocols, you have to run two completely separate bird processes with two separate config files (usually _bird.conf_ and _bird6.conf_). Therefore, it is impossible to have one BGP session propagating both IPv4 and IPv6 prefixes.
In version 2, BIRD can handle both IPv4 and IPv6 (and also other address families / AFIs / SAFIs) in one process.
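For illustration, in BIRD 2 a single BGP protocol can carry both address families over one session by defining two channels. A minimal sketch; the protocol name, AS numbers and neighbor address below are assumptions:

```
# bird.conf sketch for BIRD 2 (peer and AS numbers are hypothetical)
protocol bgp peer1 {
    local as 65000;
    neighbor 192.0.2.1 as 65001;
    ipv4 { import all; export all; };   # IPv4 unicast channel
    ipv6 { import all; export all; };   # IPv6 unicast channel
}
```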
### OSPF and multiple ptp addresses on one interface
The OSPFv2 specification assumes that an interface has one IP address. BIRD handles multiple IPv4 addresses by emulating a separate OSPF interface for each address range on one physical interface. Received packets are classified according to the source address. This works well if these address ranges are disjoint, which is the usual case.
Unfortunately, there is one common case where this is not true: multiple ptp addresses with the same local address on one (physical) interface. If such an interface is configured in the ptp mode (one OSPF ptp interface per ptp address pair) but is physically broadcast (like ethernet), each neighbor receives packets intended for other neighbors, because packets are sent using multicast. These mixed packets cause several kinds of strange behavior and error messages.
The solution is to configure such interfaces in the ptmp (point-to-multipoint) mode. This works because in the ptmp mode packets are sent using unicast (as the ptmp mode is usually used on physically non-broadcast networks). Note that there is still one OSPF ptmp interface per ptp address pair, not one common ptmp interface, as might be expected. In this case, it is not necessary to configure neighbors (as usually needed on nbma or ptmp interfaces), because the peer is known from the ptp address pair (the opposite address).
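The configuration above can be sketched as follows; the interface name is an assumption:

```
# sketch: one OSPF ptmp interface per ptp address pair (interface name assumed)
protocol ospf {
    area 0 {
        interface "ppp0" {
            type ptmp;   # packets are sent using unicast in this mode
        };
    };
}
```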
### Dynamic routing issues with OpenVPN
OpenVPN is a popular VPN tool that connects VPN clients to a VPN server by tunneling over tun/tap interfaces. One would expect that it should be possible to have additional networks behind VPN clients and propagate these networks using BIRD to the VPN server and other clients. Unfortunately, such a setup does not work, due to the way OpenVPN is implemented.
There are three ways an L3 VPN server could generally work:
* One PtP interface per VPN client
* One common PtMP interface for all VPN clients
* Some other magic
Unfortunately, OpenVPN in the usual configuration in tun mode uses the third option. There is only one tun interface on the VPN server, but it is not transparent as a PtMP interface would be (because the gateway IP address is not passed on the tun interface and is therefore not available to OpenVPN). Instead, it is a strange hybrid that for most purposes looks like one hop between the server and the clients (e.g. in traceroutes; it is also transparent to multicast, so OSPF can associate through it), but for some purposes it looks like a virtual router implemented inside OpenVPN, connecting the VPN server and all VPN clients.
The important consequence is that the dispatch decision inside OpenVPN in tun mode is made based on the packet destination IP, and OpenVPN apparently does not learn such information from server routing tables; instead, it depends on static configuration of IP ranges.
There are several ways to fix it:
* Use tap mode instead of tun mode; it behaves like one L2 network
* Use IPIP or GRE tunnels inside OpenVPN between clients and server and route
network traffic through these tunnels
* Use a different VPN tool than OpenVPN
Perhaps there are some options unknown to us that make OpenVPN work transparently in tun mode; if so, let us know.
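As an illustration of the first option, the OpenVPN server can be switched to tap mode. A minimal server-side sketch; the bridge gateway, netmask and address pool below are assumptions:

```
# server.conf sketch (tap mode; addresses are hypothetical)
dev tap0
server-bridge 10.8.0.1 255.255.255.0 10.8.0.50 10.8.0.100
```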
### BIRD does not import some routes from kernel
First, the _learn_ option of the kernel protocol must be active.
Second, 'device' routes related to interface addresses/prefixes added automatically by the OS/kernel are never imported. You could add them using the _direct_ protocol.
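The two points above can be sketched in the configuration like this; the interface pattern is an assumption:

```
# sketch: import alien kernel routes and generate device routes
protocol kernel {
    learn;            # import routes added outside BIRD
}

protocol direct {
    interface "*";    # device routes for all interfaces (pattern assumed)
}
```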
Third, for some obscure and historic reasons, BIRD 1.3.x (or older) does not import even some manually added device/host routes (i.e. ones without a gateway). There are two ways to fix this. Either add these routes to the kernel routing table with the _static_ protocol source (e.g. '@ip route add 10.20.30.0/24 dev eth0 proto static@'), or recompile BIRD with the attached patch (see the bottom of the page) to fix this issue. Anyway, first try some newer version than 1.3 if possible.
### IPv6 blackhole and prohibit routes do not work on Linux
This is a limitation of older versions of the Linux kernel, which do not support these route targets for IPv6 routes. A commonly used alternative is the unreachable route target. If you want to blackhole traffic without sending out ICMP errors on Linux, you can route it to a dummy device. Just insert the kernel module _dummy_; this adds a dummy0 interface to your system, so you can enable it and route traffic into it. In BIRD configuration this can be done using e.g. static route 2001:db8:1337::/48 via "dummy0".
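Putting the pieces above together, a minimal sketch (the static protocol wrapper is the usual place for such a route):

```
# prerequisite on Linux (run as root): modprobe dummy; ip link set dummy0 up
protocol static {
    route 2001:db8:1337::/48 via "dummy0";   # traffic is silently dropped
}
```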
### IBGP does not work after upgrade to BIRD 1.3 (or newer)
In older versions, BIRD had non-standard IBGP, which was simpler but very limited (addresses from NEXT_HOP BGP attributes were used as next hops). BIRD 1.3 implements standard IBGP behavior (addresses from NEXT_HOP attributes are looked up in an IGP routing table). For more details, see the documentation of the BGP option _gateway_. The change also affects multihop EBGP. The transition may disrupt working configs; the problem manifests as routes that are imported as unreachable. There are several ways to solve that:
* The old behavior can be activated by the _gateway direct_ option.
* If a flat topology is used (one L2 network with attached border BGP routers, IBGP sessions to direct neighbors, not multihop), it is possible to just add device routes using the direct protocol. In that case it is also usually necessary to add the _next hop self_ option to IBGP protocols on BGP border routers (not to the route server), because without it NEXT_HOP contains the IP address of the border router of the neighbor AS, which is usually one hop away.
* If OSPF is used for the internal network, it should work just fine, but note that the IGP table should contain not only the internal networks, but also the border networks connecting the local and the neighbor's border routers.
* A trivial but manual workaround is to add a /32 static route for each address in NEXT_HOP attributes of IBGP routes.
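The first option above can be sketched as follows; the protocol name, AS number and neighbor address are assumptions:

```
# sketch: restore the pre-1.3 next-hop behavior (names and numbers assumed)
protocol bgp ibgp_peer {
    local as 65000;
    neighbor 10.0.0.2 as 65000;   # same AS, so this is IBGP
    gateway direct;               # use NEXT_HOP addresses as next hops
}
```

In the flat-topology case, the border routers would instead carry the `next hop self;` option.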
## Error messages in logfiles
### <RMT> bgp1: Received: Malformed AS_PATH: ...
The BGP peer does not like the AS_PATH we send. The most probable cause is that the local BGP protocol does not prepend its AS number to the AS_PATH because it is configured as a route server (option _rs client_) and the BGP peer checks for that (option _bgp enforce-first-as_ on Cisco routers).
### <ERR> Pipe collision detected when sending W.X.Y.Z/24 to table ABC
This error may happen if one route (from one protocol) arrives at one routing table via two (or more) different pipes. The original idea is that the graph of routes and pipes is a tree, but it should work even if it is a tree for each route after applying filters. This can be solved by adding proper filters to the pipes.
### <WARN> Netlink: File exists
A problem in BIRD-Linux kernel routing table synchronization, when BIRD tries to overwrite an existing kernel route. There are two common causes:
First, there are some routes in the kernel routing table added by other tools (like the _ip_ or _route_ commands). BIRD does not purge such routes during its start. You could either avoid adding such routes, or configure the BIRD kernel protocol to learn them (option _learn_) and make them preferred (change the _preference_ of the kernel protocol to a higher value).
Second, BIRD tries to overwrite the kernel's device route with its own (common, non-device) route. This might happen unintentionally if OSPF is configured with strongly asymmetric costs for local interfaces, so that the path through another router is cheaper than the direct connection. In that case you could either change the OSPF costs, or import device routes from the _direct_ protocol with the highest priority.
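The first cause can be addressed with a configuration along these lines; the preference value is an assumption, chosen to sit above the defaults of the other protocols:

```
# sketch: learn alien kernel routes and prefer them (preference value assumed)
protocol kernel {
    learn;
    preference 250;
}
```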
### <WARN> Netlink: Cannot allocate memory
A problem in BIRD-Linux kernel routing table synchronization. For IPv6, one possible cause is a hard limit on the Linux kernel routing table size, which is 4096 by default and too small for a full BGP table. This can be changed using /proc/sys/net/ipv6/route/max_size.
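A persistent sysctl fragment can raise the limit; the file path and the chosen value are assumptions sized for a full IPv6 BGP table:

```
# /etc/sysctl.d/99-bird.conf (path assumed): raise the IPv6 route limit
net.ipv6.route.max_size = 1048576
```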