I’ve been using OpenVZ containers in Proxmox for a while, and after upgrading to Proxmox 4, OpenVZ has been removed in favor of LXC containers. Although LXC containers have a lot of great features, the way they access the network is not ideal if untrusted users run them, because the network device in the container is attached directly to the bridge.
I really liked the venet devices in OpenVZ because with these devices the container only has access to layer 3 of the network. In this post I try to get venet-like access with LXC containers, using ebtables rules to limit that access.
In my case I created a new bridge (vmbr02) in a separate VLAN (550) for the containers. The idea is that the Proxmox node is going to be the gateway at layer 2 and layer 3 for the containers, and the MAC of each container is going to be masqueraded with the MAC of the bridge on the Proxmox node. Because I already have a gateway configured on the Proxmox node, I’m going to create a new routing table with the configuration of this network.
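For reference, a static definition of this bridge in /etc/network/interfaces would look roughly like this (just a sketch: the bond0 interface and the exact bridge options are assumptions from my setup, adjust them to yours):

auto vmbr02v550
iface vmbr02v550 inet manual
    bridge_ports bond0.550
    bridge_stp off
    bridge_fd 0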
First we add the IP (10.10.10.10) to the new bridge on the Proxmox node. This IP will be the gateway for the containers:
sudo ip addr add 10.10.10.10/32 dev vmbr02v550
sudo ip route add 10.10.10.0/24 dev vmbr02v550 src 10.10.10.10
Create the new routing table for the new network in VLAN 550:
sudo echo "550 vlan550" >> /etc/iproute2/rt_tables
Populate the new table with the real gateway of this network (10.10.10.1):
sudo ip route add throw 10.10.10.0/24 table 550
sudo ip route add default via 10.10.10.1 table 550
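We can double-check that the table was populated by listing it (the expected output is shown as comments):

sudo ip route show table 550
# throw 10.10.10.0/24
# default via 10.10.10.1 dev vmbr02v550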
Add a rule so that the traffic coming from the bridge looks up the vlan550 table:
sudo ip rule add from 10.10.10.0/24 iif vmbr02v550 lookup vlan550
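The rule should now show up in the routing policy database:

sudo ip rule show
# 0:      from all lookup local
# 32765:  from 10.10.10.0/24 iif vmbr02v550 lookup vlan550
# 32766:  from all lookup main
# 32767:  from all lookup default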
Enable IP forwarding because the Proxmox node will be the router for the LXC containers. Because we only want IP forwarding on the bridge interface, we can enable it for this interface only:
sudo sysctl -w net.ipv4.conf.vmbr02v550.forwarding=1
or
echo 1 > /proc/sys/net/ipv4/conf/vmbr02v550/forwarding
and to make it permanent across reboots:
echo "net.ipv4.conf.vmbr02v550.forwarding= 1" >> /etc/sysctl.conf
I added this iptables rule because I don’t want the server to be reachable at this IP address; this IP is only going to be used to provide networking to the containers:
iptables -A INPUT -d 10.10.10.10 -j REJECT
At this point the LXC container should already be created. In the following example we see an LXC container with ID 150:
$ sudo pct list
VMID       Status     Lock         Name
150        stopped                 pruebas-lxc
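If the container doesn’t exist yet, it can be created with pct. A rough example (the template and storage names are assumptions; I use net1 here, which is why the virtual port that appears below is called veth150i1):

sudo pct create 150 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz \
    --hostname pruebas-lxc \
    --net1 name=eth1,bridge=vmbr02,tag=550,ip=10.10.10.11/24,gw=10.10.10.10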
The bridge vmbr02v550 has one port connected to the physical network (bond0.550) and one virtual port connected to the container (veth150i1). Note that the container must be running for the virtual port to show up:
$ sudo brctl show vmbr02v550
bridge name     bridge id               STP enabled     interfaces
vmbr02v550      8000.002590911cee       no              bond0.550
                                                        veth150i1
Now it’s time to add some ebtables rules to limit the traffic forwarded on the bridge. These are general rules that apply to all the containers:
# ebtables rules in the filter table, FORWARD chain
ebtables -A FORWARD -o veth+ --pkttype-type multicast -j DROP                        #1
ebtables -A FORWARD -i veth+ --pkttype-type multicast -j DROP                        #2
ebtables -A FORWARD -o veth+ --pkttype-type broadcast -j DROP                        #3
ebtables -A FORWARD -i veth+ --pkttype-type broadcast -j DROP                        #4
ebtables -A FORWARD -p IPv4 -i veth+ -o bond0.550 --ip-dst 10.10.10.0/24 -j ACCEPT   #5
ebtables -A FORWARD -p IPv4 -i bond0.550 -o veth+ --ip-dst 10.10.10.0/24 -j ACCEPT   #6
ebtables -A FORWARD -o veth+ -j DROP                                                 #7
ebtables -A FORWARD -i veth+ -j DROP                                                 #8
In these rules we use the expression veth+ to refer to all the LXC virtual ports that can be connected to the bridge. A short explanation of each rule follows:
- #1 and #2 => Stop all multicast packets delivered from and to the LXC virtual ports
- #3 and #4 => Stop all broadcast packets delivered from and to the LXC virtual ports
- #5 and #6 => Only allow IPv4 traffic to be forwarded between the containers and the physical network when the destination IP is in the LXC containers network (10.10.10.0/24).
- #7 and #8 => Drop all the packets that didn’t match the rules above. We don’t want any other layer 2 traffic forwarded to or from the containers.
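We can check that the rules are installed and matching traffic by listing the FORWARD chain with rule numbers and packet counters:

sudo ebtables -L FORWARD --Ln --Lc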
We also need some ebtables rules in the nat table, but in this case these rules are set per container. For this example we assume the following:
- Proxmox node MAC address: 0:25:90:91:1c:ee
- LXC real MAC address: 66:36:61:62:32:31
- LXC IP address: 10.10.10.11
# Ebtables rule to translate the packets that must be delivered to the LXC container IP to its real MAC
ebtables -t nat -A PREROUTING -p IPv4 -d 0:25:90:91:1c:ee -i bond0.550 --ip-dst 10.10.10.11 -j dnat --to-dst 66:36:61:62:32:31 --dnat-target ACCEPT

# Ebtables rule to reply to the ARP requests looking for the MAC of the LXC container with the MAC of the host
ebtables -t nat -A PREROUTING -i bond0.550 -p ARP --arp-op Request --arp-ip-dst 10.10.10.11 -j arpreply --arpreply-mac 0:25:90:91:1c:ee

# I prefer the ebtables rule, but the above can also be achieved with the following arp command:
# arp -i vmbr02v550 -Ds 10.10.10.11 vmbr02v550 pub

# Ebtables rule to mask the LXC container MAC with the MAC of the host
ebtables -t nat -A POSTROUTING -s 66:36:61:62:32:31 -o bond0.550 -j snat --to-src 0:25:90:91:1c:ee --snat-arp --snat-target ACCEPT
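Since these nat rules have to be repeated for every container, a small helper script can generate them. This is only a sketch under the assumptions of this example (the host MAC and the physical port are hardcoded, and the script name is made up):

#!/bin/bash
# container-ebtables.sh (hypothetical): apply the per-container nat rules
# Usage: ./container-ebtables.sh <container-ip> <container-mac>
HOST_MAC="0:25:90:91:1c:ee"   # MAC of the bridge on the Proxmox node
PHYS_PORT="bond0.550"         # port connected to the physical network
CT_IP="$1"
CT_MAC="$2"

# Deliver packets addressed to the container IP to its real MAC
ebtables -t nat -A PREROUTING -p IPv4 -d "$HOST_MAC" -i "$PHYS_PORT" --ip-dst "$CT_IP" -j dnat --to-dst "$CT_MAC" --dnat-target ACCEPT
# Answer ARP requests for the container IP with the MAC of the host
ebtables -t nat -A PREROUTING -i "$PHYS_PORT" -p ARP --arp-op Request --arp-ip-dst "$CT_IP" -j arpreply --arpreply-mac "$HOST_MAC"
# Masquerade the container MAC with the MAC of the host on the way out
ebtables -t nat -A POSTROUTING -s "$CT_MAC" -o "$PHYS_PORT" -j snat --to-src "$HOST_MAC" --snat-arp --snat-target ACCEPT

For the container of this example it would be called as ./container-ebtables.sh 10.10.10.11 66:36:61:62:32:31.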
And we are done! With this configuration the LXC container, although connected directly to the Linux bridge, has only limited, layer 3 access to the network.
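For a quick test we can enter the container and check that layer 3 connectivity works (the external IP in the second ping is just an example):

sudo pct enter 150
ping -c 1 10.10.10.10   # the gateway on the Proxmox node
ping -c 1 8.8.8.8       # traffic routed through the Proxmox node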