
Resize VPS disks in an automated way

These are the commands I run to automatically resize the VPS disks after I have resized the disk through the Proxmox interface.

#!/bin/bash

# This is only needed if you use GPT partition tables
# (the ---pretend-input-tty option really does take three dashes)
echo "Fix" | /usr/local/sbin/parted ---pretend-input-tty /dev/sdb print
# Resize partition 1 using all the space
/usr/local/sbin/parted /dev/sdb resizepart 1 100%
#  I use LVM so the physical volume must be resized first
pvresize /dev/sdb1
# Resize the logical volume using all space available
lvresize /dev/vgdata/home -l +100%FREE
# Finally notify file system to use all the space
resize2fs /dev/vgdata/home

Note: I use parted 3.2; if your distribution ships an older version, uninstall it and install parted from source.
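
To confirm that everything picked up the new space, something like this should do (assuming the same volume group, logical volume and mount point as in the script above):

$ sudo lvs vgdata
$ df -h /home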

venet-like access in Linux Containers (LXC)

I’ve been using OpenVZ containers in Proxmox for a while, and after upgrading to Proxmox 4, OpenVZ has been removed in favor of LXC containers. Although LXC containers have a lot of great features, the way they access the network is not ideal if untrusted users run them, because the network device in the container is attached directly to the bridge.

I really liked the venet devices in OpenVZ because with them the container only has access to layer 3 of the network. In this post I try to get venet-like access with LXC containers, using ebtables rules to limit that access.

In my case I created a new bridge (vmbr02) in a separate VLAN (550) for the containers. The idea is that the Proxmox node will be the layer 2 and layer 3 gateway for the container, and the MAC of the container will be masqueraded with the MAC of the bridge on the Proxmox node. Because I already have a gateway configured on the Proxmox node, I will create a new route table with the configuration of this network.

First we add the IP (10.10.10.10) to the new bridge on the Proxmox node. This IP will be the gateway for the container.

sudo ip addr add 10.10.10.10/32 dev vmbr02v550 
sudo ip route add 10.10.10.0/24 dev vmbr02v550 src 10.10.10.10

Create the new route table for the new network in VLAN 550:

sudo echo "550 vlan550" >> /etc/iproute2/rt_tables

Populate the new table with the real gateway of this network (10.10.10.1):

sudo ip route add throw 10.10.10.0/24 table 550
sudo ip route add default via 10.10.10.1 table 550

Add a rule so that traffic coming in from the bridge is looked up in table 550:

sudo ip rule add from 10.10.10.0/24 iif vmbr02v550 lookup vlan550

Enable IP forwarding, because the Proxmox node will be the router for the LXC containers. Since we only want IP forwarding on the bridge interface, we can enable it just for that interface:

sudo sysctl -w net.ipv4.conf.vmbr02v550.forwarding=1

or

echo 1 > /proc/sys/net/ipv4/conf/vmbr02v550/forwarding

and to make it persistent across reboots:

echo "net.ipv4.conf.vmbr02v550.forwarding= 1" >> /etc/sysctl.conf

I added this iptables rule because I don’t want the server to be reachable at this IP address; the IP is only there to provide networking to the containers.

iptables -A INPUT -d 10.10.10.10 -j REJECT

At this point we should already have the LXC container created. In the following example we have an LXC container with ID 150:

$ sudo pct list
VMID       Status     Lock         Name                
150        stopped                 pruebas-lxc         

The bridge vmbr02v550 has one port connected to the physical network (bond0.550) and one virtual port connected to the container (veth150i1). Note that the container must be running to see the virtual port up:

$ sudo brctl show vmbr02v550
bridge name	bridge id		STP enabled	interfaces
vmbr02v550		8000.002590911cee	no		bond0.550
							veth150i1

Now it’s time to add some ebtables rules to limit the traffic forwarded on the bridge. These are general rules that apply to all the containers:

# ebtables rules in the table filter and chain forward
ebtables -A FORWARD -o veth+ --pkttype-type multicast -j DROP #1
ebtables -A FORWARD -i veth+ --pkttype-type multicast -j DROP #2
ebtables -A FORWARD -o veth+ --pkttype-type broadcast -j DROP #3
ebtables -A FORWARD -i veth+ --pkttype-type broadcast -j DROP #4
ebtables -A FORWARD -p IPv4 -i veth+ -o bond0.550 --ip-dst 10.10.10.0/24 -j ACCEPT #5
ebtables -A FORWARD -p IPv4 -i bond0.550 -o veth+ --ip-dst 10.10.10.0/24 -j ACCEPT #6
ebtables -A FORWARD -o veth+ -j DROP #7
ebtables -A FORWARD -i veth+ -j DROP #8

In these rules we use the expression veth+ to refer to all the LXC virtual ports that can be connected to the bridge. A short explanation of each rule follows:

  • #1 and #2 => Drop all multicast packets delivered from and to the LXC virtual ports
  • #3 and #4 => Drop all broadcast packets delivered from and to the LXC virtual ports
  • #5 and #6 => Only allow IPv4 traffic to be forwarded between the virtual ports and the physical port when the destination IP is in the LXC containers network
  • #7 and #8 => Drop every packet that doesn’t match the rules above; we don’t want any other layer 2 traffic forwarded
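
To verify that the rules are in place and actually matching traffic, listing the FORWARD chain with packet counters is usually enough (just a check, not part of the setup):

$ sudo ebtables -L FORWARD --Lc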

We also need some ebtables rules in the nat table, but in this case these rules are set per container. For this example we assume the following:

  • Proxmox node MAC address: 0:25:90:91:1c:ee
  • LXC real MAC address: 66:36:61:62:32:31
  • LXC IP address: 10.10.10.11

#Ebtables rule to translate the packets that must be delivered to the LXC container IP to its real MAC
ebtables -t nat -A PREROUTING -p IPv4 -d 0:25:90:91:1c:ee -i bond0.550 --ip-dst 10.10.10.11 -j dnat --to-dst 66:36:61:62:32:31 --dnat-target ACCEPT
#Ebtables rule to reply to the ARP requests looking for the MAC of the LXC container with the MAC of the host:
ebtables -t nat -A PREROUTING -i bond0.550 -p ARP --arp-op Request --arp-ip-dst 10.10.10.11 -j arpreply --arpreply-mac 0:25:90:91:1c:ee
# I prefer the ebtables rule, but the above can also be achieved with the following arp command:
# arp -i vmbr02v550 -Ds 10.10.10.11 vmbr02v550 pub
#Ebtables rule to mask the LXC container MAC with the MAC of the host
ebtables -t nat -A POSTROUTING -s 66:36:61:62:32:31 -o bond0.550 -j snat --to-src 0:25:90:91:1c:ee --snat-arp --snat-target ACCEPT
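
Since these three nat rules have to be repeated for every container, they can be wrapped in a small helper script. This is only a sketch for the example above; the script name and the CT_IP/CT_MAC/HOST_MAC/PHYS_PORT variables are hypothetical, so adapt them to your setup:

#!/bin/bash
# Hypothetical helper: add the per-container ebtables nat rules
# Usage: lxc-ebtables-nat.sh <container_ip> <container_mac>
CT_IP="$1"                      # e.g. 10.10.10.11
CT_MAC="$2"                     # e.g. 66:36:61:62:32:31
HOST_MAC="0:25:90:91:1c:ee"     # MAC of the bridge in the Proxmox node
PHYS_PORT="bond0.550"           # physical port of the bridge

# Deliver packets addressed to the container IP to its real MAC
ebtables -t nat -A PREROUTING -p IPv4 -d "$HOST_MAC" -i "$PHYS_PORT" --ip-dst "$CT_IP" -j dnat --to-dst "$CT_MAC" --dnat-target ACCEPT
# Answer ARP requests for the container IP with the MAC of the host
ebtables -t nat -A PREROUTING -i "$PHYS_PORT" -p ARP --arp-op Request --arp-ip-dst "$CT_IP" -j arpreply --arpreply-mac "$HOST_MAC"
# Mask the container MAC with the MAC of the host on the way out
ebtables -t nat -A POSTROUTING -s "$CT_MAC" -o "$PHYS_PORT" -j snat --to-src "$HOST_MAC" --snat-arp --snat-target ACCEPT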

And we are done! With this configuration the LXC container, although connected directly to the Linux bridge, has only limited access to the network.

How to know if your disks support the discard option to free space

The discard option on block devices lets the space actually be released on the backing storage when we delete files in the file system.
To find out whether the discard option is available and we can free space on the device, you can run this command on Linux:

$ sudo lsblk -o MOUNTPOINT,DISC-MAX,FSTYPE

If the devices do not support it, the DISC-MAX column will show 0B:

MOUNTPOINT DISC-MAX FSTYPE
/boot      0B      ext4
/          0B      ext4
/usr       0B      ext4
/var/tmp   0B      ext4
/var       0B      ext4
/home      0B      ext4

With the discard option enabled, the DISC-MAX column will show the maximum number of discardable bytes:

MOUNTPOINT DISC-MAX FSTYPE
/boot      1G       ext4
/          1G       ext4
/usr       1G       ext4
/var/tmp   1G       ext4
/var       1G       ext4
/home      1G       ext4

Another option is to use the -D flag of the same command, which gives a bit more information:

$ sudo lsblk -D
NAME            DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda                    0        4K       1G         0
├─sda1                 0        4K       1G         0
└─sda2                 0        4K       1G         0
  ├─vgsys-root         0        4K       1G         0
  ├─vgsys-usr          0        4K       1G         0
  ├─vgsys-tmp          0        4K       1G         0
  └─vgsys-var          0        4K       1G         0
sdb                    0        4K       1G         0
└─sdb1                 0        4K       1G         0
  └─vgdata-home        0        4K       1G         0
sr0                    0        0B       0B         0

Once we know the device supports the DISCARD option, we can run the fstrim command to free the space on the backend.
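
For example, to trim a single mount point (-v prints how many bytes were discarded), or every supported mounted file system if your fstrim is recent enough to have -a:

$ sudo fstrim -v /home
$ sudo fstrim -a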

Proxmox two node cluster

Although a two node cluster is not recommended for HA due to the split brain problem (see Two-Node_High_Availability_Cluster in the Proxmox wiki for more info on this configuration), you can set it up in Proxmox for basic cluster usage. A two node cluster needs a special cman configuration in order to maintain quorum when one node is not available.

To configure a two node cluster in Proxmox, copy /etc/pve/cluster.conf to /etc/pve/cluster.conf.new and edit the new file, changing the following line:

<cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>

to:

<cman keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1" expected_votes="1"/>
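
For reference, a minimal sketch of the copy and edit step, run as root on one of the nodes (nano is just an example, any editor will do):

# cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
# nano /etc/pve/cluster.conf.new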

You can then activate it through the Proxmox web interface in Datacenter -> HA: review the changes and activate them on the two nodes. After that you can restart one node without losing cluster quorum.
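
To check that quorum is kept while one node is down, pvecm on the remaining node should still report the cluster as quorate:

# pvecm status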

Proxmox 2 console freezes and browser crashes with the 64-bit Java plugin

I had a very critical issue with the Proxmox 2 web interface on my system: when I opened a VNC console, my browser crashed and I could not access my virtual machines. This can be very painful when something is wrong in a virtual machine and you can’t get to it. I had this issue only on 64-bit Ubuntu systems, and the behaviour was the same in Firefox, Chrome and Opera. The same system/configuration in 32 bits works fine.

Digging into the problem, I found that the issue came from the Java plugin used for the VNC viewer, specifically from the IcedTea package from OpenJDK. To solve the issue I installed the Sun JRE.

To install Sun JRE follow these steps:

– Go to http://java.com/en/download/manual.jsp?locale=en and download the 64-bit Linux package.

– Untar the downloaded package and move it to /opt/java/java-sun/:

# tar xzvf jre-7u5-linux-x64.tar.gz
# mkdir -p /opt/java/java-sun
# mv jre1.7.0_05/ /opt/java/java-sun/

– Remove existing icedtea plugin:

# apt-get remove icedtea-6-plugin icedtea-7-plugin

– Set Sun JRE as default in the system:

# update-alternatives --install "/usr/bin/java" "java" "/opt/java/java-sun/jre1.7.0_05/bin/java" 1

– Configure your desired java version in your system:

# update-alternatives --config java

– Configure java browser plugin:

First remove any existing ~/.mozilla/plugins/libjava* files.
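
Something along these lines should do it (adjust the path if your profile keeps its plugins elsewhere):

# rm -f ~/.mozilla/plugins/libjava*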

# ln -s /opt/java/java-sun/jre1.7.0_05/lib/amd64/libnpjp2.so ~/.mozilla/plugins/

– Restart the browser and verify the Java plugin. Go to this test page and verify that the browser can run the applet correctly.

Now the VNC viewer works great in Firefox and Chrome.