
Proxmox 2 console freezes and browser crashes with the 64-bit Java plugin


I had a very critical issue with the Proxmox 2 web interface on my system: when I opened a VNC console, my browser crashed and I could not access my virtual machines. This can be very painful when something is wrong in a virtual machine and you can't reach it. I had this issue only on a 64-bit Ubuntu system; the behaviour was the same in Firefox, Chrome and Opera. The same system/configuration in 32 bits works fine.

Digging into the problem, I found that the issue came from the Java plugin used for the VNC viewer, specifically the IcedTea plugin from OpenJDK. To solve it, I installed the Sun JRE.

To install Sun JRE follow these steps:

– Download the 64-bit Linux package from the Java download page.

– Untar the downloaded package and move it to /opt/java/java-sun/:

# tar xzvf jre-7u5-linux-x64.tar.gz
# mkdir -p /opt/java/java-sun
# mv jre1.7.0_05/ /opt/java/java-sun/

– Remove the existing IcedTea plugins:

# apt-get remove icedtea-6-plugin icedtea-7-plugin

– Set Sun JRE as default in the system:

# update-alternatives --install "/usr/bin/java" "java" "/opt/java/java-sun/jre1.7.0_05/bin/java" 1

– Select your desired Java version:

# update-alternatives --config java
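After switching, you can confirm which binary the system now resolves. This is a minimal check, assuming the Debian/Ubuntu alternatives layout used above:

```shell
# Resolve the real path behind /usr/bin/java (follows the alternatives symlinks).
JAVA_PATH=$(readlink -f /usr/bin/java 2>/dev/null || echo "none")
echo "default java: $JAVA_PATH"
```

If everything went well, the path should point under /opt/java/java-sun/.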

– Configure java browser plugin:

First remove any existing ~/.mozilla/plugins/libjava* files.

# ln -s /opt/java/java-sun/jre1.7.0_05/lib/amd64/libnpjp2.so ~/.mozilla/plugins/

– Restart the browser and verify the Java plugin. Go to the Java test page and verify that the browser can run the applet correctly.

Now VNC viewer works great on Firefox and Chrome.

How to reset cluster configuration in Proxmox 2

If you have already created the Proxmox cluster but want to change its configuration, for example the hostname of a node or the network the nodes use to communicate, you can remove the cluster and create it again:

First, make a backup of the cluster configuration:

cp -a /etc/pve /root/pve_backup

Stop cluster service:

/etc/init.d/pve-cluster stop

Unmount /etc/pve if it is mounted:

umount /etc/pve
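If you script this step, a slightly safer variant is to unmount only when /etc/pve is actually a mount point (mountpoint comes with util-linux):

```shell
# Only unmount /etc/pve if it is actually mounted.
if mountpoint -q /etc/pve 2>/dev/null; then
    umount /etc/pve
    PVE_WAS_MOUNTED=yes
else
    PVE_WAS_MOUNTED=no
    echo "/etc/pve is not mounted, nothing to do"
fi
```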

Stop the cman service (the cluster manager, which runs corosync):

/etc/init.d/cman stop

Remove cluster configuration:

# rm /etc/cluster/cluster.conf
# rm -rf /var/lib/pve-cluster/*

Start the cluster service again:

/etc/init.d/pve-cluster start

Now you can create a new cluster:

# pvecm create newcluster 

Restore the cluster and virtual machine configurations from the backup:

# cp /root/pve_backup/*.cfg /etc/pve/
# cp /root/pve_backup/qemu-server/*.conf /etc/pve/qemu-server/
# cp /root/pve_backup/openvz/* /etc/pve/openvz/
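The restore step is just copying plain files back into the pmxcfs mount. A throwaway dry run with temp directories (stand-ins for /root/pve_backup and /etc/pve, with a made-up VM config) illustrates the idea without touching the real cluster:

```shell
# Dry run of the restore copy, using temp dirs as stand-ins for the real paths.
BACKUP=$(mktemp -d)   # stands in for /root/pve_backup
PVE=$(mktemp -d)      # stands in for /etc/pve
mkdir -p "$BACKUP/qemu-server" "$PVE/qemu-server"
echo "name: testvm" > "$BACKUP/qemu-server/101.conf"
cp "$BACKUP/qemu-server/"*.conf "$PVE/qemu-server/"
ls "$PVE/qemu-server"
```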

UPDATE: This post is also valid for changing the hostname of a node in a cluster or for moving a node between two clusters. After you remove a node from the cluster, it still appears in the Proxmox node tree; to remove it from the tree, delete the node directory from another node in the cluster:

# rm -rf /etc/pve/nodes/HOSTNAME

KVM/OpenVZ dumps with a specific name using a hook

Vzdump is the tool used to back up virtual machines in Proxmox.
A few weeks ago I wrote an entry, Creating KVM backups with a specific name, describing how to create a dump with a specified name rather than the default naming for these backups. Now I'm coming back with a much more elegant solution: a hook for vzdump. In Proxmox 2.x you can back up machines from the web interface very easily, and with a hook everything is done transparently.

This tool has a “-script” option to call a script during its execution:

 -script    string
	     Use specified hook script.

You can configure a script to do whatever you want at different phases of the vzdump run. In this case I only want to rename the default file to a more representative name, so I use only the backup-end phase, at the end of the dump process. My script, located in /root/scripts/:

#!/usr/bin/perl -w

# example hook script for vzdump (--script option)

use strict;
use File::Copy qw(move);

my $basedir = "/mnt/pve/pve-backups/dump";
print "HOOK: " . join (' ', @ARGV) . "\n";

my $phase = shift;
if ($phase eq 'backup-end') {
    my $mode = shift; # stop/suspend/snapshot
    my $vmid = shift;
    my $vmtype = $ENV{VMTYPE};     # openvz/qemu
    my $dumpdir = $ENV{DUMPDIR};
    my $hostname = $ENV{HOSTNAME};
    # tarfile is only available in phase 'backup-end'
    my $tarfile = $ENV{TARFILE};
    print "HOOK-ENV: vmtype=$vmtype;dumpdir=$dumpdir;hostname=$hostname;tarfile=$tarfile\n";
    if (defined ($tarfile) and defined ($hostname)) {
        if ($tarfile =~ /($basedir\/vzdump-(qemu|openvz)-\d+-)(\d\d\d\d_.+)/) {
            my $tarfile2 = $1 . $hostname . "-" . $3;
            print "HOOK: Renaming file $tarfile to $tarfile2\n";
            move $tarfile, $tarfile2;
        }
    }
}

exit (0);


With this script you get file names like vzdump-qemu-106-HOSTNAME-2012_06_22-14_35_59.tar.gz, and these files are correctly listed by the Proxmox interface. You can check the sample script in /usr/share/doc/pve-manager/examples/ to see all available phases.
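The rename itself is just a pattern substitution; a standalone shell sketch of the same transformation (dummy file and hostname, no Proxmox needed) looks like this:

```shell
# Standalone demo of the hook's rename, on a dummy dump file in a temp directory.
BASEDIR=$(mktemp -d)
HOST=myhost
TARFILE="$BASEDIR/vzdump-qemu-106-2012_06_22-14_35_59.tar.gz"
touch "$TARFILE"
# Insert the hostname between the VMID and the timestamp, as the Perl regex does.
NEWFILE=$(printf '%s\n' "$TARFILE" | sed -E "s/(vzdump-(qemu|openvz)-[0-9]+-)/\\1$HOST-/")
mv "$TARFILE" "$NEWFILE"
ls "$BASEDIR"
```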

Configure the script in /etc/vzdump.conf

# vzdump default settings

#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
#maxfiles: N
script: /root/scripts/
#exclude-path: PATHLIST

Now, every vzdump execution will call our script to rename the dump file.
To finish, you only have to apply these changes to all nodes in the Proxmox cluster.

vzdump: create KVM backups with a specific name

With Proxmox VE, the virtual environment I am using, you can configure KVM backups, but the dump files get vzdump's default names (a vzdump-qemu-VMID-date pattern).

These names are not very representative, and you should rename them if you want to easily identify your KVM backups in your storage.

I am using this command for KVM backups to get the files with their server names:

# vzdump --compress --snapshot --storage pve-backups --maxfiles 2 --stdout 117 > /mnt/pve/pve-backups/myserver_YYYY_MM_DD.tgz

You can put this command in a cron job if you want.
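In a cron job, the YYYY_MM_DD part can be generated with date. A small sketch, using the same example server name and paths as above:

```shell
# Build the dump file name from today's date, filling the YYYY_MM_DD placeholder.
NAME="myserver_$(date +%Y_%m_%d).tgz"
echo "$NAME"
# Then, e.g. in the cron entry:
# vzdump --compress --snapshot --storage pve-backups --maxfiles 2 --stdout 117 > /mnt/pve/pve-backups/"$NAME"
```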

To restore the machine simply run:

# qmrestore /mnt/pve/pve-backups/myserver_YYYY_MM_DD.tgz 117

VLAN tagging on Linux for KVM

Today, I'm going to explain my KVM server configuration for giving guest machines network connectivity through tagged VLANs, so each gets an independent network. As the virtualization platform I am using Proxmox VE. Proxmox is a great platform for administering KVM and OpenVZ machines; it is currently based on Debian Lenny, but the 2.0 version, based on Debian Squeeze and with many great features, will be available very soon.

I have connected my KVM server's network interfaces to two different switches, with the switch ports configured in trunk mode accepting only traffic for my tagged VLANs. For VLAN configuration I use the vlan package in Debian; rather than specifying interfaces like eth0.X, I prefer to configure them with this tool.

To install vlan package simply run:

 # apt-get install vlan
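Tagged VLAN interfaces on Linux depend on the 8021q kernel module. A quick check for whether it is loaded (the module name is standard; load it with modprobe if missing):

```shell
# Check for the 8021q kernel module, which tagged VLAN interfaces need.
if lsmod 2>/dev/null | grep -q '^8021q'; then
    VLAN_MOD=loaded
else
    VLAN_MOD=missing   # load it with: modprobe 8021q
fi
echo "8021q module: $VLAN_MOD"
```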

On top of the two network interfaces I have configured a bond interface in active-backup mode. My /etc/network/interfaces file looks like this:

iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

auto vlan50
iface vlan50 inet manual
        vlan_raw_device bond0

auto vlan60
iface vlan60 inet manual
        vlan_raw_device bond0

auto vlan100
iface vlan100 inet manual
        vlan_raw_device bond0

auto vmbr0
iface vmbr0 inet static
        bridge_ports vlan100
        bridge_stp off
        bridge_fd 0

auto vmbr50
iface vmbr50 inet static
        bridge_ports vlan50
        bridge_stp off
        bridge_fd 0

auto vmbr60
iface vmbr60 inet static
        bridge_ports vlan60
        bridge_stp off
        bridge_fd 0

I have three bridges configured: vmbr0 (on VLAN 100), required to reach the Proxmox web interface, and vmbr50 and vmbr60, each attached to its VLAN to provide access to guests. vmbr0 is the only bridge with an IP address configured, because it is the only interface I use to access the KVM server itself.

Now it is easy to provide network connectivity to the KVM guest machines: simply link each guest's network interface to the bridge for the VLAN you want it to access.

For example, part of one of my KVM machine config files looks like this:

vlan60: virtio=DE:17:7C:C3:CE:B2
vlan50: virtio=B2:0A:19:3E:72:4D

These lines are added automatically when you use the Proxmox VE web interface.
