
How to add a schema in OpenLDAP

In this case we are going to add the dnsdomain2 schema in order to serve DNS zones from PowerDNS.

To see which schemas we currently have, run the following command:

~# ls -1 /etc/ldap/slapd.d/cn\=config/cn\=schema
cn={0}core.ldif
cn={1}cosine.ldif
cn={2}nis.ldif
cn={3}inetorgperson.ldif
cn={4}postfix.ldif

First, create a conversion file adding a line with the schema we want to add to the ones we already have (check against the previous step that the schemas you include actually exist in your installation):

~# cat > ./schema_conv.conf << EOL
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/inetorgperson.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/misc.schema
include /etc/ldap/schema/postfix.schema
include /etc/ldap/schema/dnsdomain2.schema
EOL

Convert the schema file to LDIF format:

~# mkdir /tmp/ldif
~# slaptest -f ./schema_conv.conf -F /tmp/ldif/

Open the file /tmp/ldif/cn=config/cn=schema/cn={6}dnsdomain2.ldif and change the following lines:

dn: cn={6}dnsdomain2
objectClass: olcSchemaConfig
cn: {6}dnsdomain2

To this:

dn: cn=dnsdomain2,cn=schema,cn=config
objectClass: olcSchemaConfig
cn: dnsdomain2

In addition, the following lines at the very end of the file must be deleted:

structuralObjectClass: olcSchemaConfig
entryUUID: ccd26c58-54b6-1036-8f0f-cd16c06c9857
creatorsName: cn=config
createTimestamp: 20161212130111Z
entryCSN: 20161212130111.420925Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20161212130111Z
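
If you prefer not to edit the file by hand, the same changes can be scripted. This is only a sketch, assuming the generated file keeps the {6} index shown above:

~# F=/tmp/ldif/cn\=config/cn\=schema/cn\=\{6\}dnsdomain2.ldif
~# sed -i 's/^dn: cn={6}dnsdomain2$/dn: cn=dnsdomain2,cn=schema,cn=config/' "$F"
~# sed -i 's/^cn: {6}dnsdomain2$/cn: dnsdomain2/' "$F"
~# sed -i '/^structuralObjectClass: /d; /^entryUUID: /d; /^creatorsName: /d; /^createTimestamp: /d; /^entryCSN: /d; /^modifiersName: /d; /^modifyTimestamp: /d' "$F"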

Copy the file to the schema directory:

~# cd /etc/ldap/schema
~# cp /tmp/ldif/cn\=config/cn\=schema/cn\=\{6\}dnsdomain2.ldif  ./dnsdomain2.ldif

Insert the new schema into the LDAP tree:

~# ldapadd -Q -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/dnsdomain2.ldif
adding new entry "cn=dnsdomain2,cn=schema,cn=config"

And finally, verify that it is indeed included:

~# ls -1 /etc/ldap/slapd.d/cn\=config/cn\=schema
cn={0}core.ldif
cn={1}cosine.ldif
cn={2}nis.ldif
cn={3}inetorgperson.ldif
cn={4}postfix.ldif
cn={5}dnsdomain2.ldif
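
Alternatively, you can ask slapd itself over the ldapi socket. This quick check is not part of the original steps, but it should list an entry like cn={5}dnsdomain2,cn=schema,cn=config:

~# ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b cn=schema,cn=config dn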

Removing multipath device – map in use

I got into trouble when I tried to remove a multipath device from my servers. Some LVM volumes that I am no longer using sit on top of this device. I tried to remove it with multipath -f, but it was not possible; it said that the map was in use:

~# multipath -f /dev/mapper/2554b454e79496758
Dec 05 12:22:31 | 2554b454e79496758: map in use
Dec 05 12:22:31 | failed to remove multipath map 2554b454e79496758

You can see how many processes are using this map with the dmsetup tool (see the Open count field):

~# dmsetup info  /dev/mapper/2554b454e79496758
Name:              2554b454e79496758
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        4
Event number:      1086846
Major, minor:      251, 4
Number of targets: 1
UUID: mpath-2554b454e79496758

First remove any active LVM devices on top of this device. For example, if you have a volume group “vggroup” and a logical volume “vol1” on this device, remove them:

~# lvremove /dev/vggroup/vol1
~# vgremove vggroup
~# pvremove /dev/mapper/2554b454e79496758

and if the device files are still mapped under /dev, remove them:

~# dmsetup remove /dev/vggroup/*

At this point there shouldn’t be any processes accessing this device and we should be able to remove it with the command above, but in some cases there are still processes blocked waiting for the device. We can try to find out which processes they are with the lsof command, filtering by the device major and minor numbers:

~# lsof | grep "251,4"
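
Besides lsof, another way to see what still holds the map is to look at the device-mapper node in sysfs. A quick check, assuming the map is dm-4 (matching the minor number 4 shown above):

~# ls /sys/block/dm-4/holders   # devices built on top of this map (e.g. LVM volumes)
~# ls /sys/block/dm-4/slaves    # underlying path devices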

In my case there were some vgs processes blocked trying to access the device. We cannot kill these processes, because they are in uninterruptible sleep (D state), waiting on the kernel.

~# ps aux | grep sbin/vgs
root     1206972  0.0  0.0  32444  4288 ?        D    dic02   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free
root     1213321  0.0  0.0  32444  4308 ?        D    dic02   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free
root     1248170  0.0  0.0  32444  4196 ?        D    dic02   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free
root     2542017  0.0  0.0  32444  4252 ?        D    10:46   0:00 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free

We can try to suspend the multipath device to force a timeout on those processes:

~# dmsetup suspend /dev/mapper/2554b454e79496758
~# dmsetup info /dev/mapper/2554b454e79496758
Name:              2554b454e79496758
State:             SUSPENDED
Read Ahead:        256
Tables present:    LIVE
Open count:        4
Event number:      1086846
Major, minor:      251, 4
Number of targets: 1
UUID: mpath-2554b454e79496758

And try to clear the device table:

~# dmsetup clear  /dev/mapper/2554b454e79496758
~# dmsetup wipe_table  /dev/mapper/2554b454e79496758

We are lucky, and the device is finally no longer in use:

~# dmsetup info  /dev/mapper/2554b454e79496758
Name:              2554b454e79496758
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      1086846
Major, minor:      251, 7
Number of targets: 1
UUID: mpath-2554b454e79496758

Now, we can remove it without problems:

~# multipath -f /dev/mapper/2554b454e79496758

To prevent multipath from rediscovering the device we can blacklist it. First, remove the device from the list of already discovered devices:

~# sed -i '/2554b454e79496758/d' /etc/multipath/wwids

In the multipath configuration file, add an entry to the blacklist section with the WWID of the device; if the file does not exist, create it:
/etc/multipath.conf

blacklist {
   wwid 2554b454e79496758
}

And finally reload multipath:

~# systemctl reload multipath-tools
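
To confirm the map is really gone and multipath does not bring it back after the reload, we can check the active maps again; a quick check, not part of the original steps:

~# multipath -ll | grep 2554b454e79496758
~# multipathd show maps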

venet access like in Linux Containers (LXC)

I’ve been using OpenVZ containers in Proxmox for a while, and after upgrading to Proxmox 4, OpenVZ has been removed in favor of LXC containers. Although LXC containers have a lot of great features, the way they access the network is not great if untrusted users are using them, because the network device in the container is attached directly to the bridge.

I liked the venet devices in OpenVZ a lot, because with these devices the container only has access to layer 3 of the network. In this post I try to get venet-like access with LXC containers, using ebtables rules to limit that access.

In my case I created a new bridge (vmbr02) in a separate VLAN (550) for the containers. The idea is that the Proxmox node is going to be the layer 2 and layer 3 gateway for the container, and the MAC of the container is going to be masqueraded with the MAC of the bridge on the Proxmox node. Because I already have a gateway configured on the Proxmox node, I am going to create a new routing table with the configuration of this network.

First we add the IP (10.10.10.10) to the new bridge on the Proxmox node. This IP will be the gateway for the container.

sudo ip addr add 10.10.10.10/32 dev vmbr02v550 
sudo ip route add 10.10.10.0/24 dev vmbr02v550 src 10.10.10.10

Create the new routing table for the new network in VLAN 550:

echo "550 vlan550" | sudo tee -a /etc/iproute2/rt_tables

Populate the new table with the real gateway of this network (10.10.10.1):

sudo ip route add throw 10.10.10.0/24 table 550
sudo ip route add  default via 10.10.10.1 table 550

Add a rule so that traffic coming in from the bridge looks up table vlan550:

sudo ip rule add from 10.10.10.0/24 iif vmbr02v550 lookup vlan550
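
To double-check the policy routing setup, we can list the rule and the contents of the new table; a quick sanity check, not in the original post:

sudo ip rule show
sudo ip route show table vlan550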

Enable IP forwarding, because the Proxmox node will be the router for the LXC container. Since we only want IP forwarding on the bridge interface, we can enable it just for this interface:

sudo sysctl -w net.ipv4.conf.vmbr02v550.forwarding=1

or

echo 1 > /proc/sys/net/ipv4/conf/vmbr02v550/forwarding

and to be permanent between reboots:

echo "net.ipv4.conf.vmbr02v550.forwarding= 1" >> /etc/sysctl.conf

I added this iptables rule because I don’t want the server to be accessible through this IP address; this IP is only used to provide networking to the containers.

iptables -A INPUT -d 10.10.10.10 -j REJECT

At this point the LXC container must already be created. In the following example we have an LXC container with ID 150:

$ sudo pct list
VMID       Status     Lock         Name                
150        stopped                 pruebas-lxc         

The bridge vmbr02v550 has one port connected to the physical network (bond0.550) and one virtual port connected to the container (veth150i1). Note that the container must be running for the virtual port to show up:

$ sudo brctl show vmbr02v550
bridge name	bridge id		STP enabled	interfaces
vmbr02v550		8000.002590911cee	no		bond0.550
							veth150i1

Now it’s time to add some ebtables rules to limit the traffic forwarded on the bridge. These are general rules used for all the containers:

# ebtables rules in the table filter and chain forward
ebtables -A FORWARD -o veth+ --pkttype-type multicast -j DROP #1
ebtables -A FORWARD -i veth+ --pkttype-type multicast -j DROP #2
ebtables -A FORWARD -o veth+ --pkttype-type broadcast -j DROP #3
ebtables -A FORWARD -i veth+ --pkttype-type broadcast -j DROP #4
ebtables -A FORWARD -p IPv4 -i veth+ -o bond0.550 --ip-dst 10.10.10.0/24 -j ACCEPT #5
ebtables -A FORWARD -p IPv4 -i bond0.550 -o veth+ --ip-dst 10.10.10.0/24 -j ACCEPT #6
ebtables -A FORWARD -o veth+ -j DROP #7
ebtables -A FORWARD -i veth+ -j DROP #8

In these rules we use the expression veth+ to refer to all the LXC virtual ports that can be connected to the bridge. A short explanation of each rule follows:

  • #1 and #2 => Stop all multicast layer 3 packets delivered from and to the LXC virtual ports
  • #3 and #4 => Stop all broadcast layer 3 packets delivered from and to the LXC virtual ports
  • #5 and #6 => Only allow forwarding if the source IP or the destination IP is in the LXC containers network.
  • #7 and #8 => Drop all packets that don’t meet the above rules. We don’t want any layer 2 packets.
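
To confirm the filter rules are loaded and actually matching traffic, you can list them with packet counters; a quick check, not part of the original post:

ebtables -L FORWARD --Lc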

We need some ebtables rules in the nat table as well, but in this case these rules are set per container. For this example we assume the following:

  • Proxmox node MAC address: 0:25:90:91:1c:ee
  • LXC real MAC address: 66:36:61:62:32:31
  • LXC IP address: 10.10.10.11

#Ebtables rule to translate the packets that must be delivered to the LXC container IP to its real MAC
ebtables -t nat -A PREROUTING -p IPv4 -d 0:25:90:91:1c:ee -i bond0.550 --ip-dst 10.10.10.11 -j dnat --to-dst 66:36:61:62:32:31 --dnat-target ACCEPT
#Ebtables rule to reply to the ARP requests looking for the MAC of the LXC container with the MAC of the host:
ebtables -t nat -A PREROUTING -i bond0.550 -p ARP --arp-op Request --arp-ip-dst 10.10.10.11 -j arpreply --arpreply-mac 0:25:90:91:1c:ee
# I prefer the ebtables rule, but the above can also be addressed with the following arp command:
# arp -i vmbr02v550 -Ds 10.10.10.11 vmbr02v550 pub
#Ebtables rule to mask the LXC container MAC with the MAC of the host
ebtables -t nat -A POSTROUTING -s 66:36:61:62:32:31 -o bond0.550 -j snat --to-src 0:25:90:91:1c:ee --snat-arp --snat-target ACCEPT
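
If you manage several containers, these per-container rules can be wrapped in a small helper script. This is just a sketch; the variable names (HOST_MAC, CT_MAC, CT_IP) are mine, not from the original post:

#!/bin/sh
# Per-container ebtables NAT rules (sketch)
HOST_MAC="0:25:90:91:1c:ee"   # MAC of the bridge on the Proxmox node
CT_MAC="66:36:61:62:32:31"    # real MAC of the LXC container
CT_IP="10.10.10.11"           # IP of the LXC container

# Translate packets addressed to the container IP to its real MAC
ebtables -t nat -A PREROUTING -p IPv4 -d "$HOST_MAC" -i bond0.550 --ip-dst "$CT_IP" -j dnat --to-dst "$CT_MAC" --dnat-target ACCEPT
# Answer ARP requests for the container IP with the MAC of the host
ebtables -t nat -A PREROUTING -i bond0.550 -p ARP --arp-op Request --arp-ip-dst "$CT_IP" -j arpreply --arpreply-mac "$HOST_MAC"
# Mask the container MAC with the MAC of the host on the way out
ebtables -t nat -A POSTROUTING -s "$CT_MAC" -o bond0.550 -j snat --to-src "$HOST_MAC" --snat-arp --snat-target ACCEPT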

And we are done! With this configuration the LXC container, although connected directly to the Linux bridge, has only limited access to the network.

How to know whether your disks support the discard option to free up space

The discard option of block devices allows us to effectively release space on the disks when we delete files in the filesystem.
To find out whether the discard option is available and we can release space on the device, you can run this command on Linux:

$ sudo lsblk -o MOUNTPOINT,DISC-MAX,FSTYPE

If the devices do not support it, the DISC-MAX column will show 0B:

MOUNTPOINT DISC-MAX FSTYPE
/boot      0B      ext4
/          0B      ext4
/usr       0B      ext4
/var/tmp   0B      ext4
/var       0B      ext4
/home      0B      ext4

With the discard option enabled, the DISC-MAX column will show the maximum number of discardable bytes:

MOUNTPOINT DISC-MAX FSTYPE
/boot      1G       ext4
/          1G       ext4
/usr       1G       ext4
/var/tmp   1G       ext4
/var       1G       ext4
/home      1G       ext4

Another possibility is to use the -D option of the same command, which gives us a bit more information:

$ sudo lsblk -D
NAME            DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda                    0        4K       1G         0
├─sda1                 0        4K       1G         0
└─sda2                 0        4K       1G         0
  ├─vgsys-root         0        4K       1G         0
  ├─vgsys-usr          0        4K       1G         0
  ├─vgsys-tmp          0        4K       1G         0
  └─vgsys-var          0        4K       1G         0
sdb                    0        4K       1G         0
└─sdb1                 0        4K       1G         0
  └─vgdata-home        0        4K       1G         0
sr0                    0        0B       0B         0

Once we know the device supports the DISCARD option, we can run the fstrim command to release the space on the backend.
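
For example, to trim a single mount point or all mounted filesystems that support it (a short sketch, not part of the original post):

$ sudo fstrim -v /home
$ sudo fstrim -av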

Installing PHP 5.4 on Debian 8 Jessie

Debian 8 (Jessie) ships with PHP 5.6 by default, but in my case I needed PHP 5.4 because of the project requirements. To install PHP 5.4, simply add the dotdeb.org sources in their wheezy version:

Create the file /etc/apt/sources.list.d/dotdeb.list with the following content:

deb http://packages.dotdeb.org wheezy all
deb-src http://packages.dotdeb.org wheezy all

Run the following command to import the repository key:

wget -O - https://www.dotdeb.org/dotdeb.gpg | apt-key add -

And finally, install PHP 5.4 on the server:

sudo apt-get update
sudo apt-get install php5=5.4.45-1~dotdeb+7.1
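
To confirm which version ended up installed (a quick check, not part of the original post):

php -v
apt-cache policy php5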

And now we can enjoy version 5.4!