Sunday, March 10, 2013

LXC and cgroup.memory on Debian

Two days ago, on Lurch, I was trying to show and set a memory limit for an LXC container, using "lxc-cgroup -n <container> memory.limit_in_bytes"

Unfortunately, I got the message "lxc-cgroup: missing cgroup subsystem", which I first interpreted as "this cgroup could not be mounted in this session"

In short, asking LXC about the memory cgroup, everything looked fine
lurch:~# lxc-checkconfig
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
...


while asking the kernel directly told another story
lurch:~# cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 1 9 1
cpuacct 1 9 1
memory 0 9 0


Another point that confused me was the dmesg output, which showed the memory cgroup being initialized among the others
lurch:~# dmesg | grep cgroup
...
[ 0.008000] Initializing cgroup subsys cpuacct
[ 0.008000] Initializing cgroup subsys memory
[ 0.008000] Initializing cgroup subsys devices
[ 0.008000] Initializing cgroup subsys freezer
[ 0.008000] Initializing cgroup subsys net_cls
[ 0.008000] Initializing cgroup subsys blkio
[ 0.008000] Initializing cgroup subsys perf_event

So, after a little googling, I understood that the memory cgroup is simply not enabled by default on Debian. That is because enabling it costs around 15 MB of RAM, which is obviously a waste if you don't use that cgroup

To make that cgroup available, you need to instruct Grub via /etc/default/grub with the boot parameter cgroup_enable=memory

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory"
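After editing the file, remember to regenerate the Grub configuration and reboot so the new kernel parameter takes effect. A minimal sketch of the round trip (the commands assume root; the awk line just reads the "enabled" column of /proc/cgroups):

```shell
# Regenerate /boot/grub/grub.cfg so the new kernel parameter is picked up,
# then reboot with it:
#   update-grub
#   reboot

# After the reboot, the "enabled" column of the memory line in /proc/cgroups
# should read 1 instead of 0:
awk '$1 == "memory" { print "memory cgroup enabled:", $4 }' /proc/cgroups
```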
The amount of memory reserved for the cgroup is printed out at boot time
lurch:~# dmesg | grep cgroup
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
...
[ 0.000000] allocated 16777216 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
...
In the end I could set my cgroup memory limit
lurch:~# lxc-cgroup -n container memory.limit_in_bytes "1G"
lurch:~#
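Note that lxc-cgroup only changes the limit of the running container. To make it survive a restart, the same value can be set in the container's configuration file (the /var/lib/lxc path is the usual Debian default; adjust it to your setup):

```
# /var/lib/lxc/container/config
lxc.cgroup.memory.limit_in_bytes = 1G
```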

Saturday, March 9, 2013

Pxe with Dnsmasq

Just a couple of words..

For far too long I manually changed the "pxelinux" entry in the "dnsmasq.d/domain.conf" file to boot with this or that image, depending on which distro I needed to install

# PXE boot
dhcp-boot = net:domain, pangolin-amd64/pxelinux.0, lurch, 192.168.1.100
I have finally found a way to serve all the PXE images in one shot, and even with a comfortable menu list

This is accomplished by adding something like:
# Multi PXE boot
pxe-prompt="Welcome to Lurch's TFTP"
pxe-service=x86PC, "Pangolin-amd64 on Lurch TFTP", "pangolin-amd64/pxelinux", 192.168.1.100
pxe-service=x86PC, "Pangolin-i386 on Lurch TFTP", "pangolin-i386/pxelinux", 192.168.1.100
pxe-service=x86PC, "Debian-amd64 on Lurch TFTP", "debian-amd64/pxelinux", 192.168.1.100
pxe-service=x86PC, "Fedora17-amd64 on Lurch TFTP", "fedora-x86_64/pxelinux", 192.168.1.100
Needless to say, you could even point each pxe-service entry at a different TFTP server
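For instance, a second TFTP host could serve one of the images (the second IP below is made up for illustration):

```
# Multi PXE boot, spread over two TFTP servers
pxe-prompt="Welcome to Lurch's TFTP"
pxe-service=x86PC, "Pangolin-amd64 on Lurch TFTP", "pangolin-amd64/pxelinux", 192.168.1.100
pxe-service=x86PC, "Fedora17-amd64 on another host", "fedora-x86_64/pxelinux", 192.168.1.101
```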