Monday, December 14, 2020

IPv6 tentative dadfailed

On a couple of occasions I've had LXC containers restart but end up in a bad state with their IPs.

Basically the IPv4 address comes up but suffers intermittent disconnects, while the IPv6 address fails to come up at all, stuck in "tentative dadfailed".

What's happened here is the veth from the previous container wasn't cleaned up properly.

Unfortunately arp -a and ip neigh show don't reveal the problem because, as far as they're concerned, the offending endpoint is behind br0.

The only way I know of to confirm this is to check the bridge against the LXC info:

$ brctl show br0
br0             8000.fe46606ac64f       no              veth7UFFVA
                                                        vethFF6D6Y
                                                        vethXAOAMY


$ lxc-ls --active
lxc-guest-1  lxc-guest-2

$ lxc-info -n lxc-guest-1 | grep veth
Link:           vethFF6D6Y

$ lxc-info -n lxc-guest-2 | grep veth
Link:           vethXAOAMY

We see here that veth7UFFVA is abandoned.
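Comparing these by hand gets tedious with more containers. The check can be sketched as a small shell function (the helper name is mine; the commented commands show where the live inputs would come from):

```shell
#!/bin/sh
# abandoned_veths: print each veth in the first (bridge) list that is
# missing from the second (container-claimed) list
abandoned_veths() {
    echo "$1" | while read veth; do
        echo "$2" | grep -qx "$veth" || echo "$veth"
    done
}

# On a live host the two lists would come from the system, roughly:
#   bridge_veths=$(ls /sys/class/net/br0/brif | grep '^veth')
#   claimed=$(lxc-ls --active | sort -u | xargs -n1 lxc-info -n \
#             | awk '/Link/ {print $2}')
abandoned_veths "veth7UFFVA
vethFF6D6Y
vethXAOAMY" "vethFF6D6Y
vethXAOAMY"    # prints: veth7UFFVA
```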

To fix this we need to shut down the LXC instance that's experiencing connectivity issues, then remove the abandoned interface from the bridge.

$ lxc-stop -n lxc-guest-1

$ brctl delif br0 veth7UFFVA

$ lxc-start -n lxc-guest-1

And confirm in the newly started container that we have a fully assigned IPv6 address.
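That confirmation can be scripted by looking for the tentative/dadfailed flags in the ip -6 addr output. A minimal sketch (the function name is mine):

```shell
#!/bin/sh
# ipv6_ok: succeed only if the supplied `ip -6 addr show` output
# contains no tentative or dadfailed addresses
ipv6_ok() {
    ! echo "$1" | grep -Eq 'tentative|dadfailed'
}

# Inside the container this might be driven by something like:
#   ipv6_ok "$(ip -6 addr show dev eth0)" && echo "IPv6 up"
ipv6_ok "inet6 2001:db8::1/64 scope global" && echo "IPv6 up"
```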

Friday, March 20, 2015

OCSP Stapling with HAProxy

OCSP itself was introduced in RFC 2560 back in 1999.  In 2006 RFC 4366 defined TLS extensions, among them the Certificate Status Request extension, which allows the server to send certificate status information as part of the TLS handshake; this mechanism is what we know as OCSP stapling.  In July 2013 Mozilla introduced OCSP stapling support in Firefox.

OCSP stapling provides the client with the certificate status immediately and specifically, reducing the latency for the page load by avoiding a separate request to an OCSP service hosted by the issuing CA.  Anyone who's turned on strict OCSP checking in their browser will have observed higher latency while the OCSP check blocks the connection to the site, and depending on the fallback preference the site may not load if the CA's OCSP service isn't responding.  Additionally, using OCSP via the CA's service may be undesirable as it leaks information to the CA about what site the user is visiting.

A lesser known feature of the recent HAProxy 1.5 release, which introduced SSL/TLS support, is OCSP stapling.  To make use of this feature we need to periodically retrieve the certificate status and provide it to HAProxy.

HAProxy offers two ways to achieve this, either via static files or by way of the unix socket commands.

For each certificate provided to HAProxy, it checks for the presence of another file at the same path suffixed with .ocsp.  If present, it serves the content of this file via the TLS extension when a new client connects.

For example, the following configuration provides a single PEM file that contains the signed certificate, key and intermediate certificates.

frontend https
    bind :443 ssl crt /etc/pki/pems/ no-sslv3

Therefore, to allow HAProxy to serve up the certificate status information we expect to see the following files (example.pem is an illustrative name):

/etc/pki/pems/example.pem
/etc/pki/pems/example.pem.ocsp
There are three conditions the .ocsp file must satisfy in order to be used by HAProxy:
  1. it has to indicate a good status
  2. it has to be a single response for the certificate of the PEM file
  3. it has to be valid at the moment of addition
It's important to note that this last point requires us to update our .ocsp file regularly because a signed OCSP response will often only be valid for anything from a few hours to a few days.

We can easily automate updating the .ocsp file with the openssl ocsp command.  Before doing this we first need to store the issuer certificate for the openssl ocsp command.  HAProxy is aware of this and will ignore any files with the suffix .issuer, so we'll use that as part of our naming, which means we'll have the following files before we begin (again with example names):

/etc/pki/pems/example.pem
/etc/pki/pems/example.pem.issuer
If the certificate is signed by an intermediate certificate we will have received this with the certificate and that is the only certificate that should be in the .issuer file.
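If you only have the combined PEM, one way to split out that intermediate is to print just the second certificate block of the file. The helper below is a sketch (the function name and file names are mine, and it assumes the intermediate is the second certificate in the chain):

```shell
#!/bin/sh
# second_cert: print only the second certificate block from a combined
# PEM on stdin, counting BEGIN CERTIFICATE markers as block boundaries
second_cert() {
    awk '/BEGIN CERTIFICATE/ {n++} n == 2'
}

# usage (paths are examples):
#   second_cert < example.pem > example.pem.issuer
```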

We can then use the following script to automate retrieval of the OCSP response.

#!/bin/sh -e

# Get an OCSP response from the certificate's OCSP issuer for use
# with HAProxy, then reload HAProxy if there have been updates.

# Path to certificates
PEMDIR="/etc/pki/pems"

# Path to log output to
LOGDIR="/var/log/ocsp"

# Create the log path if it doesn't already exist
[ -d ${LOGDIR} ] || mkdir ${LOGDIR}

UPDATED=0

cd ${PEMDIR}

for pem in *.pem; do
    echo "= $(date)" >> ${LOGDIR}/${pem}.log

    # Get the OCSP URL from the certificate
    ocsp_url=$(openssl x509 -noout -ocsp_uri -in ${pem})

    # Extract the hostname from the OCSP URL
    ocsp_host=$(echo ${ocsp_url} | cut -d/ -f3)

    # Only process the certificate if we have a .issuer file
    if [ -r ${pem}.issuer ]; then

        # Request the OCSP response from the issuer and store it
        openssl ocsp \
            -issuer ${pem}.issuer \
            -cert ${pem} \
            -url ${ocsp_url} \
            -header Host ${ocsp_host} \
            -respout ${pem}.ocsp >> ${LOGDIR}/${pem}.log 2>&1

        UPDATED=$(( $UPDATED + 1 ))
    fi
done

if [ $UPDATED -gt 0 ]; then
    echo "= $(date) - Updated $UPDATED OCSP responses" >> ${LOGDIR}/ocsp.log
    service haproxy reload > ${LOGDIR}/service-reload.log 2>&1
else
    echo "= $(date) - No updates" >> ${LOGDIR}/ocsp.log
fi

This script does all we need to make use of static file OCSP stapling.  You can then cron this script to pull in your updates as often as you want.  It's a good idea to also logrotate the output files.
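For example, a cron entry and logrotate stanza along these lines would do (the script path, schedule, and log directory are illustrative):

```
# /etc/cron.d/ocsp-update -- refresh staples every 6 hours (example schedule)
0 */6 * * * root /usr/local/bin/update-ocsp.sh

# /etc/logrotate.d/ocsp-update (example paths)
/var/log/ocsp/*.log {
    weekly
    rotate 4
    compress
    missingok
}
```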

You can test the OCSP response using the openssl s_client:

$ openssl s_client -connect example.com:443 -tlsextdebug -status

TLS server extension "renegotiate" (id=65281), len=1
0001 - <SPACES/NULS>
TLS server extension "server ticket" (id=35), len=0
TLS server extension "status request" (id=5), len=0
depth=1 /O=CAcert Inc./OU= Class 3 Root
verify error:num=20:unable to get local issuer certificate
verify return:0
OCSP response:
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: C = AU, ST = NSW, L = Sydney, O = CAcert Inc., OU = Server Administration, CN =
    Produced At: Mar 21 18:35:14 2015 GMT
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: A11F312582B6DA5AD0B98D3A135E35D1EB183661
      Issuer Key Hash: 64C782514C8813F078D98977B56DC589DFBCB17A
      Serial Number: 03FB93
    Cert Status: good
    This Update: Mar 21 18:02:21 2015 GMT
    Next Update: Mar 23 18:35:14 2015 GMT

The key parts of this output are the This Update and Next Update timestamps, which give us an indication of how often we'll need to cron our updates.  For example, if an update fails because the CA's OCSP responder is offline, we'll want at least one or two retries before our OCSP staple expires.

Making use of HAProxy's OCSP stapling support via the command socket improves on this static file approach by avoiding the need for reloading HAProxy.  This method will be covered in a subsequent post.

Thursday, April 19, 2012

Mosh, NAT & LXC containers

I've been using mosh for a few days now and find it very promising.  There are a few aspects of it that are not the most convenient, but these are balanced out by other features.  In particular the following points need patience or workarounds:
  1. Lack of SSH agent forwarding.  This is a pretty big deal and a source of much frustration.
  2. No IPv6 support yet.
  3. No configuration file similar to SSH's ssh_config or ~/.ssh/config
  4. No graceful handling of NAT'd servers.
The first point about SSH agent forwarding I'm hoping will just need some patience.  There appear to be enough people feeling the pain so I suspect this will be resolved soon.

The lack of IPv6 support is doubly frustrating given the missing SSH agent forwarding and mosh's inability to easily traverse NATs.  I expect this too should be an easy win and I hope it comes sooner rather than later.

Regarding point three, I find myself wanting a configuration file much like SSH for various reasons.  For example to create shortcut entries for hosts, or to handle some sort of ProxyCommand, etc.  However my motivation for this post relates more to the need to configure a port for a particular host, which leads me to the NAT related point.

I have LXC containers running on bare metal hardware.  Frequently these LXC's are behind a NAT like this:

                                     +--- (eth0) lxc-guest-1
WAN --- (eth0) lxc-master-1 (br0) ---+
                                     +--- (eth0) lxc-guest-2

In this case you wouldn't normally be able to establish a mosh connection to either of the guest servers, however with some ugly setup it can be done.  This requires iptables rules and some client-side magic.

With SSH there are a couple of ways to handle this scenario.  My preferred approach is to make use of the ProxyCommand through the NAT'ing master, however this doesn't work with mosh:
Host lxc-guest-1
    ProxyCommand ssh lxc-master-1 /bin/nc -q 8 -w 3 lxc-guest-1 22

For mosh you need to get far more involved, setting up port forwarding through the NAT and finally having custom command lines to connect.

First, to get this NAT working the standard easiest approach is (10.0.3.0/24 here stands in for your container subnet):
iptables -t nat -A POSTROUTING -o eth0 -s 10.0.3.0/24 -j MASQUERADE

Then you need to set up port forwarding for ssh (10.0.3.11 and 10.0.3.12 are example guest addresses):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22001 -j DNAT --to-destination 10.0.3.11:22
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22002 -j DNAT --to-destination 10.0.3.12:22

And finally port forwarding for mosh.  It seems that by default mosh starts listening at the low end of its port range, so each LXC container will first try port 60001, which would quickly collide between host and guests.  Forward a distinct range to each guest (10.0.3.11 and 10.0.3.12 are example guest addresses):
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 60010:60019 -j DNAT --to-destination 10.0.3.11
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 60020:60029 -j DNAT --to-destination 10.0.3.12

With all the relevant ports being forwarded you can now set up your SSH config in ~/.ssh/config
Host lxc-guest-1
    Hostname lxc-guest-1
    Port 22001

Host lxc-guest-2
    Hostname lxc-guest-2
    Port 22002

And finally you should be able to establish a mosh connection with a defined port:
mosh -p 60010 lxc-guest-1

With this config you can establish up to 5 total connections to each guest.  This is because mosh uses the requested port for one direction of the communication, and port+1 for the return communication.  Thus for each additional connection you'll need to increment your port number by 2.
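That arithmetic can be sketched as a small helper (the function name is mine, and the base ports match the forwarded ranges above):

```shell
#!/bin/sh
# mosh_port BASE N: the -p port to request for the Nth concurrent
# connection to a guest whose forwarded range starts at BASE
mosh_port() {
    echo $(( $1 + ($2 - 1) * 2 ))
}

mosh_port 60010 1   # first connection:  60010
mosh_port 60010 3   # third connection:  60014
```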

If you've made it this far, hopefully this is working for you too.

Since there could easily be many ports across many hosts, I've made a very simple script to handle the port config for the first connection.  Put it somewhere in your PATH; I've saved mine at /usr/local/bin/moshi

#!/bin/bash
source ~/.ssh/mosh_config
eval opts=\$${1//-/_}
mosh $opts $1

And as you can see the script expects a rudimentary mosh config file at ~/.ssh/mosh_config.  For the above server setup the config file contains the following:
lxc_guest_1="-p 60010"
lxc_guest_2="-p 60020"

Now to connect to either guest I use:
moshi lxc-guest-1

As you can see this is horribly messy, but for now, working with several dozen servers on the other side of the world, it's making my life slightly easier.  If I find it useful enough I might even create a chef recipe for it, but I sincerely hope that won't be necessary and that both SSH agent forwarding and IPv6 support will be implemented soon.

Wednesday, April 11, 2012

Mosh - the great new SSH replacement

I'm regularly connecting to servers on the other side of the planet, frequently with latencies of 280-300ms.  While you do get used to the slow response of such connections, any improvement is welcome, to the extent that I was pleased to hear that a 60ms improvement would soon be on the cards.

I heard about the mosh project today and made time to get it working this evening.  The initial impression is great.  In addition to the improvements in the feeling of latency it's fantastic to be able to switch between wired and wireless connectivity without losing the terminal session.

I can definitely see this becoming an essential tool.

Installation on Mac OSX

On my OSX 10.7 client installation was as simple as:
sudo port install mosh

Installation on Debian Squeeze

Update: It appears mosh has arrived in Debian squeeze backports:

On the Debian Squeeze server it was a little more involved.  I tried using the Debian testing repository with apt preferences and Pin-Priority, but installing mosh wanted to upgrade around 20 packages to their testing versions.  Not something I want on production servers.

Fortunately building and installing mosh was easier than expected using the standard squeeze packages:
sudo aptitude install build-essential autoconf protobuf-compiler libprotobuf-dev libboost-dev libutempter-dev libncurses5-dev zlib1g-dev pkg-config
git clone
cd mosh
./autogen.sh && ./configure && make
sudo make install

Following this it was necessary to open up the relevant ports in the firewall:

$ sudo iptables -A INPUT -p udp -m multiport --dports 60000:61000 -j ACCEPT

Locale configuration

The Mosh developers have decided to support UTF-8 only, which doesn't seem like such a bad idea, but it does mean you'll have to ensure both ends of your connection properly support UTF-8 locales.  To do this the following conditions need to be met.

On the server, ensure you have the following line in your sshd_config.  On Debian this is located at /etc/ssh/sshd_config:

Server /etc/ssh/sshd_config
AcceptEnv LANG LC_*

OSX client ~/.ssh/config or /etc/ssh_config. Thanks to srmadden for this snippet.
Host *
    SendEnv LANG LC_*

Now confirm the locales are correct.  Initially, running locale on both client and server reported that the locales were all correctly UTF-8, yet checking the server locale via the one-liner "ssh remotehost locale" returned POSIX.  After adding the above config the correct locales were returned.

On the local workstation:
user@workstation $ locale  

And a regular connection to the remote host.
user@workstation $ ssh remotehost locale
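That comparison is easy to script.  A sketch (the function name is mine; on a live pair the second argument would come from ssh remotehost locale):

```shell
#!/bin/sh
# locales_match: succeed if LANG agrees between two `locale` outputs
locales_match() {
    [ "$(echo "$1" | grep '^LANG=')" = "$(echo "$2" | grep '^LANG=')" ]
}

# e.g.: locales_match "$(locale)" "$(ssh remotehost locale)"
locales_match "LANG=en_US.UTF-8" "LANG=en_US.UTF-8" && echo "locales match"
```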

Now go ahead and enjoy the low latency "feeling" and the ability to seamlessly move between connections.

Last but not least a big thank you to all the people that made the mosh project a reality!

Thursday, March 29, 2012

Opscode Chef client within an rbenv environment

Ruby dependencies and versions on distros aren't always maintained to our liking. I've had problems deploying Chef on CentOS appliances running Ruby on Rails apps that break when the packages for Chef are installed. I know the underlying dependency problems should be fixed properly, but that's not always going to happen as quickly as we'd like. So I adapted the default CentOS bootstrap script to install Chef within an rbenv environment.

Hopefully I won't need to ever do this for Debian.

Since this is limited to CentOS I have the following hard-coded condition in the recipe[chef-client::service]

*** service.rb.orig 2012-03-29 22:36:48.000000000 +0100
--- service.rb 2012-03-18 14:29:20.000000000 +0000
*** 50,55 ****
--- 50,60 ----
    init_content ="#{node["languages"]["ruby"]["gems_dir"]}/gems/chef-#{chef_version}/distro/#{dist_dir}/etc/init.d/chef-client")
    conf_content ="#{node["languages"]["ruby"]["gems_dir"]}/gems/chef-#{chef_version}/distro/#{dist_dir}/etc/#{conf_dir}/chef-client")
+   # We're always using rbenv on CentOS, so ensure the service does too
+   if platform?("centos") then
+  conf_content = "#{conf_content}\nexport PATH=\"$HOME/.rbenv/bin:/usr/local/bin:$PATH\"\neval \"$($HOME/.rbenv/bin/rbenv init -)\"\nrbenv shell $(rbenv versions | tail -n 1| grep -Eo '\w+\.\w+\.\w+-\w+')\n"
+   end
    file "/etc/init.d/chef-client" do
      content init_content
      mode 0755

Adapt the following bootstrap script as needed, and use at your own risk.

bash -c -x -e '
<%= "export http_proxy=\"#{knife_config[:bootstrap_proxy]}\"" if knife_config[:bootstrap_proxy] -%>

# knife bootstrap -N your_host -E development -r 'role[base-server]','role[lsb]' --template-file ~/scm/git/chef/bootstrap/centos5-rbenv.erb your_host_fqdn

export RBENV_VERSION="1.9.3-p125"

[ -f /usr/bin/git ] || yum -y install git
if [ ! -d ~/ruby-build ]; then
 cd ~/
 git clone git://
 cd ~/ruby-build
 sh install.sh
fi
if [ ! -d ~/.rbenv ]; then
 cd ~/
 git clone git:// .rbenv

 echo "export PATH=\"\$HOME/.rbenv/bin:/usr/local/bin:\$PATH\"" >> ~/.bash_profile
 echo "export PATH=\"\$HOME/.rbenv/bin:/usr/local/bin:\$PATH\"" >> ~/.zshenv

 echo "eval \"\$(\$HOME/.rbenv/bin/rbenv init -)\"" >> ~/.bash_profile
 echo "eval \"\$(\$HOME/.rbenv/bin/rbenv init -)\"" >> ~/.zshenv

 echo "source \$HOME/.bash_profile" >> ~/.bashrc
 echo "rbenv shell $RBENV_VERSION" >> ~/.bashrc

 echo "source \$HOME/.zshenv" >> ~/.zshrc
 echo "rbenv shell $RBENV_VERSION" >> ~/.zshrc
fi

export PATH="$HOME/.rbenv/bin:/usr/local/bin:$PATH"
eval "$($HOME/.rbenv/bin/rbenv init -)"

if $HOME/.rbenv/bin/rbenv versions | grep -q $RBENV_VERSION && [ -x $HOME/.rbenv/shims/ruby ]; then
 eval "`$HOME/.rbenv/bin/rbenv sh-shell $RBENV_VERSION`"
else
 export CONFIGURE_OPTS="--disable-install-doc"
 #export MAKEOPTS="-j$(cat /proc/cpuinfo | grep ^processor | tail -n1 | awk \"{print $3 + 2}\")"
 rm -rf /tmp/ruby-build*
 rbenv install $RBENV_VERSION &
 while [ ! -d /root/.rbenv/versions/1.9.3-p125/lib/ruby/1.9.1 ]; do
  sleep 5
  echo "Waiting for ruby-build to complete."
 done
 $HOME/.rbenv/bin/rbenv sh-shell $RBENV_VERSION
fi

if [ ! -f /usr/bin/chef-client ]; then
 rpm -qa epel-release | grep -q epel-release || {
  wget <%= "--proxy=on " if knife_config[:bootstrap_proxy] %>
  rpm -Uvh epel-release-5-4.noarch.rpm
 }

 [ -f /etc/yum.repos.d/aegis.repo ] || wget <%= "--proxy=on " if knife_config[:bootstrap_proxy] %>-O /etc/yum.repos.d/aegis.repo
 yum install -y gcc gcc-c++ automake autoconf make
fi

RBENV_BIN=$HOME/.rbenv/shims
GEM_OPTS="--no-rdoc --no-ri"

gem update $GEM_OPTS --system
gem update $GEM_OPTS
[ -x $RBENV_BIN/ohai ] || gem install ohai $GEM_OPTS --verbose
[ -x $RBENV_BIN/chef-client ] || gem install chef $GEM_OPTS --verbose <%= bootstrap_version_string %>

mkdir -p /etc/chef

for x in chef-client chef-solo knife ohai shef; do
 [ -h /usr/bin/${x} ] && rm -f /usr/bin/${x}
 ln -s $RBENV_BIN/${x} /usr/bin/${x}
done

( cat <<'EOP'
<%= validation_key %>
EOP
) > /tmp/validation.pem
awk NF /tmp/validation.pem > /etc/chef/validation.pem
rm /tmp/validation.pem

( cat <<'EOP'
<%= config_content %>
EOP
) > /etc/chef/client.rb

( cat <<'EOP'
<%= { "run_list" => @run_list }.to_json %>
EOP
) > /etc/chef/first-boot.json

<%= start_chef %>'

Monday, February 27, 2012

RabbitMQ startup and "Too short cookie string"

While setting up a RabbitMQ cluster through the Opscode Chef cookbook I ended up in a situation where rabbitmq-server wouldn't start, even without a config file.  The error logs showed:

> /var/log/rabbitmq/startup_err
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

> /var/log/rabbitmq/startup_log
{error_logger,{{2012,2,27},{18,8,41}},"Too short cookie string",[]}
{error_logger,{{2012,2,27},{18,8,41}},crash_report,[[{initial_call,{auth,init,['Argument__1']}},{pid,<0.19.0>},{registered_name,[]},{error_info,{exit,{"Too short cookie string",[{auth,init_cookie,0},{auth,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[<0.17.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,822}],[]]}
{error_logger,{{2012,2,27},{18,8,41}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{"Too short cookie string",[{auth,init_cookie,0},{auth,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{offender,[{pid,undefined},{name,auth},{mfargs,{auth,start_link,[]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
{"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"} 

The solution was simple. Checking in the data folder I saw the .erlang.cookie was of size zero:
# ls -al /var/lib/rabbitmq
total 272
drwxr-xr-x  3 rabbitmq rabbitmq   4096 Feb 27 18:05 .
drwxr-xr-x 28 root     root       4096 Feb 27 18:04 ..
-r--------  1 rabbitmq rabbitmq      0 Feb 23 16:45 .erlang.cookie
-rw-r-----  1 rabbitmq rabbitmq 213540 Feb 27 18:08 erl_crash.dump
drwxr-xr-x  4 rabbitmq rabbitmq   4096 Feb 23 16:44 mnesia

Simply remove the cookie file and start rabbitmq-server again, and it succeeds.
# rm /var/lib/rabbitmq/.erlang.cookie
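To catch this earlier, a pre-flight check before starting the broker could flag the zero-byte cookie.  The helper below is a sketch (the function name is mine; the path is the Debian default):

```shell
#!/bin/sh
# empty_cookie: succeed (and warn) if the cookie file exists but is empty
empty_cookie() {
    if [ -f "$1" ] && [ ! -s "$1" ]; then
        echo "empty Erlang cookie: $1"
        return 0
    fi
    return 1
}

# e.g. before starting the broker:
#   empty_cookie /var/lib/rabbitmq/.erlang.cookie \
#       && rm /var/lib/rabbitmq/.erlang.cookie
```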

Monday, May 23, 2011

Upgrading Chef 0.9 to 0.10 - some gotchas

There are a couple of problems I experienced with the upgrade of Opscode Chef from 0.9 to 0.10.

No DEB packages yet

Impatient as I was, I wanted to upgrade before the Ubuntu debs were released. So I uninstalled the deb install and did the Ruby gem install, keeping my existing config & DB.  Perhaps I'm a bit naïve to think this should work :p

RabbitMQ not properly setup by chef-solo

Once the chef-solo bootstrap was complete I started up all the services and tried things out. It didn't work; I couldn't do the last step in the upgrade docs: knife index rebuild.

Turned out this was related to RabbitMQ, which was also filling up the disk quickly with error logs.  The solution was to update the RabbitMQ password: rabbitmqctl change_password chef testing.

Internal Server Error 500

Another error I kept getting on some servers is
File with checksum a73b7f6222549364ab0d6c4ed2442abf not found in the repository (this should not happen) - (Merb::ControllerExceptions::InternalServerError)"
It looks like there's an inconsistency between CouchDB and the filesystem cache.

The tips on chef bug #1397 explain how this can be fixed with a combination of Shef and rake install:
# shef
chef > require 'chef/checksum'
  => false 
chef > r = Chef::REST.new('http://localhost:5984/chef/_design/checksums/_view/', false, false)
 => #<Chef::REST:0x7f831db382b8 @redirect_limit=10, @cookies={}, @redirects_followed=0, @auth_credentials=#, @url="http://localhost:5984/chef/_design/checksums/_view/", @sign_request=true, @default_headers={}, @sign_on_redirect=true>
chef > r.get_rest("all")["rows"].each do |c| c["value"].cdb_destroy end

After doing this I did the rake install. During this process I had some difficulties that resulted in this error:
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/openssl/buffering.rb:178:in `syswrite': Broken pipe (Errno::EPIPE)
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/openssl/buffering.rb:178:in `do_write'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/openssl/buffering.rb:192:in `write'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:177:in `write0'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:153:in `write'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:168:in `writing'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:152:in `write'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:1557:in `send_request_with_body_stream'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:1527:in `exec'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:1048:in `__request__'
        from /Library/Ruby/Gems/1.8/gems/rest-client-1.6.1/lib/restclient/net_http_ext.rb:15:in `request'
        from /Library/Ruby/Gems/1.8/gems/rest-client-1.6.1/lib/restclient/request.rb:167:in `transmit'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:543:in `start'
        from /Library/Ruby/Gems/1.8/gems/rest-client-1.6.1/lib/restclient/request.rb:166:in `transmit'
        from /Library/Ruby/Gems/1.8/gems/rest-client-1.6.1/lib/restclient/request.rb:60:in `execute'
        from /Library/Ruby/Gems/1.8/gems/rest-client-1.6.1/lib/restclient/request.rb:31:in `execute'
        from /Library/Ruby/Gems/1.8/gems/rest-client-1.6.1/lib/restclient/resource.rb:72:in `put'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:134:in `uploader_function_for'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:25:in `call'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:25:in `setup_worker_threads'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:24:in `loop'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:24:in `setup_worker_threads'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:23:in `initialize'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:23:in `new'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:23:in `setup_worker_threads'
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:135:in `map'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:22:in `each'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:22:in `map'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:22:in `setup_worker_threads'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_uploader.rb:69:in `upload_cookbook'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/knife/cookbook_upload.rb:138:in `upload'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/knife/cookbook_upload.rb:74:in `run'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_loader.rb:89:in `each'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/cookbook_loader.rb:88:in `each'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/knife/cookbook_upload.rb:72:in `run'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/knife.rb:391:in `run_with_pretty_exceptions'
        from /Library/Ruby/Gems/1.8/gems/chef-0.10.0/lib/chef/knife.rb:166:in `run'

To resolve this problem I removed the nginx proxy from the communications chain. The problem occurred in just a single recipe, and I believe the culprit was a 2.2MB file that was failing to upload through the proxy. Yes, I know Chef should not be used to ship 2.2MB files.

After removing the proxy from the chain it was possible to do the rake install and all worked well.

Tuesday, April 12, 2011

LXC, interface bonding, vlans, macvlan and communication with host

I've been using LXC for several months now and have found it to be excellent. There are a few caveats, however these are being dealt with as LXC matures.

One of the difficulties with setting up a container system can be how the network is configured. In most cases if communication is required between the host and the container the advice is to use linux bridges on your host system. This works well in most cases, except with some network drivers configured for interface bonding with bridges and sub-interfaces on other vlans.

To summarize, the following didn't work with the tigon3 driver on Ubuntu 10.04.2 with a PPA 2.6.38-8-server kernel. This may or may not be the case with other drivers and distros.

  slaves eth0 eth1

So it turned out the tigon3 driver didn't support vlans when the base port was part of a bridge. This meant bond0.100 was behaving erratically, even with various bridge options configured, such as bridge_fd 0, etc.

The result was that using veth mode for the lxc containers was out of the question. So I turned to macvlan. Unfortunately the containers can't communicate with the host system when using the macvlan interface type, so another hurdle here.

Enter the newer macvlan bridge mode. The lxc.conf man page indicates that when using macvlan bridge mode the guests can communicate with each other. Unfortunately they still can't communicate with the host. Easily solved: give the host a macvlan interface in bridge mode too. Well, not so easily solved, since the Ubuntu 10.04 version of iproute2 doesn't support this.

So I hacked around this by installing the Debian squeeze version of iproute into Ubuntu Lucid, then did the config and it works. Note that I'm using a PPA 2.6.38 kernel, which has a more recent implementation of the macvlan code.

Once you have the right kernel and the right iproute2 version you need the following config in your /etc/network/interfaces to make this work:
auto bond0
iface bond0 inet static
    bond-slaves eth0 eth1
    bond-mode 1
    bond-miimon 100
    # address & netmask are required to satisfy the startup scripts

auto mv0
iface mv0 inet static
    pre-up ip link add link bond0 name mv0 type macvlan mode bridge

auto bond0.100
iface bond0.100 inet static

Now the host and guests can all communicate with each other and the guests will appear on the network with their own MAC address.

This approach also works on Debian Squeeze 6.0 with the FAI kernels.

Thursday, January 20, 2011

Determine the latency to a SixXS Point of Presence

I've recently moved from NL to the UK and was looking at my SixXS IPv6 link. I wanted to make sure I'm getting the best connection possible, so went about checking the latency between my endpoint and the PoPs.

First - I did a simple mtr to my existing PoP in NL, then to the only UK based PoP. This initial test showed the latency to my existing endpoint was around 23ms, while it was around 30ms to the local UK based PoP. That didn't seem right, so I dug further.


mtr uses ICMP echo requests to determine latency. By default these are 56 bytes, which is an unrealistic packet size for determining practical latency between two points. Once the packet size was set to something more reasonable, such as --psize=1496, I saw lower latency to the UK PoP.

mtr --psize=1496

Aside from using mtr, there is another way to get a broader idea of the latency between your endpoint and the available PoPs. The PoPs page at SixXS lists all currently active PoPs. Using this, here's a one-liner to fping all of them and quickly determine your best link:

wget -q --no-check-certificate -O - | grep -Eo '[a-z]{5}[0-9]{2}' | grep -v xhtml | sort -u | while read PoP; do echo -n "$ "; done | xargs fping -i 100 -p 50 -b 1208 -c 50 -n -s -a
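To rank the results rather than eyeballing them, the fping summary lines (which fping writes to stderr) can be sorted by their average RTT. The helper below is a sketch assuming fping's min/avg/max summary format (the function name is mine):

```shell
#!/bin/sh
# best_pop: read `fping -c` summary lines on stdin and print
# "avg-rtt host" sorted ascending, so the best PoP comes first
best_pop() {
    awk -F'[ /]+' '/min\/avg\/max/ {print $(NF-1), $1}' | sort -n
}

# e.g.: fping -c 50 pop1 pop2 ... 2>&1 | best_pop | head -n 1
```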

Sunday, January 2, 2011

LIRC in the 2.6.36 linux kernel

I've just upgraded to the 2.6.36 kernel and have found that lirc stopped working. lircd would load with the lirc-0.8.6 lirc_dev and lirc_sir modules in my kernel, but when irexec started I'd get the following kind of kernel oops:

BUG: unable to handle kernel NULL pointer dereference at 0000005c
IP: [] lirc_get_pdata+0x2e9/0x841 [lirc_dev]
*pde = 00000000 
Oops: 0000 [#6] 
last sysfs file: /sys/devices/virtual/block/md3/dev
Modules linked in: lirc_sir lirc_dev cls_route cls_u32 cls_fw sch_sfq sch_htb ipt_addrtype xt_DSCP xt_dscp xt_NFQUEUE xt_iprange xt_hashlimit xt_connmark nf_conntrack_sip w83697hf_wdt [last unloaded: lirc_dev]

Pid: 22422, comm: lircd Tainted: G      D #3 CN700-8237R/ 
EIP: 0060:[] EFLAGS: 00010246 CPU: 0
EIP is at lirc_get_pdata+0x2e9/0x841 [lirc_dev]
EAX: f63fdec0 EBX: 80046900 ECX: 08062348 EDX: 80046900
ESI: f63fdec0 EDI: 00000000 EBP: 00000007 ESP: dc67bf14
 DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
Process lircd (pid: 22422, ti=dc67a000 task=f3dddc00 task.ti=dc67a000)
 08062348 f63fdec0 fd6f82bd 00000007 00000007 c1079b76 c1deb3a0 f576a7ac
<0> b78287c0 00000004 00000028 c105d5f3 f5d17b78 000000c7 00000000 00000000
<0> 00000000 f5dbd900 b78287c0 f5d17b78 000000a0 00000246 f63fdec0 00000020
Call Trace:
 [] ? lirc_get_pdata+0x2bd/0x841 [lirc_dev]
 [] ? do_vfs_ioctl+0x456/0x4a1
 [] ? handle_mm_fault+0x2e0/0x672
 [] ? sys_ioctl+0x44/0x64
 [] ? sysenter_do_call+0x12/0x26
Code: 57 56 89 c6 53 89 d3 83 ec 04 83 3d 78 98 6f fd 00 89 0c 24 8b 78 68 74 12 52 ff 77 28 57 68 37 8f 6f fd e8 5a 0d ce c3 83 c4 10 <8b> 47 5c 85 c0 74 1d 8b 68 20 85 ed 74 16 8b 0c 24 89 f0 89 da 
EIP: [] lirc_get_pdata+0x2e9/0x841 [lirc_dev] SS:ESP 0068:dc67bf14
CR2: 000000000000005c
---[ end trace 1bec319525f4926b ]---

In my digging I found that parts of lirc were merged into the linux kernel as of 2.6.36, but it's not obvious how to make use of this yet.

First - the lirc_dev module is selected from the "Drivers > Staging" section. But it won't appear there until you enable "Drivers > Multimedia > Infrared remote control adapters". This is where IR_CORE is selected.

Next - lirc-0.9 is going to be needed to make proper use of these in-kernel modules. As of writing this lirc-0.9 is still pre-release, so I have taken the shortest path to get a working remote.

On my gentoo system, after building and installing a new kernel I used to emerge lirc again to make sure the modules were in place. This is no longer necessary since the modules I need are fully in-kernel. I've found that simply replacing the lirc-built modules with the in-kernel ones, while continuing to use lirc-0.8.7, appears to work. The lirc modules loaded are lirc_dev and lirc_sir.

See the new maintainer's blog post about this at

Tuesday, December 28, 2010

LXC Linux Containers, Ubuntu & udev

I recently started using linux containers instead of xen virtualization. It's not a fully mature setup yet, but I prefer the approach for what my needs are. Plus with the evolving cgroups feature in the kernel it's shaping up to be an efficient way to have multiple independent environments without the overhead of virtualization. For example, IIRC there are fewer context switches required when using LXC to access the network.

I have a base host of Debian Squeeze (currently in testing as of this writing). I have Debian Lenny, Ubuntu Lucid, and Gentoo as guest systems. The Debian Squeeze installer works well for Lenny and Lucid, but the Ubuntu folks haven't taken the necessary steps to make Ubuntu play nice in a container.

One main glitch I found with Ubuntu Lucid was that during regular system upgrades I received a new udev package, which started causing problems with dpkg. Essentially we don't want udev in the guest, since the host manages the /dev/ filesystem. If your container is set up with a default deny on the dev fs, you'll have seen errors like these:

Setting up udev (151-12.2) ...
mknod: `/lib/udev/devices/ppp': Operation not permitted
dpkg: error processing udev (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of plymouth:
 plymouth depends on udev (>= 149-2); however:
  Package udev is not configured yet.
dpkg: error processing plymouth (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install.  Trying to recover:
Setting up udev (151-12.2) ...
mknod: `/lib/udev/devices/ppp': Operation not permitted
dpkg: error processing udev (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of plymouth:
 plymouth depends on udev (>= 149-2); however:
  Package udev is not configured yet.
dpkg: error processing plymouth (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:

In this error message we see that the udev.postinst script is trying to make a node in /dev/, which we don't want it to do.

There is probably a more graceful way to fix this, but for now I'm quite happy to hack it outta my way by editing /var/lib/dpkg/info/udev.postinst and putting an exit 0 before anything else is done in the script. Once that's done just reconfigure it and it should work:

# dpkg --configure udev
# dpkg --configure plymouth
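The short-circuit itself can be scripted with sed. A minimal sketch, run here against a throwaway stand-in file rather than the real /var/lib/dpkg/info/udev.postinst, so it's safe to try anywhere:

```shell
# Create a stand-in postinst that attempts the forbidden mknod
postinst=$(mktemp)
printf '#!/bin/sh\nmknod /lib/udev/devices/ppp c 108 0\n' > "$postinst"

# Insert "exit 0" right after the shebang so nothing below it ever runs
sed -i '1a exit 0' "$postinst"

sh "$postinst" && echo "postinst neutralized"
rm -f "$postinst"
```

On a real system you'd run the sed line against the actual postinst, then do the two dpkg --configure calls above.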

Wednesday, November 10, 2010

Using AutoFS to mount EncFS over CIFS


My hosting provider offers backup space accessible using FTP, SSH/SFTP, and CIFS. I want to make use of this space, but I don't want to put potentially sensitive data on there unencrypted, so I went looking for a solution.

I considered using LUKS, but it's not ideal because in this case I'd have to create a disk image on the remote storage which would be remounted via a loop device locally. In addition it would be complicated to resize the remote image.

I found EncFS to be a suitable alternative. It provides an encrypted filesystem in user-space, and works on file level instead of on a block level. This way there's no need to determine your block device size in advance.

I won't go into how to set up an EncFS mount as there's plenty of documentation out there for this.

First round: EncFS over CurlFTPfs

Initially I thought the hosting provider only gave FTP access to the backup space, so I went about stacking EncFS on top of CurlFTPfs - a most inelegant solution. It appeared to work, however I found that eventually doing an rsync through all the layers was causing problems. I tried limiting the rsync bandwidth with the --bwlimit option, but it only delayed the problem. I guessed CurlFTPfs was at fault, so I did some more digging.

Second round: EncFS over CIFS/Samba

I found there was also CIFS (Samba) access to the backup space, so I tried out the same EncFS on top of the CIFS mount and it worked great. No more problems. So I wanted to automate this as much as possible, removing the extra logic to make sure the CIFS mount is in place before mounting the EncFS space.


Unfortunately this is where the bad news comes (although there is also good news later). I tried various approaches but was unable to get EncFS to mount via AutoFS, although it does work through fstab. This appears to be due to how EncFS and AutoFS manage creation of the mount point.

So the current setup is to use AutoFS to mount the CIFS share, and to mount the EncFS share with its own idle handling logic.

The relevant configs are:
/net  /etc/auto.hetzner  --timeout=150

ht-backup-raw  -fstype=cifs,credentials=/etc/backup/samba.auth  ://


Creating an automated way to mount the EncFS space also required some hackery. There is an option to use an external password program, however this can't be referenced in /etc/fstab, so an additional FUSE wrapper script needs to be created. This script receives 4 arguments. The script I've written is not perfect, but enough for the task:



#!/bin/sh
# Called from fstab as a FUSE helper: $1=raw dir, $2=mount point, $3=type, $4=options
if echo "$4" | grep -q 'autofs'; then
MOUNTPOINT_PATH=/$(basename "$2")
encfs --extpass="/etc/encfs${MOUNTPOINT_PATH}.sh" "$1" "${MOUNTPOINT_PATH}" -o "$OPTIONS"
else
encfs --ondemand --idle=1 --extpass="/etc/encfs${2}.sh" "$1" "$2" -o "$4"
fi

You can see my feeble attempt in there to debug and mount via autofs, but it didn't work. I leave it there for others to try if they wish.

The final bit of the magic is:

encfs-extpass#/net/ht-backup-raw  /ht-backup  fuse  defaults  0 0

Here I have a line in /etc/fstab that uses FUSE to call my custom script, which in turn mounts my encfs space, but without an interactive prompt. This is because I have a script at /etc/encfs/ that echoes out the password, which is all encfs needs. I make sure the password script is chmod 400 and owned by root, so my password is relatively safe.
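The password helper itself is trivial. A sketch, with the path and passphrase obviously hypothetical - encfs runs the helper via --extpass and reads the passphrase from its stdout:

```shell
# Hypothetical helper, e.g. /etc/encfs/ht-backup.sh (name assumed),
# kept chmod 400 and root-owned on the real system.
cat > /tmp/ht-backup.sh <<'EOF'
#!/bin/sh
echo 'not-my-real-passphrase'
EOF
chmod 400 /tmp/ht-backup.sh
sh /tmp/ht-backup.sh    # prints not-my-real-passphrase
rm -f /tmp/ht-backup.sh
```

Whatever the helper prints to stdout is taken verbatim as the passphrase, so make sure it prints exactly one line with no extra output.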


Now when the system comes up the EncFS space is mounted automatically. I haven't tested the idle unmounting thoroughly yet, but in theory it should work.

Thursday, September 23, 2010

Rolling upgrades of a gentoo system

I have a gentoo system I built back in 2000 or 2001 that has served many purposes in my home. It started out as my main workstation, then when I moved it became a remote server, and now it's back in my home as a media center/ltsp server/wan router/nas/etc.

I've always had it mirror raided and migrated the disks between motherboards for upgrades, but essentially it's the same old installation from 10 years ago. About once a year or two I do a full emerge world, and I've just completed one today.

Overall I started the upgrade about a week ago and have learned some useful practices this time around.

I use a couple of laptops with distcc which I boot via LTSP so I can easily keep the glibc and gcc versions in line between systems. These systems improve the build times massively since my main system is a fanless Via C7 at 1.2GHz.

The first step is to upgrade any critical services independently so you can keep an eye on them. When doing this you should emerge --newuse --deep --update (atom) to ensure as many of the related packages are rebuilt too.

I also like to use a few other flags for convenience: emerge -va --newuse --deep --update --tree --keep-going --jobs=4 (atom).
  • The --tree flag is a cosmetic addition so I can visualize the dependencies of the build plan before approving it (-va).
  • The --keep-going flag allows building of subsequent packages so as much as possible gets done.
  • The --jobs=4 flag allows multiple non-dependent packages to be merged simultaneously - this can speed things up. I also tried the --load-average=6.0 setting, but it was causing my distcc slaves to block compiles - I suspect because the master NFS server was too busy coordinating the compiles.
Some caveats I faced while upgrading were:
  1. The dev-lang/mono package is buggy. I was moving from dev-lang/mono:1 to dev-lang/mono:2 and it kept failing to compile. It turned out that this is a known issue and compiling a new mono instance will make use of a pre-existing installation causing the failure. The workaround is to unmerge the old one before emerging the new one!
  2. The media-libs/libcanberra package doesn't like to be distributed or run with anything greater than MAKEOPTS="-j1".
Overall with these tips it should be possible to do a clean world update in less than a week ;)

Wednesday, April 28, 2010

Upgrading iDRAC firmware (Dell IPMI)

Upgrading the Dell iDRAC (IPMI/BMC) firmware on a non-RedHat system is a painful experience.
DISCLAIMER: This process bypasses all the checks, licenses agreements, notes and anything else meant to protect you from yourself. Only carry out these steps if you really do know what you're doing!

UPDATE: These instructions aren't currently working on Debian Squeeze AMD64.

Right - so here's the steps:
  1. Download the latest firmware from
  2. Prepare a Debian sub-environment. For instructions see
    1. The instructions are for a Debian Etch 4.0 environment. You can easily use those instructions to build the latest Debian environment by substituting etch for the latest distro, e.g. lenny.
    2. Take a backup of your new debian sub-environment. It'll save your life when least expected while working with commercial, so-called "open" tools that don't work anywhere except on RedHat.
    3. You can clear out a few unused packages as detailed after the chroot command below. You can also clear out downloaded package files before packaging your environment for re-use.
  3. Ensure a few mounts are in place within your Debian sub-environment:
    mount -o bind /dev /debian/dev
    mount -t proc none /debian/proc
    mount -t sysfs none /debian/sys
  4. Make sure the necessary modules are loaded for the actual flashing:
    modprobe ipmi-devintf
    modprobe ipmi-si
  5. Enter the Debian sub-environment
    chroot /debian /bin/bash
    1. At this point you can purge a few unnecessary apps so they don't interfere with your main environment, and to save a bit of space:
      # aptitude purge bsd-mailx cron exim4 exim4-base exim4-config \
      exim4-daemon-light iptables klogd laptop-detect logrotate \
      sysklogd tasksel tasksel-data
    2. And add a few useful tools: sysvconfig provides RedHat-like service controls; vi(m), well, enough said..
      # aptitude install sysvconfig vim
      UPDATE: if you're running a 64bit system you'll also need to install 32bit support:
      # aptitude install ia32-libs rpm
    3. Finally clean out unnecessary package files from /var/cache/apt/.
      # aptitude clean
  6. Unpack the new firmware package and apply the firmware update (it's a good idea to do this in a screen session to avoid being interrupted):
    # screen -R -DD
    # ./IDRAC6_FRMW_LX_R257033.BIN --extract /tmp/idrac
    # cd /tmp/idrac
    # ./bmcfwul -i=payload/firmimg.d6
    • The final command, bmcfwul, can take a long time and may appear to hang. Be patient - it will report back eventually.
    • You will have to make sure the ipmi-devintf and ipmi-si kernel modules are loaded in your main OS instance

  7. That's it - you should now have an upgraded firmware.

Note that you must unmount /debian/dev, /debian/proc and /debian/sys before packaging.
# umount /debian/dev
# umount /debian/proc
# umount /debian/sys
# tar -vcjf /tmp/debian.tbz2 /debian


Monday, March 22, 2010

MySQL table & index sizes

This almost works for getting the table stats - it needs a tweak to correctly report database and index space usage, but I don't have time to check that now.

select table_name, engine, table_rows as tbl_rows, avg_row_length as rlen,
floor((data_length+index_length)/1024/1024) as allMB,
floor((data_length)/1024/1024) as dMB,
floor((index_length)/1024/1024) as iMB
from information_schema.tables
where table_schema=database()
order by (data_length+index_length) desc;
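The MB columns are just integer-floored byte counts. A quick sanity check of the arithmetic, with made-up figures for one hypothetical table:

```shell
# Hypothetical byte counts for one table
data_length=52428800     # 50 MB of row data
index_length=10485760    # 10 MB of index
echo "allMB=$(( (data_length + index_length) / 1024 / 1024 ))"   # allMB=60
echo "dMB=$(( data_length / 1024 / 1024 ))"                      # dMB=50
echo "iMB=$(( index_length / 1024 / 1024 ))"                     # iMB=10
```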

Thursday, February 4, 2010

Grow linux md raid5 with mdadm --grow

Growing an mdadm RAID array is fairly straightforward these days. There are a few limitations, depending on your setup, and I strongly recommend you read the mdadm man page in addition to the notes here.

A couple of the limitations include:
  • raid arrays in a container can not be grown, so this excludes DDF arrays
  • arrays with 0.9x metadata are limited to 2TB components - the total size of the array is not affected though

Before you start it's a good idea to run a consistency check on the array. Depending on the size of the array this can take a looong time. On my 3 x 1Tb RAID5 array this usually takes around 10 hours with the default settings. You can explore tweaking the settings, though I haven't done this for checks yet. We will see how we can tweak the settings for the reshape later on.

Running a consistency check is done as follows. I don't have the sample mdstat output at this time but have included the commands for completeness.
# echo check >> /sys/block/md3/md/sync_action
# cat /proc/mdstat

You'll see if any errors were corrected in the array parity in the dmesg output and/or kernel logs.

Once the check is complete you should be safe to grow the array. First you have to add a new device to it so there is a spare drive in the set.
mdadm --add /dev/md3 /dev/sdc1

The event will appear in the dmesg output and the spare will show up in mdstat:
# dmesg
md: bind

# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sdc1[s] sdb1[0] sda1[2] hdd1[1]
1953519872 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

Now the spare is there - you can give the command to grow the array:
# mdadm --grow /dev/md3 --backup-file=~/mdadm-grow-backup-file.mdadm --raid-devices=4

The array now starts reshaping. You can monitor progress:
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sdc1[3] sdb1[0] sda1[2] hdd1[1]
1953519872 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
[>....................] reshape = 3.3% (33021416/976759936) finish=1421.0min speed=10341K/sec

In dmesg you should see something like this:
# dmesg

RAID5 conf printout:
--- rd:4 wd:4
disk 0, o:1, dev:sdb1
disk 1, o:1, dev:hdd1
disk 2, o:1, dev:sda1
disk 3, o:1, dev:sdc1
md: reshape of RAID array md3
md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
md: using 128k window, over a total of 976759936 blocks.

Before tweaking the speed settings, now is a good time to edit your /etc/mdadm.conf file with the new ARRAY changes so it's recognized and started on your next reboot.

Now we can tweak the speed settings to speed up the reshape time. I played around with a few settings and found the following to be good for my own system.

# echo 8192 >> /sys/block/md3/md/stripe_cache_size
# echo 15000 >> /sys/block/md3/md/sync_speed_min
# echo 200000 >> /sys/block/md3/md/sync_speed_max

On my system this cut about a third off the predicted finish time:
# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sdc1[3] sdb1[0] sda1[2] hdd1[1]
1953519872 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
[>....................] reshape = 4.0% (39983488/976759936) finish=957.6min speed=16303K/sec

It seems values greater than 8192 for stripe_cache_size were more harmful than beneficial on my system. It's not clear to me if this is CPU bound or bandwidth to the drives, though looking at older posts I suspect both can play a role.

Also note that reducing the stripe_cache_size may not occur immediately when you echo a smaller value to the file. I had to echo smaller values several times before the value was adopted. This was on kernel

You can monitor the stripe_cache_active file to see how filled the cache is:
# cat /sys/block/md3/md/stripe_cache_active

When the reshape is complete you will still need to grow the file system (or volume group if you use LVM) contained in there. I'll document that tomorrow when my reshape is complete ;)
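In the meantime, for a plain ext3/ext4 filesystem directly on the array the follow-up step looks roughly like this. A hedged sketch only - the device name is assumed, and LVM setups need to grow the PV and LV first:

```shell
# Hedged sketch: grow the filesystem to fill the reshaped array.
grow_md_fs() {
    dev="$1"
    e2fsck -f "$dev"    # force a filesystem check before resizing
    resize2fs "$dev"    # with no size argument, grows to fill the device
}
# grow_md_fs /dev/md3  # run on the real system once the reshape finishes
echo "grow_md_fs defined"
```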

Wednesday, January 20, 2010

Virtual Inbox in Thunderbird

Right - I've finally figured out how to set up a virtual inbox in Thunderbird 3 that centralizes messages from multiple accounts and folders into a single location.
  1. Select an existing folder on an existing IMAP account
  2. From the menu select File > New > Saved Search
  3. Decide where you want to keep the Virtual Inbox. It can't be top-level, so I choose Local Folders as the parent.
  4. Name the folder, such as Virtual Inbox.
  5. Use the Choose button to select the source folders of your messages
  6. Select "Match all messages"
  7. Done

Wednesday, January 6, 2010

Hack OTA installation of BlackBerry applications

Finally found a way to install BB apps when the data plan doesn't allow the BB browser to work properly.

I wrote a little bash script to do the dirty work for me. Works a treat for me :) But I can't guarantee it won't hose your phone :p

The script can be retrieved from github at

Monday, January 4, 2010

Hard disk upgrade quick reference

Howto copy entire system to a new disk

echo cp -a `/bin/ls -1Ab | egrep -v '^(new|dev|proc|vols|sys)$'` /new/
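The backticked part is the interesting bit: list everything in / and filter out the mount points that must not be copied onto the new disk. The leading echo makes it a dry run - remove it to actually copy. The filter on its own, against a hypothetical root listing:

```shell
# Stand-in for the `/bin/ls -1Ab` listing of /; the egrep drops the
# entries that must not be copied to the new disk.
printf '%s\n' bin dev etc home new proc sys vols usr \
  | egrep -v '^(new|dev|proc|vols|sys)$'
# prints bin, etc, home, usr (one per line)
```

dev, proc and sys are recreated or mounted at boot, new is the destination itself, and vols holds other mounted volumes, so copying any of them would be wrong or recursive.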

Friday, December 11, 2009

Speeding up Firefox on OSX

Wow! There's two very simple actions that will speed up firefox.

After some time of using firefox the address bar can slow down considerably. SpeedyFox is an extension that claims to speed up the browser, but there's no need for an extension - it's simple enough to do manually.

Here's how you do it.

1. Quit firefox.

2. Open your favorite terminal

3. Change to your profile directory
cd ~/Library/Application\ Support/Firefox/Profiles/your_profile_code.default

4. Issue the command to vacuum your sqlite db's
for x in *.sqlite; do echo 'VACUUM;' | sqlite3 ${x}; done

That's it! Next time you start firefox it'll be faster.

And the second tweak you can make is to reduce the page render delay. I guess firefox has this since it assumes pages take a while to come in - but I don't see the point.

1. Open up the about:config area:

2. Edit the nglayout.initialpaint.delay config item (create it as an integer if it doesn't exist), and set it to 0 (zero). This tells firefox to start drawing the page immediately instead of waiting its default fraction of a second.