Here are two interesting articles I stumbled across. The first contains some pro racers’ tips for nutrition during races. The second is an interesting article from Joe Friel about fatigue and the relationship to CTL/ATL.

Hope you find them interesting and/or useful.

Well, I thought I ought to sort out my laptop's connection to my CompuTrainer in preparation for the upcoming winter indoor training. As it turns out, I ran into a few issues, as my laptop has been rebuilt since I last used it with the CompuTrainer. Until now I've been using the CompuTrainer handlebar control for varying the power, rather than riding courses or 3D videos.

Anyway, the bits of software I've installed are:

CompuTrainer 3D

Download the latest version (v3.0, 2010/08/05) and install. Make sure to install into C:\ and not C:\Program Files.

Coaching Software CS

Download and install into C:\ as well. This is now free.

Topo GPS

Download and install. I also needed to download the “other” topogps.exe from

which was included in the forum post at

Without this update I was unable to view/edit 3DC files, as described in the forum post. The error in the application’s fatal.log file was:

Error in X:\_f\lib\velib2\willcourse.cpp at line 113:
nlegs error

This is a great piece of software which allows one to convert .gpx files into .3dc files which can be “ridden” using the CompuTrainer 3D software.


I am considering getting the updated RacerMate One software, which has recently been released. I do wonder how useful it will be, though. Hopefully DC Rainmaker will do an in-depth review of the software in the near future.


Well, some of you astute followers of mine noticed that there was some unusual behaviour of the blog earlier today. This was due to the blog software being upgraded and migrated to a new server. It seems that the WordPress export/import feature doesn’t work quite as well as one would hope, particularly in relation to images and other uploaded media.

All issues should be resolved now and the blog should be working as expected.


When using Veritas Cluster Server (VCS), the following error messages from your system logs can indicate a problem with the cluster heartbeat interconnects:

Dec 12 15:26:20 serverb llt: [ID 194859 kern.notice] LLT:10019: delayed hb 935350 ticks from 1 link 0 (qfe:0)
Dec 12 15:26:20 serverb llt: [ID 761530 kern.notice] LLT:10023: lost 18706 hb seq 3448194 from 1 link 0 (qfe:0)
Dec 12 15:26:20 serverb llt: [ID 194859 kern.notice] LLT:10019: delayed hb 935350 ticks from 1 link 1 (eri:0)
Dec 12 15:26:20 serverb llt: [ID 761530 kern.notice] LLT:10023: lost 18706 hb seq 3448194 from 1 link 1 (eri:0)

These types of messages can be seen when you are running two LLT links over the same physical network. This is bad from a design point of view, as it may introduce a single point of failure. However, there are situations where you may have two physical connections into your cluster servers and have the links run over the same VLAN. If you are sure your interconnects are working properly and you are seeing these errors because of the issue described above, then you should be able to solve it by changing the /etc/llttab file on all cluster members.

By default, on Solaris, your /etc/llttab file will look something like this:

set-node servera
set-cluster 1
link eri0 /dev/eri:0 - ether - -
link qfe0 /dev/qfe:0 - ether - -
link-lowpri ce0 /dev/ce:0 - ether - -

The second-to-last field for each of the links is the SAP field, i.e. the Ethernet type used for the LLT link. When specified as "-", it defaults to 0xCAFE. Two LLT links for a cluster on the same physical broadcast domain cannot share the same SAP ID; if they do, you may get the above error messages. Assuming this is your problem (e.g. you run your eri0 and qfe0 links over the same broadcast domain), you can work around it by changing your /etc/llttab file to the following:

set-node servera
set-cluster 1
link eri0 /dev/eri:0 - ether 0xCAFE -
link qfe0 /dev/qfe:0 - ether 0xCAFF -
link-lowpri ce0 /dev/ce:0 - ether - -

This tells LLT to use different SAP types for the two links. The change needs to be made on all cluster members, followed by a restart of either the cluster node or LLT.
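As a quick sanity check, the default SAPs can be spotted mechanically. The script below is only a sketch: it writes a sample llttab (mirroring the default file shown earlier) to a temporary path and counts high-priority links whose SAP field is still "-", i.e. the default 0xCAFE.

```shell
#!/bin/sh
# Count high-priority LLT links still using the default SAP (0xCAFE).
# In an llttab "link" line the SAP is the second-to-last field; "-"
# means the default. More than one default SAP on the same broadcast
# domain can produce the delayed/lost heartbeat messages above.
cat > /tmp/llttab.sample <<'EOF'
set-node servera
set-cluster 1
link eri0 /dev/eri:0 - ether - -
link qfe0 /dev/qfe:0 - ether - -
link-lowpri ce0 /dev/ce:0 - ether - -
EOF

defaults=$(awk '$1 == "link" && $6 == "-"' /tmp/llttab.sample | wc -l)
if [ "$defaults" -gt 1 ]; then
    echo "WARNING: $defaults high-priority links use the default SAP"
fi
```

On a real cluster you would point the awk at /etc/llttab itself; any count above one on a shared broadcast domain means the SAPs need separating as shown above.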

Sun Blade 100 and Registered/buffered memory

By default, the Sun Blade 100 comes with 133MHz sync ECC CL3 unbuffered DIMMs. By setting jumper JP6 (which is next to the memory slots) you can use registered/buffered memory in the Sun Blade 100.

The SunSolve handbook for the Sun Blade 100 shows where jumper JP6 is located.

You can mix registered and unregistered memory on the system board and it still appears to work ok.


Sun Blade 100, power management and ECC errors

The Sun Blade 100 workstations have a problem with their power management circuitry. If power management is enabled within Solaris (or Linux, I guess), you can get uncorrectable ECC memory errors or other random hangs.

To work around this, edit /etc/power.conf and change the autopm line to:
autopm disable

Alternatively, you can just uninstall the power management packages.
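If you have a few of these machines, the edit can be scripted. This is just a sketch against a sample file in /tmp, not the real /etc/power.conf, and the sample contents are illustrative:

```shell
#!/bin/sh
# Rewrite any existing autopm line in a power.conf-style file to
# "autopm disable". A sample file stands in for /etc/power.conf.
cat > /tmp/power.conf <<'EOF'
autopm default
autoshutdown 30 9:00 9:00 noshutdown
EOF

sed 's/^autopm .*/autopm disable/' /tmp/power.conf > /tmp/power.conf.new
grep '^autopm' /tmp/power.conf.new
```

On the real machine you would write the result back to /etc/power.conf and then run pmconfig (or reboot) for the change to take effect.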

SunSolve has document #47042 on this issue. Also, searching for "Sun Blade 100 Alert" reveals a few more titbits about this machine.


Well today I’ve been upgrading a couple of my servers from VMware ESXi 3.5 and ESXi 4.1 to ESXi 5.0. For the most part this went smoothly and without any drama.

The HP DL360 G5 upgrade from ESXi 4.1 to 5.0 went smoothly and the upgrade process maintained all the settings and configuration properly. The hardware health monitors were working before and after the upgrade without the need for any additional fiddling. I used the VMware ESXi 5.0U1 ISO from for this server.

The HP ML110 G5 needed to be reinstalled as it was running ESXi 3.5 and there is no direct upgrade path to 5.0. After recreating the vSwitches and associated VM port groups I was up and running. I used the same image once more and, to my surprise, the hardware health monitoring now shows the RAID status of the SmartArray E200 controller. In the past, when using the HP providers on ML110 G5 hardware, purple screens were common. Now the server seems stable and displays the storage health status. A win for the day!

Note that this server needed a further tweak as the SCSI passthrough of the SCSI attached LTO3 drive stopped working after the installation of ESXi5.0. A bit of Googling revealed that the following would solve this problem:

esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --vendor="HP" --model="Ultrium 3-SCSI"

So the VM could now see the attached tape drive. However, VMware appear to have changed their passthrough or SCSI subsystem since ESXi 3.5, and as a result I've had to reduce my tape block size. In the past I was able to read and write 512kB blocks (tar -b 1024); however, I've had to drop this to 128kB blocks (tar -b 256). If I get some time, I will attempt to work out the exact limit and update this post.
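If I do get around to pinning down the exact limit, a binary search between the known-good and known-bad block sizes is the obvious approach. A sketch follows; the probe function here is a simulation (succeeding up to an arbitrary 192kB), whereas on the real server it would be a tar or dd write test against the tape drive:

```shell
#!/bin/sh
# Bisect for the largest usable tape block size (in kB) between a
# known-good and a known-bad value. "probe" is a stand-in: replace it
# with a real write test, e.g. dd bs=${1}k against the tape device.
probe() {
    [ "$1" -le 192 ]   # simulated hardware limit for this sketch
}

lo=128   # known to work (tar -b 256)
hi=512   # known to fail (tar -b 1024)
while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    if probe "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "largest working block size: ${lo}kB"
```

With a real probe, each iteration writes (and ideally reads back) one block at the candidate size, so the whole search takes only a handful of tape operations.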

For the Dell PE840 upgrade, I used the Dell ESXi 5.0 customised ISO. Again, the upgrade from 4.1 preserved the configuration of the server. To my dismay, the RAID status of the PERC 5/i was now missing. It turns out the Dell ISO is lacking the providers for storage health. Long story short, after some searching I got the health status back. I initially tried the Dell OpenManage VIB ( which didn't appear to change much. The useful info was here on the RebelIT website, which referred to using the VIB from This made sense, as the Dell PERC 5/i is basically an LSI MegaRAID SAS 8480E. I downloaded the VIB ( from Note that the 8480E is not listed as supported by this release, but it works – PHEW! I guess the PERC 5/i is getting long in the tooth now, but given it works like a champ there is no need to upgrade. Note that I had to extract the .zip file and then install the VIB from the server's console as:

esxcli software vib install -v /vmfs/volumes/datastore1/vmware-esx-provider-LSIProvider.vib
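After the install and a reboot, the provider should show up in the host's VIB list. The esxcli output below is simulated (the version and date strings are purely illustrative); on the host itself you would run `esxcli software vib list | grep -i lsi`:

```shell
#!/bin/sh
# Check that the LSI CIM provider appears in the VIB list. The
# variable stands in for real `esxcli software vib list` output.
vib_list="LSIProvider  500.04.V0.24-something  LSI  VMwareAccepted  2012-01-15"

if printf '%s\n' "$vib_list" | grep -qi 'LSIProvider'; then
    echo "provider installed"
fi
```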

So now all three servers have been upgraded to ESXi 5.0 and have full hardware health status available which is being monitored via Nagios. Now the fun begins, upgrading the hardware version and VMware Tools for all the VMs….

Well, I’m still vetting my HTC Sensation to see if it is a worthy replacement for my BlackBerry… Still not convinced as it happens. However, I have resolved one of my problems with it…

IMAP sent/trash/draft folders don't work properly if you have a "typical" IMAP server with a folder prefix of "INBOX.". The IMAP NAMESPACE RFC (RFC 2342) defines how this works and how clients should handle it. As usual, the Google developers figured they knew better than, or simply chose to ignore, the RFC! As a result, the Android mail application doesn't support folder prefixes, let alone automatically configuring itself based on the NAMESPACE command.

If you are lucky and are using an IMAP server which does not have a personal namespace prefix (Dovecot?), you might find the mail app works OK for you.

You can use the Android K9 mail client instead. I briefly tried it and it seemed to work. However, it doesn't feel as "integrated" as the stock client, so I gave up on it. It's not a bad app – it just didn't grab my attention. Luckily, there is a workaround which is partially documented on the Android issues page at (thanks to those who posted there, as it got me working on this workaround): in particular comment 83 and comment 84.

The workaround is to reconfigure the mail application's SQLite database to point at the IMAP-provided folders rather than the default folders, which do not work. It is pretty straightforward if your phone is rooted – I have no idea how you would do this if the phone was not rooted.

The method I used in the end, to get around the SQLite database files being in use, was to boot into ClockworkMod recovery, mount /data, pull the necessary files to the PC with adb, edit them using SQLite and push them back with adb.

Updating the database is the tricky bit. First, the ID of the account to be "fixed" needs to be determined. This account ID is then used to get a list of the corresponding folder IDs from the mailboxs table (yes, that really is how the table is spelt). These IDs and names are then used to update the accounts table, which is the "magic bit".

Here is the code snippet of the steps I followed. If you have a vague idea of technical stuff in this area, you should be able to follow the steps.

** boot into ClockworkMod recovery
** mount /data on the phone
** connect USB cable to phone and PC and select HTC Sync mode
** check device can be seen by adb:

adb devices

** copy the data files from the phone to your PC (you might not have the -shm and -wal files):

adb pull /data/data/
adb pull /data/data/
adb pull /data/data/

** run sqlite on the data files

sqlite3.exe mail.db
sqlite>  pragma wal_checkpoint;            -- to clear the sqlite logfiles...
sqlite>  pragma wal_checkpoint(FULL);      -- to clear the sqlite logfiles...
sqlite> .headers on       -- to make it easier to see what's happening
sqlite> .mode column      -- or .mode list

sqlite> --view the current folders for all accounts
sqlite> select _id, _name, _sentfolder, _sentfoldertext, _sentfolderid from accounts;
sqlite> select _id, _name, _trashfolder, _trashfoldertext, _trashfolderid from accounts;
sqlite> select _id, _name, _draftfolder, _draftfoldertext, _draftfolderid from accounts;

** based on output from above, determine the _id value for the account to modify (1 is used below in example):

sqlite> select * from mailboxs where _account=1;       --where 1 is from _ID above
sqlite> --below command is more succinct and useful...
sqlite> select _id,_undecodename,_decodename,_shortname from mailboxs where _account=1;   --where 1 is _ID above

** the above will give a list of folder IDs and corresponding names. Hopefully, the _undecodename, _decodename and _shortname fields are equal. If not, then I’m not sure which you’d use in the step below – some trial and error might be needed, but I’d start with the _decodename values.

** if you have a large number of IMAP folders and a “typical” IMAP server, the following should list the three needed folders and their associated IDs:

sqlite> select _id,_undecodename,_decodename,_shortname from mailboxs
   ...> where _account=1 and
   ...> _shortname in ("INBOX.Drafts","INBOX.Sent","INBOX.Trash");

** Once you have the folders' _id values, use them in the commands below, changing the _sentfolderid, _trashfolderid and _draftfolderid values to correspond with those shown in the step above. Remember to change the where clause to reflect the correct account _id determined earlier.

sqlite> --update the folder details
sqlite> update accounts set _sentfolder = 'INBOX.Sent', _sentfoldertext = 'INBOX.Sent', _sentfolderid = 23 where _id=1;
sqlite> update accounts set _trashfolder = 'INBOX.Trash', _trashfoldertext = 'INBOX.Trash', _trashfolderid = 9 where _id=1;
sqlite> update accounts set _draftfolder = 'INBOX.Drafts', _draftfoldertext = 'INBOX.Drafts', _draftfolderid = 27 where _id=1;

sqlite> --exit from sqlite
sqlite> .quit

** back at your DOS prompt, delete the files on your phone (or maybe mv them instead for backup purposes – your choice) and push the updated files:

adb shell rm /data/data/
adb shell rm /data/data/
adb shell rm /data/data/
adb push mail.db /data/data/
adb push mail.db-wal /data/data/
adb push mail.db-shm /data/data/

** reboot the phone and hopefully all is working with the correct folders!!

adb reboot


Note: If your phone is rooted and has sqlite3 installed on it, you may be able to issue the above sqlite commands without having to boot into recovery mode. I'm not sure how well the mail app would behave with its configuration changing while it is running, though.

Note2: You do this at your own risk, etc etc. It worked for me. It might work for you – or it might not.


Last night I was upgrading some ESX 3.5 VMs from "flexible" NICs to "VMXNET Enhanced" NICs and ran into a problem with one of the servers. As an aside, doing some rudimentary throughput testing using the iperf tool, I was surprised to see a significant throughput increase and CPU usage decrease when switching from flexible (i.e. VMXNET) to VMXNET Enhanced (i.e. VMXNET2) NICs. Well worth doing – apart from the gotcha I ran into…

The one particular VM which showed a problem runs Quagga (a BGP daemon) to inject routes for a local AS112 server. Once I switched over the NICs, the remote Cisco router started logging the following errors:

  • %TCP-6-BADAUTH: No MD5 digest from
  • %TCP-6-BADAUTH: Invalid MD5 digest from

and would not bring up the BGP peering session. BGP MD5 authentication is enabled between the router and the Quagga daemon. The use of “debug ip tcp transactions” also shows the invalid MD5 signatures.

I initially suspected it was related to an offloaded checksumming issue which I had previously observed ( It turns out it was not incorrect TCP checksums being calculated (as opposed to the incorrect UDP checksums in that earlier post).

Digging a bit deeper I came across a post on the quagga-users mailing list describing a similar problem to the one I was observing. The usage of MD5 checksums “complicates” the process of offloading checksumming to NICs.

The VMXNET Enhanced NIC enables additional offloading from the VM to the NIC to improve performance. This works well for many use cases, but causes problems when using TCP MD5 checksums. In my case, turning off TCP segmentation offload has worked around the problem. Adding commands such as

ethtool -K eth0 tso off
ethtool -K eth0 sg off

to a startup script has worked around the issue to some degree. In an ideal world, the VMXNET driver would allow "tx-checksumming" to be turned off using ethtool as well.
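A startup script can also verify that the offloads really are off before Quagga starts. The ethtool output below is simulated for the sketch; on the real VM you would pipe `ethtool -k eth0` instead of the variable:

```shell
#!/bin/sh
# Parse ethtool -k style output and confirm TSO and scatter-gather
# are disabled. The sample text stands in for `ethtool -k eth0`.
ethtool_output="tcp segmentation offload: off
scatter-gather: off
tx-checksumming: on"

tso=$(printf '%s\n' "$ethtool_output" | awk -F': ' '/tcp segmentation offload/ {print $2}')
sg=$(printf '%s\n' "$ethtool_output" | awk -F': ' '/scatter-gather/ {print $2}')

if [ "$tso" = "off" ] && [ "$sg" = "off" ]; then
    echo "offloads disabled, OK to start bgpd"
fi
```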

In fairness to VMware on this one, this issue appears to not be specific to virtual machines but may in fact be observed on physical hardware with NICs providing offload functions.

Having used the above two ethtool commands to allow the BGP session to be established, I still continue to see the following errors on the Cisco router:

TCP0: bad seg from x.x.x.x -- no MD5 string: port 179 seq
%TCP-6-BADAUTH: No MD5 digest from

%TCP-6-BADAUTH: Invalid MD5 digest from


Interestingly, when using the flexible, and hence original VMXNET, VNIC within the VM (oddly, with the same actual VMXNET driver binary!), tx-checksumming and scatter-gather are enabled for the NIC:

$ ethtool -k eth0
Offload parameters for eth0:
Cannot get device rx csum settings: Operation not supported
rx-checksumming: off
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: off

For the record, all the above is using the latest ESX 3.5 VMware tools build 317866 from VMwareTools-3.5.0-317866.tar.gz.


Doing a tcpdump (from a physical server) of the traffic between the router and the VM reveals that packets which leave the VM with a valid MD5 signature (as per tcpdump from within the VM) arrive on the wire with an invalid MD5 signature (tcpdump -s0 -M md5password port 179). This indicates that VMware ESXi may in fact be altering the packets between the VM and the wire in some way that invalidates the MD5 signature 🙁

For now, some errors with an established session are better than lots of errors and no BGP session. Ideally, there would be no errors logged by the Cisco router at all.


For lab scenarios (or when you are just being cheap) some switches can be convinced to allow non-Cisco SFPs to work using the following IOS commands:

service unsupported-transceiver
no errdisable detect cause gbic-invalid
errdisable recovery cause gbic-invalid

As usual, your mileage may vary.

Well it has been a while since my last blog post. Figured I had better write a quick few lines to let you know I’m still alive!

Firstly, here is a link to a great article by Joe Friel about recovery. Definitely worth a read:

Since my last serious post I have completed a few races – Ironman Swiss 70.3, London Triathlon, and the National Club Relays. All good fun. Managed to PB in Swiss 70.3 and London Triathlon so am pleased with my recent performances.

My training leading up to the Windsor Half-Marathon and the New York Marathon is progressing well. Touch wood, my legs are holding out with the increased running volume. This weekend I have the HSBC Standard Distance Triathlon at Dorney Lake. My main goal is sub-2h30, with the secondary goal of beating my work colleagues who are racing as a relay team.

My swimming is holding steady and doesn’t appear to be improving significantly. I have increased my swim session duration, but have unfortunately reduced my swim frequency. Over the winter I hope to maintain my swimming and hopefully get a bit faster.

My cycling is going well – my last few weekly 10-mile time trials have definitely improved my 30-minute (CP30) power output. Over the winter I am looking at trying not to lose too much power while I rest up.

I will hopefully write up some brief race reports over the coming days.