I recently needed to update the firmware on an HP LSI22320-HP U320 SCSI card. The HP firmware was producing Domain Validation errors (Domain Validation is a SCSI technique for determining the appropriate/supported SCSI speed for each SCSI device), and I wanted a more up-to-date version. I turned to the Broadcom LSI22320SE driver site and found an updated firmware download, in a file named Fusion-MPT_IT_FW10334_BIOS_50703pt_FLASH_10304.zip.

I created a USB boot disk with FreeDOS using Rufus (a great tool I only just found). I copied the relevant files (dos4gw.exe, flsh1030.exe, mptps.rom and it_1030.fw) to the USB stick. I booted up and tried to update. I managed to apply the firmware (it_1030.fw) but not the BIOS (mptps.rom). The error displayed was:

 Error: Attempting to download a generic BIOS image to
 a adapter meant for a customer specific BIOS image!

As this is an HP card I was a little disappointed but not surprised. Looking at the help output from “flsh1030.exe -?” I saw no obvious way to override this. Many Google searches later I came across a post on a Russian forum in Google’s cache that gave the hint I needed – the undocumented “-g” flag. Rerunning the update with the following command flashed the BIOS onto the card.

flsh1030 -a -g -b mptps.rom

The new flash gave a new set of menus. More importantly the domain validation errors preventing the drives from working were gone.

I was doing a clear-out of some paperwork and came across these write performance tests, which I did a few years back on an HP DL380 G5 with a P400 RAID controller.

  • 4 drive RAID10 = 144MB/s (~72MB/s per drive)
  • 4 drive RAID0 = 293MB/s (~73MB/s per drive)
  • 8 drive RAID0 = 518MB/s (~64MB/s per drive)
  • 8 drive RAID5 = 266MB/s (~38MB/s per drive)
  • 8 drive RAID6 = 165MB/s (~28MB/s per drive)
  • 8 drive RAID10 = 289MB/s (~72MB/s per drive)

The “per drive” figure is the data being written per active data drive, excluding any RAID overheads (mirrors and parity).
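As a quick sanity check, those per-drive figures can be recomputed from the array totals. This is a minimal sketch assuming the usual data-drive counts: RAID10 halves the drive count for mirroring, and RAID5/RAID6 lose one/two drives to parity.

```shell
# Recompute "per drive" throughput: array total divided by the number
# of drives carrying unique data.
awk 'BEGIN {
  printf "4-drive RAID10: %d MB/s per drive\n", int(144/2 + 0.5);
  printf "8-drive RAID10: %d MB/s per drive\n", int(289/4 + 0.5);
  printf "8-drive RAID5:  %d MB/s per drive\n", int(266/7 + 0.5);
  printf "8-drive RAID6:  %d MB/s per drive\n", int(165/6 + 0.5);
}'
```

These come out at 72, 72, 38 and 28 MB/s, matching the bracketed figures above.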

I inferred from the above that the P400 RAID controller maxes out at around 518MB/s, as it is unable to saturate 8 drives in a RAID0 array (64MB/s vs 72MB/s per drive). I'm not sure whether this is a controller throughput or a PCIe bus limitation.

Either way, this testing was (I think) done on Ubuntu 10.x or 12.x with pretty bog-standard settings, using a simple command such as:

dd if=/dev/zero of=/dev/cciss/c0d0 bs=1024k
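If you want a bounded, repeatable version of that test, something like the following (scratch-file path is hypothetical) writes a fixed 256MiB and forces a flush before dd reports, so the figure reflects the storage rather than the page cache:

```shell
# Write 256MiB of zeros to a scratch file; conv=fdatasync makes dd
# flush to disk before printing its throughput figure.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1024k count=256 conv=fdatasync
```

Remember to delete /tmp/ddtest.bin afterwards. Writing to the raw device, as in the command above, destroys whatever is on it, so only do that on a scratch array.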

I thought I’d capture these figures here since I have nowhere else to save them.

As I’m sure most of the active VMware users and enthusiasts are aware, vSphere 5.5 was released to the masses last weekend. I eagerly downloaded a copy and have installed it on a lab machine. I’ve not played with the full suite yet – just the ESXi 5.5 hypervisor.

The install went smoothly on the HP DL360 G5 I was using. Unfortunately, the server only has 32GB RAM so I cannot verify for myself that the 32GB limit for the “free” hypervisor has been removed. I can confirm that under the “licensed features” heading the “Up to 8-way virtual SMP” entry is still there, but the “Up to 32 GB of memory” entry is gone (when using a “freebie” license key). So that looks good 🙂 As I said, I’ve only installed the hypervisor, not the entire suite, so I am currently managing it with the Windows client. Don’t do what I did and upgrade a VM’s hardware version – you won’t be able to manage it via the Windows client, which does not support the latest features (including newer VM hardware versions).

Anyway, one of the first things I check when I install ESXi onto a machine is that the hardware status is correctly reported under the Configuration tab. Disks go bad, PSUs fail or get unplugged and fans stop spinning so I like to ensure that ESXi is reporting the server hardware health correctly. To my dismay I found that the disk health was not being reported for the P400i attached storage, after installing from the HP OEM customised ESXi 5.5 ISO. Now this is not entirely unexpected, as the HP G5 servers are not supported with ESXi 5.5. Drat!

By following the VMware Twitterati, I’ve learnt that various ESXi 5.0 and 5.1 drivers have been successfully used on ESXi 5.5 (specifically for Realtek network cards, whose drivers have been dropped from ESXi 5.5). So I figured I’d try using the ESXi 5.0/5.1 HP providers on this ESXi 5.5 install.

I downloaded “hp-esxi5.0uX-bundle-1.4-16.zip” from HP’s website, linked from the “HP ESXi Offline Bundle for VMware ESXi 5.x” page at http://h18000.www1.hp.com/products/servers/software/vmware-esxi/offline_bundle.html.

This ZIP file contains a few .vib files intended for VMware ESXi 5.0 or 5.1. The VIB we are looking for is called “hp-smx-provider-500.03.02.00.23-434156.vib”. Extract this .vib and upload it to your favorite datastore. Now enable the ESXi shell (or SSH) and connect to the ESXi host’s console. Use the following command:


esxcli software vib install -v file:///vmfs/volumes/datastore1/hp-smx-provider-500.03.02.00.23-434156.vib

and reboot the host. You should now see this software component listed under Software Components within the Health Status section, along with the health of the P400i and its associated storage. So far so good. However, on my server the HP P400i controller was showing a yellow “Warning”. Hmm. Not sure why.
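Before digging into a warning like that, it’s worth confirming the VIB actually installed. From the same ESXi shell (this only runs on the host itself; output format may vary by build):

```shell
# List installed VIBs and filter for the HP SMX provider added above.
esxcli software vib list | grep hp-smx-provider
```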

I figured there might be an incompatibility between these older HP agents and the newer versions from the HP OEM CD, so I decided to reinstall ESXi from the plain VMware ESXi 5.5 ISO.

A fresh install results in fan status, temperature readings and power supply status being reported and, as expected, no P400i storage health.

So, let’s install “hp-esxi5.0uX-bundle-1.4.5-3.zip”. Yes, it’s a newer version than the one I used above – only because I found it after I’d reinstalled vanilla ESXi.


esxcli software vib install -d file:///vmfs/volumes/datastore/hp/hp-esxi5.0uX-bundle-1.4.5-3.zip
reboot

Hey presto! Green health status. I pulled a drive from a RAID array and the status indicated the failure and then the subsequent rebuild. Certainly seems to be a workable solution to extend the life of these perfectly serviceable lab machines 🙂

I would expect this status monitoring to work for P800 controllers too.

One can also install hp-HPUtil-esxi5.0-bundle-1.5-31.zip to get access to some HP utilities at the ESXi command line.
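Presumably that bundle installs the same way as the earlier one; a sketch (the datastore path is hypothetical, and the command mirrors the install step above):

```shell
# Install the HP utilities offline bundle, then reboot the host.
esxcli software vib install -d file:///vmfs/volumes/datastore1/hp-HPUtil-esxi5.0-bundle-1.5-31.zip
```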


I recently encountered a problem while upgrading the iLO2 firmware on an HP DL360 G5 server. It seems that if you reopen the remote console window while remotely updating the iLO2 firmware, you can corrupt the firmware upgrade. Hmm. This caught me out. When I rebooted my server it sat there for a couple of minutes, not beeping or doing anything. I thought I had toasted it. I attempted the ROM recovery process (flipping DIP switches on the motherboard, btw) but this didn’t help. Removing the RAM caused the server to beep, so it was obviously not totally dead. During one of my “boot, nothing, Google, retry” cycles I left the server running. After about 10 minutes the server beeped and continued booting up – BUT the POST results showed no iLO2. So I figured it was an iLO firmware problem (until then I had thought it was a ROM problem), which changed my Google search terms. I eventually found HP document c01850906.

The document outlines a process for recovering from a corrupt iLO2 firmware update.

The document describes it better, but the steps are pretty much the following:

Boot off the maintenance CD (have patience, the boot might take 10 or so minutes) and run the firmware update. Then drop to a shell:

ctrl-alt-d-b-x
alt-ctrl-1  enter

and run:

cd /mnt/bootdevice/compaq/swpackages
rmmod hpilo
rmmod ilo
sh CP012108.scexe --direct

You should see a message about firmware 1.81 and programming the flash; then:

reboot

The full process is described in HP document c01850906:

 http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c01850906