Even though Chrome, IE and Firefox all support certificates with a Subject Alternative Name (subjectAltName) extension, it appears that only Firefox correctly uses the “iPAddress” entry type when verifying URLs containing IP addresses. Chrome and IE both warn about an invalid domain name if the IP address from the URL is present in the certificate only as an iPAddress SAN entry.

If the IP address from the URL is also in the certificate as a dNSName then Chrome and IE stop with their warnings.

If the IP address from the URL is only in the certificate as a dNSName then Chrome and IE stop with their warnings, but Firefox warns about an untrusted certificate. Ironically for the user, the error message is “The certificate is only valid for the following names:” followed by the list of entries (including both dNSName and iPAddress fields). A user could hardly be blamed for being confused if they compared the name in the browser URL with the listed IP address and wondered why they were getting a warning.

So, my recommendation, certainly for usability purposes, is to include any IP addresses in the SAN extension as both “iPAddress” and “dNSName” values. This should allow Firefox, IE and Chrome to operate successfully. Of course, the neater option is to use DNS names for your servers…
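One way to implement this recommendation is in the OpenSSL configuration used when generating the certificate request. The section names, hostname and address below are purely illustrative; the key point is that the same IP address appears as both a DNS and an IP entry:

```ini
# Hypothetical openssl.cnf fragment (names and addresses are examples only).
[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = www.example.com
DNS.2 = 192.0.2.10
IP.1  = 192.0.2.10
```

With this in place, Firefox matches the iPAddress entry while Chrome and IE match the dNSName copy of the same address.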

To me, it is pretty clear from RFC 5280 section 4.2.1.6 what the definitively correct interpretation is. Obviously, entering an IP address in the URL means you are connecting to that IP address, so verifying it as an IP address could be considered correct. Interpreting an IP address within the URL as a dNSName is questionable. The dNSName field is defined within RFC 5280 as:

When the subjectAltName extension contains a domain name system
label, the domain name MUST be stored in the dNSName (an IA5String).
The name MUST be in the “preferred name syntax”, as specified by
Section 3.5 of [RFC1034] and as modified by Section 2.1 of [RFC1123].

My interpretation of this excludes textual representations of IP addresses from dNSName values. I guess Chrome and Internet Explorer went for the “easy” option or simply did not read and interpret the RFC correctly. #FAIL!

Note that a bug about this is filed against Chromium, but nothing seems to have been done about it yet…

This post follows on from my previous post titled “Internet Explorer breaks with TLS1.2 and cert chains containing an MD5 hash”. It turns out that IE also fails if a website’s certificate chain contains a SHA-512 hash. In my previous post, I linked to an article describing the IE error when an MD5 hash is present in the certificate chain; that article contains the information needed to conclude that a SHA-512 signature results in the same error.

Note that this error is presented only if the web server sends a certificate containing an unsupported signature over the wire. If a server presents just a certificate signed with an acceptable algorithm (e.g. SHA-256), and sends no other certificates in the chain (whether their signature algorithms are acceptable or not), then things work. So you can have a CA’s older MD5-signed certificate in your trusted roots and still access a site fine, provided the web server administrator does not send the certificate chain. If that chain is then sent over the wire, the website becomes inaccessible. Note that this is only the case over TLS1.2 connections.

Here is a screenshot of a section of the article I linked to showing the relevant information:


We can see the signature algorithms which IE will accept. Notice that SHA512 is missing, although it is listed within RFC 5246 section 7.4.1.4.1 with a code of 6 and is apparently supported by Microsoft (KB968730 is needed on Windows XP or Windows 2003). Given that you can import a root certificate with a SHA512 signature into Internet Explorer, I find it odd that IE will not accept a certificate with a SHA512 signature over the wire.

The frustrating thing with this error is that no real error message is presented – just a broken web page. Internet Explorer really should highlight this with an error message that indicates the problem – something as simple as “The certificate chain contains certificates with signature algorithms which we do not support”.

My recommendation is thus not to issue certificates to end users with MD5 or SHA512 signatures, and not to set up CAs with a SHA512 signature. For new CA certificates a SHA256 signature should be considered the minimum, with SHA384 recommended. I would also suggest using a 4096-bit RSA key if possible, although 2048-bit keys should be acceptable for a while yet.


EDIT: 2013/06/29 – Note that you can apparently enable the use of SHA512 with RSA certificates (and SHA512 with ECDSA certs too) by editing the registry at:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010003\Functions (REG_MULTI_SZ)

Add RSA/SHA512 (or ECDSA/SHA512) to the list of values.
I wonder why Microsoft decided to exclude them by default??

I came across this recently – quite tough to troubleshoot. If you use IE to connect with TLS1.2 (TLS1.1 and TLS1.0 are not enabled by default in IE) to an SSL website which has a certificate in the chain with an MD5 hash, IE just breaks the connection. This is due to the way schannel.dll behaves over TLS1.2. Re-issuing the chain so that its certificates use a SHA hash gets things working. IE really should handle this more gracefully!

Here is a page which describes the problem quite well, saving me some typing 🙂

Sendmail has done it again – proved just how powerful it is, as long as you know what you’re doing.

While investigating the configuration of the ciphers to be used by Apache (SSLCipherSuite) and the associated SSLHonorCipherOrder option (to ensure the server’s cipher preference order is used), I realised that although I enable TLS on my Sendmail instances I don’t configure the cipher options. Given I’d spent some time coming up with my preferred cipher order for Apache (unfortunately RC4-SHA is fairly high on the list), I decided I may as well put it in place for the other daemons which perform OpenSSL-based encryption (Sendmail and IMAP, for instance). Since the available Sendmail documentation is light on this subject, I had to go digging.

The “standard” encryption related options (enabled with the STARTTLS define at compilation time) for Sendmail are pretty well documented and understood:

There are some other useful options available when the STARTTLS define is combined with the _FFR_TLS_1 define at compile time.


Note: You can determine the compile settings used for your version of Sendmail by running:

sendmail -d0.14 -bt < /dev/null


These four options do not appear to be documented properly anywhere – even the Sendmail source code is pretty light on their configuration syntax and use. The CipherList option is mentioned as an available option on a page titled “Tips and Tricks for Sendmail Hackers”, dated 2006-03-31, and a few other web pages and blog posts mention it or show how to use it. No mention is made of the remaining three.

These four options are configured in the LOCAL_CONFIG section of your sendmail.mc file. The following is an example (which may or may not be suitable for you) of such a section:

O CipherList=HIGH:RC4-SHA:RC4-MD5


The options described:

CipherList : This option configures the available cipher list for encrypted connections. Your cipher list can be tuned by using the openssl ciphers -v command. Stronger ciphers are obviously better. Excluding weak ciphers may mean that very old clients will be unable to connect. Note that with SSLv3 and TLS1.x the client, by default, will select its preferred cipher from the server’s list.
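As an illustration (using Python’s ssl module, which is also OpenSSL-backed, rather than Sendmail itself), you can see which cipher suites an OpenSSL-style string such as HIGH:RC4-SHA:RC4-MD5 actually expands to on your system. On modern OpenSSL builds the RC4 suites have been removed, so only the HIGH set survives:

```python
import ssl

# Expand an OpenSSL cipher string and list the resulting suites.
# Cipher names the local OpenSSL build no longer ships (e.g. RC4-SHA)
# simply match nothing, which is why this still succeeds on modern systems.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("HIGH:RC4-SHA:RC4-MD5")
names = [c["name"] for c in ctx.get_ciphers()]
assert names, "at least the HIGH-strength ciphers should match"
```

The `openssl ciphers -v 'HIGH:RC4-SHA:RC4-MD5'` command shows the same expansion from the shell.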

ServerSSLOptions : This option configures the OpenSSL connection flags used for SSL/TLS connections into Sendmail. By default Sendmail, like most other applications using the OpenSSL library, uses the SSL_OP_ALL composite flag for its connections; this option allows those flags to be altered. The first flag to consider is SSL_OP_CIPHER_SERVER_PREFERENCE, which causes the server, rather than the client, to choose the cipher based on its preference order. The next is SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS, which disables a countermeasure against an SSLv3/TLSv1 protocol vulnerability; since it is set by default as part of SSL_OP_ALL, you need to clear this flag if you wish to have the countermeasure enabled. Depending on the clients and servers of your Sendmail instance you may also wish to consider SSL_OP_NO_SSLv2, SSL_OP_NO_SSLv3 and SSL_OP_NO_TLSv1. Note that the current version of Sendmail has no support for OpenSSL’s SSL_OP_NO_TLSv1_1 or SSL_OP_NO_TLSv1_2; these two could be quite useful and I have submitted a patch to Sendmail for them to be included. The value of this parameter manipulates the bits passed to OpenSSL. Sendmail starts with a value of SSL_OP_ALL and this option modifies that value – it does not reset it from scratch. You manipulate the value using [+]SSL_OP_XXX to SET bits and -SSL_OP_XXX to CLEAR bits. Thus a value of +SSL_OP_ALL would have no effect (since those bits are already set), while a value of -SSL_OP_ALL would result in no bits being set. A useful value might be +SSL_OP_NO_SSLv2 +SSL_OP_CIPHER_SERVER_PREFERENCE.
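The set/clear semantics can be sketched in a few lines of Python. The flag values below are placeholders in the style of OpenSSL’s ssl.h (real values differ between OpenSSL releases), and the parser is a hypothetical re-implementation of the syntax described above, not Sendmail’s actual code:

```python
# Placeholder flag values mirroring OpenSSL's ssl.h (illustrative only).
SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS = 0x00000800
SSL_OP_CIPHER_SERVER_PREFERENCE = 0x00400000
SSL_OP_NO_SSLv2 = 0x01000000
SSL_OP_NO_SSLv3 = 0x02000000
SSL_OP_ALL = 0x80000BFF  # composite flag; includes DONT_INSERT_EMPTY_FRAGMENTS

FLAGS = {n: v for n, v in list(globals().items()) if n.startswith("SSL_OP_")}

def apply_options(spec, base=SSL_OP_ALL):
    """Start from SSL_OP_ALL; [+]NAME sets the named bits, -NAME clears them."""
    value = base
    for token in spec.split():
        if token.startswith("-"):
            value &= ~FLAGS[token[1:]]
        else:
            value |= FLAGS[token.lstrip("+")]
    return value

opts = apply_options("+SSL_OP_NO_SSLv2 +SSL_OP_CIPHER_SERVER_PREFERENCE "
                     "-SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS")
assert opts & SSL_OP_CIPHER_SERVER_PREFERENCE           # bit was set
assert not (opts & SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS)  # bit was cleared
```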

ClientSSLOptions : This option configures the OpenSSL connection flags used for the SSL/TLS connections initiated by Sendmail. The parameter’s value works the same as for ServerSSLOptions.

DHParameters512 : This option does not appear to actually be used. It is a valid configuration option and will be parsed, but the value appears to go unused by the rest of the Sendmail source code!


As an aside, DHParameters is another odd configuration option. The documentation implies this is a file containing DH parameters. However, the source code in sendmail/tls.c has this to say:

** valid values for dhparam are (only the first char is checked)
** none no parameters: don't use DH
** 512 generate 512 bit parameters (fixed)
** 1024 generate 1024 bit parameters
** /file/name read parameters from /file/name
** default is: 1024 for server, 512 for client (OK? XXX)

So in fact, it is slightly more flexible than the documentation makes out. Note too, that should you wish to use a DH parameter of more than 1024 bits you will need to use an external file.
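Putting that together, a larger DH parameter file can be generated with OpenSSL and referenced from the LOCAL_CONFIG section – the 2048-bit size and file path here are illustrative choices, not anything mandated by Sendmail:

```text
# Generate the parameter file once (this can take a while):
#   openssl dhparam -out /etc/mail/tls/dhparams.pem 2048
# Then reference it in the LOCAL_CONFIG section of sendmail.mc:
O DHParameters=/etc/mail/tls/dhparams.pem
```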

So with all that said, a useful set of parameters might be:
O CipherList=HIGH:RC4-SHA:RC4-MD5
O ServerSSLOptions=+SSL_OP_CIPHER_SERVER_PREFERENCE +SSL_OP_NO_SSLv2
O ClientSSLOptions=+SSL_OP_NO_SSLv2

So, based on this configuration, we are only using “high” strength ciphers plus two RC4 ciphers. You may want to remove the RC4-MD5 one if you are concerned about MD5’s strength, or keep it for maximum compatibility with old clients. We don’t allow SSLv2 and we request that the server (i.e. our Sendmail instance) chooses the mutual cipher.

Hope this helps.

Another useful thing to note is that when _FFR_TLS_1 is used you can specify two certificate and key files for ServerCertFile and ServerKeyFile, with their names separated by a single comma (no spaces). This is useful if you have both an RSA and a DSA certificate you wish to use. For example, the configured options within sendmail.cf would be:

O ServerCertFile=/etc/mail/tls/server-rsa.crt,/etc/mail/tls/server-dsa.crt
O ServerKeyFile=/etc/mail/tls/server-rsa.key,/etc/mail/tls/server-dsa.key

Due to constraints within OpenSSL’s SSL_CTX_use_PrivateKey_file and SSL_CTX_use_certificate_file calls, both certificates should use the same certificate chain. More information can be found on OpenSSL’s website.

Just a quick post to remind folks that Microsoft released a hotfix back in 2008 (based on the file time-stamps) to add AES ciphers to Windows 2003’s built-in cipher options. This is a step up from the standard RC4 ciphers and offers up to 256-bit encryption.

This is documented in http://support.microsoft.com/kb/948963.

The added ciphers are:

  • AES128-SHA
  • AES256-SHA

Applications using the schannel.dll for security will be able to use these additional ciphers.

Yes, I know Windows 2003 is a bit long in the tooth, but it still has a fairly large installed user base.

Just a quick note to remind folks who upgrade to FTTC Broadband that using an MTU of 1500 bytes (Ethernet standard size) is possible if your PPPoE router supports RFC4638 and supports “baby jumbo frames”.

In short – PPPoE needs 8 bytes of each Ethernet frame, resulting in an effective MTU of 1492 for the IP layer between your modem/router and the DSL endpoint at your provider. These 8 bytes can be “reclaimed” if your router can transmit baby jumbo frames of up to 1508 bytes excluding Ethernet headers. Andrews & Arnold have a good write-up available describing this in further detail, along with a specific PPPoE/FTTC page.
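The frame arithmetic is simple enough to sanity-check:

```python
# RFC 4638 "baby jumbo frame" arithmetic for PPPoE.
IP_MTU_TARGET = 1500        # what we want the IP layer to see
PPPOE_HEADER = 6            # PPPoE header bytes
PPP_PROTOCOL_FIELD = 2      # PPP protocol ID bytes
PPPOE_OVERHEAD = PPPOE_HEADER + PPP_PROTOCOL_FIELD  # the 8 bytes above

# Without RFC 4638, the IP layer loses the overhead from a 1500-byte frame:
assert IP_MTU_TARGET - PPPOE_OVERHEAD == 1492

# With RFC 4638, the Ethernet payload grows instead, restoring a 1500-byte IP MTU:
assert IP_MTU_TARGET + PPPOE_OVERHEAD == 1508
```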

If you are using a Cisco router with appropriate hardware you can add something like the following to a working FTTC/PPPoE configuration.

conf t
bba-group pppoe global
tag ppp-max-payload minimum 1492 maximum 1500
int fa 0
mtu 1508
int dialer 0
mtu 1508

The ppp-max-payload min and max defaults are 1492 and 1500 respectively, so all you actually need to do is configure the MTU to 1508 on the FastEthernet 0 (assuming this is your PPPoE interface) and dialer 0 interfaces. Note that even though the Fa0 interface is used to connect to the FTTC modem, you do not appear to need to change its configured MTU size. You can check that it can support frames larger than 1500 by checking the maximum using “mtu ?” in the config mode.

Once these changes are made, you can run “debug ppp negotiation” to track the negotiated options, including the negotiation of an MRU of 1500 bytes. Once this is working correctly you will not need any fiddling to get pMTUd working, or any messing with client MTUs, to avoid the inevitable odd browser hangs.


UPDATE: Just an FYI, you can check which BT cabinet serves your telephone/postcode along with expected FTTC speed uplift at http://fttc-check.alc.im/ or at http://www.dslchecker.bt.com/

UPDATE2: You do in fact need to set the Fa0 (or what ever your PPPoE interface is) MTU to 1508!



Well, this is going to be a bit of a ramble as opposed to an articulate article. It comes from my frustration with Android’s bugs and Google’s seeming unwillingness to respond. If one takes a look at


one can see a number of long lingering bugs which have not been addressed yet. Sure, loads of bugs have been fixed and Android continues to improve. I’ll come back to “improve” below. My gripe is with the bugs, which have been around for a while, that break functionality or standards.

For instance – proper handling of IMAP folders, IMAP drafts, IMAP IDLE support, Exchange sync breaking if you delete e-mail messages while disconnected from your Exchange server, and broken IPsec L2TP PSK VPNs. These are bugs in features that are needed for mainstream business use of a smartphone. Sure, not all consumers will run into all these bugs, but with the recent #PRISM leaks and continuing concerns around privacy I’m sure more people will start using “their own” (for some definition of own, as opposed to BIG providers such as Gmail/Hotmail/etc) IMAP servers and IPsec VPNs.

Business users will typically use either Exchange or IMAP (OK, there are still some Lotus Notes users too – ever seen that mentioned on a stock Android device??), coupled with VPNs. Both of these key features are crippled or lacking in one or more ways. Additionally, Android’s handling of recurring calendar appointments is poor to lacking, resulting in potential frustration for business users. I guess having “issues” in these areas while offering a working Gmail app plays right into Google’s plan of owning everyone’s data and obviously mining said data for ad revenue!!!

Android is no longer a “new” operating system and has been through various updates along the way. Simple things like working, reliable e-mail should “just be there”. One should not need to install app (K-9 Mail, for instance) after app (OpenVPN) to get working functionality that is supposedly built in to the OS. As Android ages it appears to be more of a way to make money for Google than a concerted effort at a proper phone OS.

  • “improve” – yes, new releases of Android come out every year or two with new whizzy-flashy graphics to dazzle the eyes. But come on – let’s get the basic underlying functionality fixed before rolling out even more buggy code!


Well – this one stung a little and took a few minutes to work around. The apt-cacher included with Ubuntu Precise 12.04 is at version 1.7.3. Unfortunately this version of apt-cacher has a bug when using the “allowed_hosts” parameter in /etc/apt-cacher/apt-cacher.conf to restrict access to IPv4 clients when running on a machine with an IPv6-enabled network stack. There is a Debian bug report at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=659669. This bug is fixed, apparently, in apt-cacher 1.7.4.

Due to the nature of the dual IPv4/IPv6 stack, the apt-cacher code fails to correctly compare IPv4 addresses in the allowed_hosts access list, resulting in clients receiving HTTP 403 errors when trying to use the cache. One workaround is to use “allowed_hosts = *”, which allows all clients to use the cache, coupled with an iptables rule to restrict access.
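If you go the firewall route, a pair of rules along these lines would restore the restriction – the 192.168.0.0/24 subnet is illustrative, and 3142 is apt-cacher’s commonly used listening port (adjust both to your own setup):

```text
iptables -A INPUT -p tcp --dport 3142 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3142 -j DROP
```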

The workaround I am testing, which appears to work, is to use IPv4-mapped IPv6 address notation for the access list. This form of notation is described here and here. In this notation an IPv4 address is written as ::ffff: followed by the dotted-quad address, and we can use slash notation to indicate a subnet mask. With IPv6 addresses being 128 bits, a standard IPv4 network mask with 8 bits for the host portion – a “/24” in IPv4 notation – becomes a “/120” in IPv6 notation.
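Python’s ipaddress module understands this mapped form, which makes it easy to check the arithmetic. The 192.168.0.0/24 network here is a hypothetical stand-in, since the post’s original example addresses are not shown:

```python
import ipaddress

# An IPv4 /24 expressed as an IPv4-mapped IPv6 /120 (128 - 8 host bits).
v6_net = ipaddress.ip_network("::ffff:192.168.0.0/120")

# A client address in the same mapped notation.
addr = ipaddress.ip_address("::ffff:192.168.0.42")

assert addr in v6_net                                            # inside the /120
assert addr.ipv4_mapped == ipaddress.ip_address("192.168.0.42")  # maps back to IPv4
```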

So, for example, if we originally wanted an allowed_hosts for apt-cacher of:

allowed_hosts =,,,

we could replace it with

allowed_hosts = ::ffff:, ::ffff:, ::ffff:, ::ffff:

to work around this bug.

This appears to work with the limited testing I did. Of course, it would be preferable if the Ubuntu apt-cacher package was upgraded to one which actually works on a default Ubuntu 12.04 install 🙂


Well, another tidbit (and maybe I’m slow coming to find this out)… from the VMware vCenter Server 5.1.0b release notes:

vSphere Client. In vSphere 5.1, all new vSphere features are available only through the vSphere Web Client. The traditional vSphere Client will continue to operate, supporting the same feature set as vSphere 5.0, but not exposing any of the new features in vSphere 5.1.

vSphere 5.1 and its subsequent update and patch releases are the last releases to include the traditional vSphere Client. Future major releases of VMware vSphere will include only the vSphere Web Client.

For vSphere 5.1, bug fixes for the traditional vSphere Client are limited to security or critical issues. Critical bugs are deviations from specified product functionality that cause data corruption, data loss, system crash, or significant customer application down time where no workaround is available that can be implemented.


Firstly, none of the new features of 5.1 are available through the current vSphere Client. I’ve not really checked this in detail, or which features it includes, but it surprises me.

Secondly, in the course of studying for the VCP5 exam, the existing web client is described as something of a lightweight tool, not for general use by VMware admins. If the existing Windows vSphere Client is being done away with, VMware will need to do some SERIOUS work to get the future web client up to scratch. Not only that, but they will have the tough task of choosing which web browsers, and which versions thereof, to support.

Thirdly, what about all the third-party plugins? Many of those will need to be updated and rewritten to run on the vCenter server/vCSA. This is quite a big one, since it will require plugins to be changed from running on the Windows-based vSphere Client to running on a Linux appliance. Not a totally trivial task, I would wager.

Fourthly, upgrading from 4.x/5.x to 6.x will probably be fraught with challenges in needing to switch back and forth between the “old” Windows client and the “new” web client. I expect some serious planning and testing will be needed to ensure that all operational tasks can be completed before, during and after the migrations/upgrades.

Can’t say I’m pleased about this. I can see massive challenges in keeping web browsers working smoothly with such a core and critical application. Even getting relatively simple websites to render equivalently across browsers can be challenging, let alone something as complex as a vSphere administration console.

However, I can see why VMware would want to do this – for their big vCloud push. “One client to rule them all” for the administrators/providers of clouds all the way down to their end customers. Quite a vision, but I wonder if they can pull it off.

There is a risk here that Microsoft’s Hyper-V will gain a foothold when this comes to pass. I imagine that, coupled with the removal of the thick client, will be the removal of the Windows-installable vCenter Server. If so, some Microsoft shops are likely to question the introduction of a Linux appliance into their server estate when a Hyper-V and Microsoft-based platform is available (and quite possibly already included in their existing MS licenses).

Customers are fickle and can switch allegiance quite quickly… I hope VMware has considered this and doesn’t shoot itself in the foot!