This post follows on from my previous post titled “Internet Explorer breaks with TLS1.2 and cert chains containing an MD5 hash”. It turns out that if a website’s certificate chain contains a SHA-512 hash then Internet Explorer fails too. In my previous post, I linked to an article describing the IE error when an MD5 hash is present in the certificate chain. That article contains the information needed to conclude that a SHA-512 signature results in the same error.

Note that this error is presented only if the web server sends a certificate containing an unsupported signature. If a server presents only a certificate signed with an acceptable algorithm (e.g. SHA-256), and no other certificates in the chain (whatever their signature algorithms), then things work. The problem only surfaces when a certificate with an unsupported signature algorithm is presented over the wire. So, you can have a CA’s older cert, which uses an MD5 hash, in your trusted roots and still access a site fine provided the web server administrator does not send the certificate chain. If that chain is then sent over the wire, the website becomes inaccessible. Note that this is only the case over TLS1.2 connections.
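Incidentally, you can see exactly which certificates a server sends over the wire with openssl’s s_client (the hostname here is just an example; the -tls1_2 flag needs OpenSSL 1.0.1 or later):

openssl s_client -connect www.example.com:443 -tls1_2 -showcerts < /dev/null

Each certificate of the presented chain is printed, and the signature algorithm of each can then be inspected with openssl x509 -noout -text.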

Here is a screenshot of a section of the article I linked to showing the relevant information:

[Screenshot “tls12-md5”: the signature algorithms supported by IE over TLS1.2]

We can see the signature algorithms which IE will accept. Notice that SHA512 is missing, even though it is listed in RFC5246 section 7.4.1.4.1 with a code of 6 and is apparently supported by Microsoft (Windows XP and Windows 2003 need KB968730). Given that you can import a root certificate with a SHA512 signature into Internet Explorer, I find it odd that IE will not accept a certificate with a SHA512 signature over the wire.

The frustrating thing with this error is that no real error message is presented, just a broken web page. Internet Explorer really should flag this with an error message that indicates the problem; something as simple as “The certificate chain contains certificates with signature algorithms which we do not support” would do.

My recommendation is thus not to issue certificates to end users with MD5 or SHA512 signatures, and not to set up CAs with a SHA512 signature. For new CA certificates, a SHA256 signature should be considered the minimum, with SHA384 recommended. I would also suggest using a 4096-bit RSA key if possible, although 2048-bit keys should be acceptable for a while yet.


EDIT 2013/06/29: Note that you can apparently enable the use of SHA512 with RSA certificates (and SHA512 with ECDSA certs too) by editing the registry at:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010003\Functions (REG_MULTI_SZ)

Add: RSA/SHA512 (or ECDSA/SHA512) to the list of values.
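A sketch of doing this from an elevated command prompt (back up the key first; note that reg add rewrites the entire multi-string value, so the /d list must include all the existing entries as well as the new one):

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010003" /v Functions

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010003" /v Functions /t REG_MULTI_SZ /d "RSA/SHA256\0RSA/SHA512" /f

The /d value above is illustrative only – paste in the full list returned by the query, plus RSA/SHA512 (or ECDSA/SHA512).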
I wonder why Microsoft decided to exclude them by default??

I came across this recently – quite tough to troubleshoot. If you use IE to connect over TLS1.2 (TLS1.1 and TLS1.2 are not enabled by default in IE) to an SSL website which has a certificate in the chain with an MD5 hash, IE just breaks the connection. This is due to the way schannel.dll behaves over TLS1.2. Re-issuing the chain so that all certificates use a SHA hash gets things working. IE really should handle this more gracefully!

Here is a page which describes the problem quite well, saving me some typing 🙂

Sendmail has done it again – proved just how powerful it is, as long as you know what you’re doing.

While investigating the configuration of the ciphers to be used by Apache (SSLCipherSuite) and the associated SSLHonorCipherOrder option (to ensure the server’s cipher preference order is used), I realised that although I enable TLS on my Sendmail instances, I don’t configure the cipher options. Given I’d spent some time coming up with my preferred cipher order for Apache (unfortunately RC4-SHA is fairly high on the list), I decided I may as well put it in place for the other daemons which perform OpenSSL-based encryption (Sendmail and IMAP, for instance). Given the available Sendmail documentation is light on this subject, I had to go digging.
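For reference, the Apache side of this looks something like the following (the cipher string is just an example taken from my own list, not a recommendation):

SSLCipherSuite HIGH:RC4-SHA:RC4-MD5
SSLHonorCipherOrder on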

The “standard” encryption related options (enabled with the STARTTLS define at compilation time) for Sendmail are pretty well documented and understood:
ServerCertFile
ServerKeyFile
ClientCertFile
ClientKeyFile
CACertFile
CACertPath
DHParameters
TLSSrvOptions
RandFile
CRLFile

There are some other useful options available when the STARTTLS define is combined with the _FFR_TLS_1 define at compile time:
DHParameters512
CipherList
ServerSSLOptions
ClientSSLOptions


Note: You can determine the compile settings used for your version of Sendmail by running:

sendmail -d0.14 -bt < /dev/null


These four options do not appear to be documented properly anywhere – even the Sendmail source code is pretty light on their configuration syntax and use. The CipherList option is mentioned as an available option on a page titled “Tips and Tricks for Sendmail Hackers”, dated 2006-03-31. There are a few other web pages and blog posts which mention or show how to use the CipherList option. No mention is made of the remaining three.

These four options are configured in the LOCAL_CONFIG section of your sendmail.mc file. The following is an example (which may or may not be suitable for you) of such a section:

LOCAL_CONFIG
O CipherList=HIGH:RC4-SHA:RC4-MD5
O ServerSSLOptions=+SSL_OP_NO_SSLv2 +SSL_OP_CIPHER_SERVER_PREFERENCE


The options described:

CipherList : This option configures the available cipher list for encrypted connections. Your cipher list can be tuned by using the openssl ciphers -v command. Stronger ciphers are obviously better. Excluding weak ciphers may mean that very old clients will be unable to connect. Note that with SSLv3 and TLS1.x the client, by default, will select its preferred cipher from the server’s list.
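For example, to preview exactly which ciphers a given list expands to, and in what order:

openssl ciphers -v 'HIGH:RC4-SHA:RC4-MD5'

Each line of the output shows a cipher’s name, protocol version, key exchange, authentication, encryption and MAC algorithms.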

ServerSSLOptions : This option configures the OpenSSL connection flags used for SSL/TLS connections into Sendmail. By default Sendmail, like most other applications using the OpenSSL library, uses the SSL_OP_ALL composite flag for its connections. This option allows those flags to be altered. The value of this parameter manipulates the bits passed to OpenSSL: Sendmail starts with a value of SSL_OP_ALL and this option modifies that value rather than resetting it from scratch. You use [+]SSL_OP_XXX to SET bits and -SSL_OP_XXX to CLEAR bits. Thus a value of +SSL_OP_ALL would have no effect (since those bits are already set), while a value of -SSL_OP_ALL would result in no bits being set.

The first flag to consider using is SSL_OP_CIPHER_SERVER_PREFERENCE, which causes the server, rather than the client, to choose the cipher based on its preference order. The next is SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS, which disables a countermeasure against an SSLv3/TLSv1 protocol vulnerability and is set by default as part of SSL_OP_ALL; thus, if one wishes to have the vulnerability countermeasure enabled, this flag needs to be cleared. Depending on the clients and servers of your Sendmail instance you may also wish to consider SSL_OP_NO_SSLv2, SSL_OP_NO_SSLv3 and SSL_OP_NO_TLSv1. Note that the current version of Sendmail does not support OpenSSL’s SSL_OP_NO_TLSv1_1 or SSL_OP_NO_TLSv1_2; these two could be quite useful and I have submitted a patch to Sendmail for their inclusion. A useful value might be +SSL_OP_NO_SSLv2 +SSL_OP_CIPHER_SERVER_PREFERENCE.

ClientSSLOptions : This option configures the OpenSSL connection flags used for the SSL/TLS connections initiated by Sendmail. The parameter’s value works the same as for ServerSSLOptions.

DHParameters512 : This option is parsed as a valid configuration option, but it does not appear to be used anywhere in the Sendmail source code at all!


As an aside, DHParameters is another odd configuration option. The documentation implies this is a file containing DH parameters. However, the source code in sendmail/tls.c has this to say:

/*
**  valid values for dhparam are (only the first char is checked)
**  none        no parameters: don't use DH
**  512         generate 512 bit parameters (fixed)
**  1024        generate 1024 bit parameters
**  /file/name  read parameters from /file/name
**  default is: 1024 for server, 512 for client (OK? XXX)
*/

So in fact, it is slightly more flexible than the documentation makes out. Note too, that should you wish to use a DH parameter of more than 1024 bits you will need to use an external file.
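For example, to generate a 2048-bit parameter file with OpenSSL and point Sendmail at it (the path is just an example):

openssl dhparam -out /etc/mail/tls/dhparam-2048.pem 2048

and then in the LOCAL_CONFIG section of sendmail.mc:

O DHParameters=/etc/mail/tls/dhparam-2048.pem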

So with all that said, a useful set of parameters might be:
LOCAL_CONFIG
O CipherList=HIGH:RC4-SHA:RC4-MD5
O ServerSSLOptions=+SSL_OP_NO_SSLv2 +SSL_OP_CIPHER_SERVER_PREFERENCE
O ClientSSLOptions=+SSL_OP_NO_SSLv2

So, based on this configuration, we are only using “high” strength ciphers and also two RC4 ciphers. You may want to remove the RC4-MD5 one if you are concerned about MD5’s strength. For maximum compatibility with old clients, you may want to keep it included. We don’t allow SSLv2 and we request that the server (i.e. our Sendmail instance) chooses the mutual cipher.
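You can verify what your Sendmail instance actually negotiates using openssl’s s_client (the hostname is just an example):

openssl s_client -connect mail.example.com:25 -starttls smtp -cipher 'HIGH'

The Cipher line in the output shows which cipher was agreed.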

Hope this helps.

Another useful thing to note is that when _FFR_TLS_1 is used you can specify two certificate and key files for ServerCertFile and ServerKeyFile, with their names separated by a comma (no spaces). This is useful if you have both an RSA and a DSA certificate you wish to use. For example, the configured options within sendmail.cf would be:


O ServerCertFile=/etc/mail/tls/server-rsa.crt,/etc/mail/tls/server-dsa.crt
O ServerKeyFile=/etc/mail/tls/server-rsa.key,/etc/mail/tls/server-dsa.key

Due to constraints within OpenSSL’s SSL_CTX_use_PrivateKey_file and SSL_CTX_use_certificate_file calls, both certificates should use the same certificate chain. More information can be found on OpenSSL’s website.

Just a quick post to remind folks that Microsoft released a hotfix for Windows 2003 back in 2008 (based on the file time-stamps) to add AES cipher suites to the built-in cipher options. This is a step up from the standard RC4 ciphers and offers 256-bit encryption.

This is documented in http://support.microsoft.com/kb/948963.

The added cipher suites are:

  • TLS_RSA_WITH_AES_128_CBC_SHA (OpenSSL name: AES128-SHA)
  • TLS_RSA_WITH_AES_256_CBC_SHA (OpenSSL name: AES256-SHA)

Applications using the schannel.dll for security will be able to use these additional ciphers.
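A quick way to confirm the new suites are on offer from a given server is to force one of them from a client with OpenSSL (the hostname is just an example):

openssl s_client -connect www2003.example.com:443 -cipher AES256-SHA

If the hotfix is installed (and the suite is enabled) the handshake succeeds and the output’s Cipher line reports AES256-SHA.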

Yes, I know Windows 2003 is a bit long in the tooth, but it still has a fairly large installed user base.

Just a quick note to remind folks who upgrade to FTTC Broadband that using an MTU of 1500 bytes (the Ethernet standard size) is possible if your PPPoE router supports RFC4638 and “baby jumbo frames”.

In short – PPPoE needs 8 bytes of each Ethernet frame, resulting in an effective MTU of 1492 for the IP layer between your modem/router and the DSL endpoint at your provider. These 8 bytes can be “reclaimed” if your router has the ability to transmit baby jumbo frames of up to 1508 bytes excluding Ethernet headers. Andrews and Arnold have a good write-up describing this in further detail, along with a specific PPPoE/FTTC page.

If you are using a Cisco router with appropriate hardware you can add something like the following to a working FTTC/PPPoE configuration.

conf t
! request a PPP payload above the 1492 PPPoE default (the RFC4638 tag)
bba-group pppoe global
tag ppp-max-payload minimum 1492 maximum 1500
! raise the MTU on the PPPoE-facing interface and the dialer
int fa 0
mtu 1508
int dialer 0
mtu 1508

The ppp-max-payload minimum and maximum defaults are 1492 and 1500 respectively, so all you actually need to do is configure an MTU of 1508 on the FastEthernet 0 (assuming this is your PPPoE interface) and Dialer 0 interfaces. Note that even though the Fa0 interface is used to connect to the FTTC modem, you do not appear to need to change its configured MTU size (but see UPDATE2 below). You can check that an interface supports frames larger than 1500 by checking the maximum offered by “mtu ?” in config mode.

Once these changes are made, you can use “debug ppp negotiation” to track the negotiated options, including the negotiation of an MRU of 1500 bytes. Once this is working correctly you will not need any fiddling to get pMTUd working, or any messing with client MTUs to avoid the inevitable odd browser hangs.
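As a quick sanity check once the session is up, you can send full-size packets with the DF bit set from the router itself (a sketch; the destination address is just an example and must be reachable and answering ICMP):

ping 8.8.8.8 size 1500 df-bit repeat 5
show interfaces dialer 0 | include MTU

If the pings succeed without fragmentation, you have a true end-to-end 1500-byte MTU.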


UPDATE: Just an FYI, you can check which BT cabinet serves your telephone/postcode along with expected FTTC speed uplift at http://fttc-check.alc.im/ or at http://www.dslchecker.bt.com/

UPDATE2: You do in fact need to set the Fa0 (or whatever your PPPoE interface is) MTU to 1508!


Well – this one stung a little and took a few minutes to come up with a workaround. The apt-cacher included with Ubuntu Precise 12.04 is at version 1.7.3. Unfortunately this version of apt-cacher has a bug when using the “allowed_hosts” parameter in /etc/apt-cacher/apt-cacher.conf to restrict access to IPv4 clients when running on a machine with an IPv6-enabled network stack. There is a Debian bug report at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=659669. This bug is apparently fixed in apt-cacher 1.7.4.

Due to the nature of the dual IPv4/IPv6 stack, the apt-cacher code fails to correctly compare IPv4 addresses in the allowed_hosts access list, resulting in clients receiving HTTP 403 errors when trying to use the cache. One workaround is to use “allowed_hosts = *”, which allows all clients to use the cache, coupled with an iptables rule to restrict access.
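That alternative would look something like this (assuming apt-cacher’s default port of 3142; adjust the allowed networks to suit):

allowed_hosts = *

iptables -A INPUT -p tcp --dport 3142 -s 10.11.12.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3142 -j DROP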

The workaround I am testing, which appears to work, is to use the IPv4-mapped IPv6 address notation for the access list. This form of notation is described here and here. In this notation the IPv4 address 10.1.2.3 is represented as ::ffff:10.1.2.3. We can use slash notation to indicate a subnet mask, so with IPv6 addresses being 128 bits we could represent this example address as ::ffff:10.1.2.3/128. For a standard IPv4 255.255.255.0 mask on this example network, which leaves 8 bits for the host portion, we use “/24” in IPv4 notation and can use “/120” in IPv6 notation. This would be ::ffff:10.1.2.0/120.

So, for example, if we originally wanted an allowed_hosts for apt-cacher of:

allowed_hosts = 10.11.12.0/24, 10.32.0.0/16, 10.128.0.0/15, 10.250.1.1/32

we could replace it with

allowed_hosts = ::ffff:10.11.12.0/120, ::ffff:10.32.0.0/112, ::ffff:10.128.0.0/111, ::ffff:10.250.1.1/128

to work around this bug.

This appears to work with the limited testing I did. Of course, it would be preferable if the Ubuntu apt-cacher package was upgraded to one which actually works on a default Ubuntu 12.04 install 🙂


Well, another tidbit (and maybe I’m slow coming to find this out)… from the VMware vCenter Server 5.1.0b release notes:

vSphere Client. In vSphere 5.1, all new vSphere features are available only through the vSphere Web Client. The traditional vSphere Client will continue to operate, supporting the same feature set as vSphere 5.0, but not exposing any of the new features in vSphere 5.1.

vSphere 5.1 and its subsequent update and patch releases are the last releases to include the traditional vSphere Client. Future major releases of VMware vSphere will include only the vSphere Web Client.

For vSphere 5.1, bug fixes for the traditional vSphere Client are limited to security or critical issues. Critical bugs are deviations from specified product functionality that cause data corruption, data loss, system crash, or significant customer application down time where no workaround is available that can be implemented.

WOAH!

Firstly, none of the new features of 5.1 are available through the current vSphere Client. I’ve not really checked this in detail, or which features it includes, but it surprises me.

Secondly, in the course of studying for the VCP5 exam, the existing web client is described as something of a lightweight tool, not for general use by VMware admins. If the existing Windows vSphere Client is being done away with, VMware will need to do some SERIOUS work to get the future web client up to scratch. Not only that, but they will have the tough task of choosing which web browsers, and which versions thereof, to support.

Thirdly, what about all the third-party plugins? Many of those will need to be updated or rewritten to run on the vCenter Server/vCSA. This is quite a big one, since it will require plugins to change from running on the Windows-based vSphere Client to running on a Linux appliance. Not a totally trivial task, I would wager.

Fourthly, upgrading from 4.x/5.x to 6.x will probably be fraught with challenges, needing switches back and forth between the “old” Windows client and the “new” web client. I expect some serious planning and testing will be needed to ensure that all operational tasks can be completed before, during and after the migrations/upgrades.

Can’t say I’m pleased about this. I can see massive challenges in keeping web browsers working smoothly with such a core and critical application. Even getting relatively simple websites to render equivalently across browsers can be challenging, let alone something as complex as a vSphere administration console.

However, I can see why VMware would want to do this – for their big vCloud push. “One client to rule them all” for the administrators/providers of clouds all the way down to their end customers. Quite a vision, but I wonder if they can pull it off.

There is a risk here that Microsoft’s Hyper-V will gain a foothold when this comes to pass. I imagine that, coupled with the removal of the thick client, will come the removal of the Windows-installable vCenter Server. If this comes to pass then some Microsoft shops are likely to question the introduction of a Linux appliance into their server estate when a Hyper-V and Microsoft-based platform is available (and quite possibly already included in their existing MS licenses).

Customers are fickle and can switch allegiance quite quickly… I hope VMware has considered this and doesn’t shoot itself in the foot!


Well, while studying for my VCP5 I have discovered that the “Solution/Database Interoperability” support matrix for vCenter databases is not quite as straightforward as I would have expected. For instance, MS SQL Server 2008 Enterprise R2 SP2 is only supported with VMware vCenter Server 5.0U2. For MS SQL Server shops, it appears the currently “safe” DB options are MS SQL Server 2008 R2 (no SP) and MS SQL Server 2008 SP2. These appear to have the broadest support – of course, if you are installing vSphere 5.0U1 or U2 (which you probably should be) then you can use R2 SP1. Once more, it pays to check the HCLs carefully. You can determine the version and service pack level of an MS SQL Server using the information in KB321185 from Microsoft.
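For example, per KB321185, the version, service pack level and edition can be queried with sqlcmd (the server name here is hypothetical):

sqlcmd -S VCDB01 -Q "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel'), SERVERPROPERTY('Edition')"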

Also, the matrix has a footnote for some of the MS SQL Server versions stating they are not supported with the vCSA – but not all of the MS SQL Server entries carry this note, implying that some versions of MS SQL Server are supported with the vCSA.

And Oracle support is a bit of a minefield too. Various versions of 10gR2 and 11gR2 are supported with various patch sets. Once again, do your homework carefully!


Well, in short, the GIGABYTE G1.Sniper M3 motherboard does support Intel VT-d, and both ESXi 5.0 and 5.1 can use it. I have tested this with BIOS version f9 and “beta” BIOS versions f10c, f10d and f10e; all show VT-d as an option when a compatible processor is installed. Note that this option is not shown unless a suitable processor (generally an Intel i5 or i7 non-K CPU) is installed. The “VT-d” option is shown below the “Intel Virtualization Technology” option on the “BIOS Features” page of the BIOS setup.

I have had mixed success with actually passing through devices to VMs. Generally, cards in PCI-E slots configured for passthrough worked as expected within a VM during my testing (USB3, NICs, Hauppauge 1700 cards). However, devices on the motherboard (SATA, audio, LAN, video) and PCI-E graphics cards do not work. For the most part these devices pass through, but they then don’t start under Windows, have drivers fail to attach, or cause blue screens when accessed (yes, Mr ATI graphics card with your atikmpag.sys driver BSOD).

Until I actually did these tests I was not sure whether this motherboard supported VT-d / Intel Virtualization Technology for Directed I/O / VMware VMDirectPath I/O. I already had this motherboard in an HTPC with an Intel i3 (dual core with hyper-threading) which, by the way, ran ESXi adequately. I wanted to play with VT-d, so I took a punt on an Intel i7 processor and luckily it worked. If it hadn’t, my backup plan was to procure an ASRock motherboard, most of which seem to have working VT-d support.

I had hoped to run a virtual HTPC with an ATI graphics card passed through on this computer. Unfortunately the virtualisation gods do not seem to be happy with this idea at the moment. Still, this box makes a decent whitebox ESXi host, apart from the onboard Intel 82579V NIC which ESXi does not support out of the box. A custom driver needs to be injected into the ESXi installation ISO, unless you have a supported PCI-E NIC, in which case the driver can be installed post-install.

Note1: While playing with passthrough and various combinations of internal graphics and PCI-E graphics BIOS configurations, I got to the point where I could no longer get graphics from the onboard graphics card. I found a couple of posts on the Internet about this too. Even resetting/clearing the CMOS did not resolve it. As per the other posts, reflashing the BIOS sorted it out. Weird and unexpected behaviour – I could not get the BIOS to save the option to use IGFX (internal graphics) rather than PEG (PCI-E graphics) as the “Init Display First” option.

Note2: The following are the graphics cards I attempted to pass through to the VMs. Note that I tried both VMware ESXi 5.0U2 build 914586 and 5.1 builds 799733 and 914609, with motherboard BIOS f9, f10d and f10e.

Asus ATI Radeon HD 5450 – passed through and seen by VM but has atikmpag.sys BSOD 0x0116 “Attempt to reset the display driver and recover from timeout failed.” whenever I connected the monitor to the graphics card or tried to enable the display on the card.

Asus ATI Radeon HD 6450 – exactly as above.

Asus NVIDIA Geforce GT610 – passed through and seen by the VM. However the device driver is unable to start in Windows.

Note3: While trying to get the graphics cards to work properly I tried various combinations of additional/advanced settings including:

On the VM:

pciHole.start = "1200"
pciHole.end = "2200"
pciPassthru0.maxMSIXvectors = "16"
pciPassthru0.msiEnabled = "FALSE"

On the host, an edit of /etc/vmware/passthru.map, adding:

#ATI Radeon HD
1002 ffff bridge false
#tried with flr/d3d0/link/bridge in the third column

Note4: In ESXi 5.1, ACS checking is enforced more strictly, resulting in quad-port NICs (and apparently other devices) not successfully being configured for passthrough. After each reboot the devices still show as needing a reboot. The console logs show a message similar to:

WARNING: PCI: ssss: nnn:nnn:nn.n: Cannot change ownership to PASSTHRU (non-ACS capable switch in hierarchy)

This can be bypassed (at your own peril) using the host advanced option disableACSCheck=true, set via the vSphere client under Configuration / Software / Advanced Settings / VMkernel / VMkernel.Boot.DisableAcsCheck. More info can be found at this informative post. This option got the quad-port NICs passed through successfully, but made no difference to the ATI or NVIDIA cards.
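For the command-line inclined, something like the following should set the same option from the ESXi shell (a sketch only – confirm the exact setting name on your build with “esxcli system settings kernel list”):

esxcli system settings kernel set -s disableACSCheck -v TRUE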
Note5: Currently ESXi 5.1, up to and including build 914609, does not seem to allow USB controller passthrough via VMDirectPath I/O. You can select the device for passthrough, but once the host is rebooted the device is unselected. I am not sure if this is a bug or a conscious decision by VMware. A cynic like myself might think it is intentional: without the ability to pass through a USB controller there is no way to pass a real keyboard and mouse into a VM, and hence no need to get GPUs working with passthrough. (Hmm – maybe a Bluetooth USB device passed into a VM and then paired with a BT keyboard and mouse? Something for another day.)


In what possible dimension are the following not complex enough passwords for a user time-sheet system?

uQyl'z^Ml@?uG00VQ,"d
Zm$8tnb+lMeYWU`:"45Z

How is anyone meant to remember passwords of this complexity? This is a security oversight as these passwords will get written down somewhere thereby bypassing all the intended security! Oh, and I need to find a new password of this complexity every 28 days!

#securityfail

PS: I won’t be using these passwords anywhere – and neither should you.