By Richi Jennings. June 13, 2010.
Got Microsoft Exchange Server in your shop? Well, listen up: last week, at its Tech-Ed IT fest, Microsoft offered some fascinating insights into its internal use of Exchange, notably from a storage and high-availability perspective. Microsoft IT's 'dogfood' projects often hold important lessons for other users of the company's products, and Exchange is no exception.
Here are some highlights of Microsoft IT's new strategy:

- No disk arrays: cheap, direct-attached nearline SAS storage instead of a SAN
- No single-instance store: Exchange 2010 trades disk space for reduced I/O
- No RAID or clustering: redundancy is handled differently (more on that in part two)

Those are some radical departures from typical Exchange Server architecture. Let's break it down...
No disk arrays: Exchange, like all email servers, is limited by storage speed. Exchange 2007 and 2010 each reduced the I/O load required for a given number of simultaneous active users. This has allowed MSIT to move to much less expensive, direct-attached storage: using cheapo nearline SAS drives, rather than a traditional SAN architecture.
The biggest gain in Exchange 2007 was thanks to improvements in the use of disk cache memory. Moving exclusively to 64-bit code allowed Exchange access to far more memory, permitting it to keep more of the message store in core.
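To put that 64-bit shift in perspective, here's a back-of-the-envelope calculation (a rough sketch of the address-space arithmetic, not MSIT's actual cache sizing):

```python
# Rough arithmetic: memory addressable by a 32- vs 64-bit process.
GIB = 2 ** 30

addr_32 = 2 ** 32 // GIB   # a 32-bit process tops out at 4 GiB of address space
addr_64 = 2 ** 64 // GIB   # 64-bit: ~17 billion GiB -- physical RAM, not the
                           # address space, becomes the cache's limit

print(f"32-bit limit: {addr_32} GiB")
print(f"64-bit limit: {addr_64:,} GiB")
```

With only 4 GiB addressable (and in practice far less usable per process), a 32-bit Exchange could cache only a sliver of a large message store; the 64-bit move made the cache limited by how much RAM you buy.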
With Exchange 2010, Microsoft realized that disk space is cheap, but disk performance isn't. This philosophical shift allowed the Exchange team to ditch a message-store feature it has had since the very first version: the single-instance store. Where previous versions tried to keep just one copy on disk of a message sent to multiple recipients, version 2010 simply stores another copy in each mailbox. There's no financial point in trying to save a little disk space if the single-instance strategy causes more I/O load.
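To make the trade-off concrete, here's a toy Python model of the two strategies (the class names and message sizes are mine, purely for illustration): single-instancing keeps one shared blob per message and a pointer per recipient, while the Exchange-2010 approach just writes another copy into each mailbox.

```python
import hashlib

class SingleInstanceStore:
    """Pre-2010 style: one blob per unique message, a pointer per mailbox.
    Saves space, but reads chase shared blobs scattered across the store
    -- extra random I/O."""
    def __init__(self):
        self.blobs = {}       # content hash -> message body (stored once)
        self.mailboxes = {}   # recipient -> list of content hashes

    def deliver(self, body, recipients):
        key = hashlib.sha256(body.encode()).hexdigest()
        self.blobs.setdefault(key, body)              # at most one copy on disk
        for r in recipients:
            self.mailboxes.setdefault(r, []).append(key)

    def bytes_stored(self):
        return sum(len(b) for b in self.blobs.values())

class PerMailboxStore:
    """Exchange-2010 style: simply store another copy per recipient.
    Costs disk space, but each mailbox's data stays together."""
    def __init__(self):
        self.mailboxes = {}   # recipient -> list of message bodies

    def deliver(self, body, recipients):
        for r in recipients:
            self.mailboxes.setdefault(r, []).append(body)

    def bytes_stored(self):
        return sum(len(b) for msgs in self.mailboxes.values() for b in msgs)

# One 1,000-byte message to 50 recipients:
body = "x" * 1000
recipients = [f"user{i}" for i in range(50)]

sis, per = SingleInstanceStore(), PerMailboxStore()
sis.deliver(body, recipients)
per.deliver(body, recipients)
print(sis.bytes_stored(), per.bytes_stored())  # 1000 vs 50000
```

Fifty times the disk consumed for that message, yes; but disk capacity is the cheap resource, and the per-mailbox layout avoids the random seeks that shared blobs impose.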
This reduction in I/O requirements allows MSIT to move to less expensive storage. The switch to nearline DAS at a server hardware refresh will reduce Microsoft's capital costs, versus replacing dedicated disk array hardware. There's probably a decent reduction in electrical power, too.
But if you're going to switch to DAS, why use SAS drives? Why not go the whole hog and switch to cheaper SATA drives? Well, the typical difference between SAS and SATA drives isn't just the interface. Although there's usually some commonality, SATA drives are built to a price, and you can probably assume they'll be less reliable.
Also, SAS drives usually use Tagged Command Queuing (TCQ), as opposed to SATA's Native Command Queuing (NCQ). TCQ copes better with high loads -- i.e., deep I/O queues -- mainly because it can queue more requests simultaneously, and therefore reorder those operations for best performance. NCQ tops out at 32 queued requests, whereas TCQ's limit is far higher.
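A quick way to see why queue depth matters: the sketch below simulates a drive that always services whichever queued request is nearest the head (shortest-seek-first). The workload and LBA values are invented for illustration, but the effect is the point: with a queue depth of 1 the drive seeks back and forth across the platter; with a deeper queue it can reorder requests and slash total head movement.

```python
def total_seek(lbas, depth):
    """Total head movement servicing `lbas` with a command queue of `depth`.

    depth=1 degenerates to FIFO order; a deeper queue lets the drive
    reorder more outstanding requests (shortest-seek-first here).
    """
    pending = list(lbas)   # requests not yet issued to the drive
    queue = []             # commands currently queued on the drive
    head = 0               # current head position (abstract LBA)
    moved = 0
    while pending or queue:
        while pending and len(queue) < depth:   # keep the queue topped up
            queue.append(pending.pop(0))
        nxt = min(queue, key=lambda lba: abs(lba - head))  # nearest request
        queue.remove(nxt)
        moved += abs(nxt - head)
        head = nxt
    return moved

# Requests that alternate between two ends of the disk:
workload = [10, 500, 20, 490, 30, 480, 40, 470]
print("depth 1:", total_seek(workload, 1))   # FIFO: seeks end-to-end
print("depth 4:", total_seek(workload, 4))   # reordered: far less movement
```

Real TCQ/NCQ firmware is cleverer than this (it accounts for rotational position, not just seek distance), but the principle is the same: the more requests the drive can see at once, the better it can schedule them.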
According to MSIT, SAS drives are only 5% more expensive than nearline SATA drives, but they perform 25% better under peak loads.
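Those two numbers are worth combining: if SAS costs 1.05x the nearline-SATA price but delivers 1.25x the peak performance, the cost per unit of peak performance actually drops.

```python
price_ratio = 1.05   # SAS costs 5% more than nearline SATA (per MSIT)
perf_ratio = 1.25    # ...but performs 25% better under peak load

cost_per_perf = price_ratio / perf_ratio
print(f"relative cost per unit of peak performance: {cost_per_perf:.2f}")
# 1.05 / 1.25 = 0.84: SAS is ~16% cheaper per unit of peak throughput
```

On MSIT's figures, the "premium" drives are the bargain once you price them by performance rather than by capacity.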
In the second part of this series, I talk about how MSIT has ditched redundancy: no RAID or clustering.
Would you rely on SAS for your Exchange store? Leave a comment below...
Richi Jennings is an independent analyst/consultant, specializing in blogging, email, and security. A cross-functional IT geek since 1985, you can follow him as @richi on Twitter, pretend to be richij's friend on Facebook, or just use good old email: TLV@richij.com.