Server Guide Part 1: Introduction to the Server World
by Johan De Gelas on August 17, 2006 1:45 PM EST - Posted in IT Computing
Lower acquisition costs?
In theory, the purchase price of a blade server setup should be lower than that of an equivalent number of rack servers, thanks to the reduction in duplicate components (DVD drives, power supplies, etc.) and the savings on KVM cabling, Ethernet cabling, and so on. However, there is a complete lack of standardization between IBM, HP and Dell; each has a proprietary blade architecture. With no standards, it is very hard for other players to enter this market, allowing the big players to charge a hefty premium for their blade servers.
Sure, many studies - mostly sponsored by one of the big players - show considerable savings, but they compare a fully populated blade chassis with the most expensive rack servers of the same vendor. If the blade chassis only gets populated gradually over time, and you consider that competition in the rack server market is much more aggressive, you get a different picture. Most blade server offerings are considerably more expensive than their rack server alternatives. A blade chassis easily costs between $4,000 and $8,000, and the blades themselves are hardly (if at all) less expensive than their 1U counterparts.
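To make the "gradually populated chassis" point concrete, consider how the chassis price gets amortized over the blades actually installed. The short Python sketch below illustrates the idea; all prices are hypothetical placeholders, not vendor quotes.

# Chassis amortization sketch. All prices are assumed, purely for illustration.
def cost_per_server(chassis_price, blade_price, blades_installed):
    # Effective acquisition cost per blade while the chassis fills up
    return blade_price + chassis_price / blades_installed

RACK_1U_PRICE = 3500.0  # assumed price of a comparable 1U rack server
CHASSIS_PRICE = 6000.0  # mid-range of the $4,000-$8,000 quoted above
BLADE_PRICE = 3400.0    # assumed per-blade price, close to a 1U server

for n in (2, 4, 8, 14):  # 14 blades is a typical full chassis
    print(n, "blades:", round(cost_per_server(CHASSIS_PRICE, BLADE_PRICE, n), 2))
# 2 blades: 6400.0 / 4 blades: 4900.0 / 8 blades: 4150.0 / 14 blades: 3828.57

A half-empty chassis thus makes each blade considerably more expensive than a 1U rack server; only a (nearly) full chassis brings the per-server price into the same range.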
In the rack server space, the big OEMs have to compete with many well-established players such as Supermicro, Tyan and Rackable Systems. At the same time, the big Taiwanese hardware players such as MSI, ASUS and Gigabyte have also entered this market, putting great pressure on the price of a typical rack server.
That kind of competition is only a very small blip on the radar in the blade market, and it is definitely a reason why HP and IBM are putting so much emphasis on their blade offerings. Considering that acquisition costs still easily make up 40-50% of the total TCO, it is clear that the market needs more open standards to open the door to more competition. Supermicro plans to enter the market at the end of the year, and Tyan has made a first attempt with its Typhoon series, which is more of an HPC solution. It will be interesting to see how flexible these solutions will be compared to those of the two biggest players, HP and IBM.
Flexibility
Most blade servers are not as flexible as rack servers. Quickly installing an extra RAID card to attach a storage rack for a high-performance database application is not possible. And what if you need a database server with a huge amount of RAM, and you don't want to use a clustered database? Rack servers with 16 DIMM slots are easy to find, while blades are mostly limited to 4 or 8 slots. Blades cannot offer that flexibility, or at best they offer it at a very high price.
In most cases, blades use 2.5 inch hard disks, which are more expensive and offer lower performance than their 3.5 inch counterparts. That is not really surprising, as blades have been built with a focus on density, trying to fit as much processing power into a given amount of rack space as possible. A typical blade today has at most about 140 GB of raw disk capacity, while quite a few 1U rack servers can have 2 TB (4x500 GB) available.
Finally, of course, there is the lack of standardization, which prevents you from mixing the best solutions of different vendors together in one chassis. Once you buy a chassis from a certain vendor, server blades and option blades must be bought from the same vendor.
Hybrid blades
The ideas behind blades - shared power, networking, KVM and management - are excellent, and it would be superb if they could be combined with the flexibility that current rack servers offer. Rackable Systems seems to be taking the first steps toward enabling customers to use 3.5 inch hard disks and normal ATX motherboards in its "Scale Out" chassis, which makes it a lot more flexible and most likely less expensive too.
The alternative: the heavy rack server with "virtual blades"
One possible solution that could be a serious competitor for blade servers is a heavy-duty rack server running VMware's ESX Server. We'll explore virtualization in more detail in a coming article, but for now remember that ESX Server has very little overhead, in contrast to Microsoft's Virtual Server and VMware Server (the former GSX Server). Our first measurements show a performance decrease of only about 5%, which can easily be ignored.
Two physical CPUs, but 8 VMs with 1 CPU each
Using a powerful server with a lot of redundancy and running many virtual machines on it is, in theory, an excellent solution. Compared to a blade server, the CPU and RAM resources will be utilized much better. Suppose, for example, that you have 8 CPU cores and 10 applications that you want to run in 10 different virtual machines: you can give one demanding application 4 cores, while each of the other 9 applications gets 2 cores. The number of cores you assign to a given VM is only the maximum amount of CPU power it will be able to use, so it is fine for the total virtual core count to exceed the physical one.
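As a minimal sketch of that overcommitment arithmetic (using the hypothetical 8-core, 10-VM scenario above):

# CPU allocation for the hypothetical 8-core, 10-VM example above.
# Assigned cores are caps, not reservations, so the total may exceed
# the number of physical cores.
physical_cores = 8
vm_allocations = [4] + [2] * 9  # one demanding VM, nine lighter ones

total_virtual = sum(vm_allocations)           # 22 virtual cores
overcommit = total_virtual / physical_cores   # 2.75x overcommitment
print(total_virtual, "virtual cores on", physical_cores, "physical cores;",
      "overcommit ratio:", overcommit)

As long as the VMs do not all demand their maximum at the same time, this works out to a much better average utilization than dedicating a physical blade to each application.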
As stated earlier, we'll look into virtualization much deeper and report our findings in a later installment of our Server Guide series of articles.
Conclusion
In this first part, we explored what makes a server different and we focused on the different server chassis formats out there. The ideal server chassis form factor is not yet on the market. Rack servers offer great flexibility, but a whole rack of them contains more cables, power supplies, DVD drives and other components than necessary.
Blade servers have the potential to push rack servers completely off the market, but they lack flexibility, as the big OEMs do not want to standardize on a blade chassis: that would open the market to stiff competition and lower their high profit margins. Until that changes, blade servers offer good value mainly for High Performance Computing (HPC) applications, telecommunication applications, and massive web server hosting companies.
Hybrid blade servers and big rack servers running virtual machines are a step in the right direction, combining very good use of resources with the flexibility to adapt the machine to the needs of different server applications. We'll investigate this further with practical examples and benchmarks in upcoming articles.
Special thanks to Angela Rosario (Supermicro), Michael Kalodrich (Supermicro), Geert Kuijken (HP Belgium), and Erwin vanluchene (HP Belgium).
References:
[1] TCO Study Ranks Rackable #1 For Large Scale Server Deployments
http://www.rackable.com/ra_secure/Rackable_TCO_CStudy.pdf
[2] Making the Business Case for Blade Servers
John Humphreys, Lucinda Borovick, Randy Perry; sponsored by IBM Corporation
http://www-03.ibm.com/servers/eserver/bladecenter/pdf/IBM_nortel_wp.pdf
32 Comments
JarredWalton - Thursday, August 17, 2006 - link
Fixed.

Whohangs - Thursday, August 17, 2006 - link
Great stuff, definitely looking forward to more in-depth articles in this arena!

saiku - Thursday, August 17, 2006 - link
This article kind of reminds me of THG's recent series of articles on how computer graphics cards work. For us techies who don't get to peep into our server rooms much, this is a great intro. Especially for guys like me who work in small companies where all we have are some dusty Windows 2000 servers stuck in a small server "room".
Thanks for this cool info.
JohanAnandtech - Friday, August 18, 2006 - link
Thanks! Been in the same situation as you. Then I got a very small budget for upgrading our server room (about $20,000) at the university I work for, and I found out that there is quite a bit of information about servers, but it is all fragmented and mostly coming from non-independent sources.

splines - Thursday, August 17, 2006 - link
Excellent work pointing out the benefits and drawbacks of blades. They are mighty cool, but not the second coming of the server christ that IBM et al. would have you believe.

Good work all round. It looks to be a great primer for those new to the administration side of the business.
WackyDan - Thursday, August 17, 2006 - link
Having worked with blades quite a bit, I can tell you that they are quite a significant innovation.

I'll disagree with the author of the article that there is no standard. Intel co-designed the IBM BladeCenter and licensed its manufacture to other OEMs. Together, IBM and Intel have/had over 50% share in the blade space. That share, along with Intel's collaboration, is by default considered the standard in the industry.
Blades, done properly, have huge advantages over their rack counterparts, i.e. far fewer cables. In the IBMs, the mid-plane replaces all the individual network and optical cables, as the networking modules (copper and fibre) are internal and you can get several flavors... Plus I only need one cable drop to manage 14 servers...
And if you've never seen 14 blades in 7U of space, fully redundant, you are missing out. As for VMware, I've seen it running on blades with the same advantages as its rack-mount peers... and FYI, blades are still considered rack mount as well... No, you are not going to have any 16/32-ways as of yet... but still, blades really could replace 80%+ of all traditional rack mount servers.
splines - Friday, August 18, 2006 - link
I don't disagree with you on any one point there. Our business is in the process of moving to multiple blade clusters and attached SANs for our excessively large fileservers.

But I do think that virtualisation provides a great stepping-stone for businesses not quite ready to clear out the racks and invest in a fairly expensive replacement. We can afford to make this change, but many cannot. Even though the likes of IBM are pushing for blades left, right and centre, I wouldn't discount the old racks quite yet.
And no, I haven't had the opportunity to see such a 7U blade setup. Sounds like fun :)
yyrkoon - Friday, August 18, 2006 - link
Wouldn't you push a single system that can run into the tens of thousands, possibly hundreds of thousands, for a single blade? I know I would ;)

Mysoggy - Thursday, August 17, 2006 - link
I am pretty amazed that they did not mention the cost of power in the TCO section. The cost of powering a server in a datacenter can be even greater than the TCA over its lifetime.
I love the people that say... oh, I got a great deal on this Dell server... it was $400 off of the list price. Then they eat through the savings in a few months with shoddy PSUs and hardware that consume more power.
JarredWalton - Thursday, August 17, 2006 - link
Page 3:"Facility management: the space it takes in your datacenter and the electricity it consumes"
Don't overhype power, though. There is no way even a $5,000 server is going to cost more than that in power over its expected life. Let's just say that's 5 years for kicks. From this page (http://www.anandtech.com/IT/showdoc.aspx?i=2772), the Dell Irwindale 3.6 GHz with 8GB of RAM maxed out at 374W. Let's say $0.10 per kWh for electricity as a start:
24 * 374 = 8,976 Wh/day
8,976 * 365.25 = 3,278,484 Wh/year
3,278,484 * 5 = 16,392,420 Wh over 5 years
16,392,420 / 1000 = 16,392.42 kWh total
Cost of electricity (at full load, 24/7, for 5 years): $1,639.24
Even if you double that (which is unreasonable in my experience, but maybe there are places that charge $0.20 per kWh), you're still only at $3,278.48. I'd actually guess that a lot of businesses pay less for energy, due to corporate discounts - can't say for sure, though.
Put another way, you need a $5,000 server that uses 1,140 Watts in order to potentially use $5,000 of electricity in 5 years. (Or you need to pay $0.30 per kWh.) There are servers that can use that much power, but they are far more likely to cost $100,000 or more than to cost anywhere near $5,000. And of course, power demands with Woodcrest and other chips are lower than that Irwindale setup by a pretty significant amount. :)
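For anyone who wants to redo that back-of-the-envelope math with their own numbers, here is a small sketch; the wattages and rates are simply the assumed figures from the comment above, not new measurements.

# Reproduces the power cost arithmetic from the comment above.
HOURS_PER_YEAR = 24 * 365.25

def energy_cost(watts, dollars_per_kwh, years=5.0):
    # Cost of running a load 24/7 at full power for the given period
    kwh = watts * HOURS_PER_YEAR * years / 1000.0
    return kwh * dollars_per_kwh

print(round(energy_cost(374, 0.10), 2))   # 1639.24 - the Irwindale example
print(round(energy_cost(1140, 0.10), 2))  # 4996.62 - the ~$5,000 break-even point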
Now if you're talking about a $400 discount to get an old Irwindale over a new Woodcrest or something, then the power costs can easily eat up those savings. That's a bit different, though.