Nicholas Carr has a post entitled Sun and the data center meltdown, which has an insightful excerpt on the kinds of problems that sites facing scalability issues have to deal with. He writes:
a recent paper on electricity use by Google engineer Luiz André Barroso. Barroso's paper, which appeared in September in ACM Queue, is well worth reading. He shows that while Google has been able to achieve great leaps in server performance with each successive generation of technology it's rolled out, it has not been able to achieve similar gains in energy efficiency: "Performance per watt has remained roughly flat over time, even after significant efforts to design for power efficiency. In other words, every gain in performance has been accompanied by a proportional inflation in overall platform power consumption. The result of these trends is that power-related costs are an increasing fraction of the TCO [total cost of ownership]."
He then gets more specific:
A typical low-end x86-based server today can cost about $3,000 and consume an average of 200 watts (peak consumption can reach over 300 watts). Typical power delivery inefficiencies and cooling overheads will easily double that energy budget. If we assume a base energy cost of nine cents per kilowatt hour and a four-year server lifecycle, the energy costs of that system today would already be more than 40 percent of the hardware costs.
And it gets worse. If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin ... For the most aggressive scenario (50 percent annual growth rates), power costs by the end of the decade would dwarf server prices (note that this doesn’t account for the likely increases in energy costs over the next few years). In this extreme situation, in which keeping machines powered up costs significantly more than the machines themselves, one could envision bizarre business models in which the power company will provide you with free hardware if you sign a long-term power contract.
The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet.
If energy consumption is a problem for Google, arguably the most sophisticated builder of data centers in the world today, imagine where that leaves your run-of-the-mill company. As businesses move to more densely packed computing infrastructures, incorporating racks of energy-gobbling blade servers, cooling and electricity become ever greater problems. In fact, many companies' existing data centers simply can't deliver the kind of power and cooling necessary to run modern systems. That's led to a shortage of quality data-center space, which in turn (I hear) is pushing up per-square-foot prices for hosting facilities dramatically. It costs so much to retrofit old space to the required specifications, or to build new space to those specs, that this shortage is not going to go away any time soon.
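To make the arithmetic in that excerpt concrete, here's a quick Python sketch. The constants are the ones Barroso quotes (a $3,000 server, 200 watts average draw doubled by delivery and cooling overhead, nine cents per kilowatt-hour, a four-year lifecycle); the 50-percent growth loop at the end is my own simplification meant to show the compounding in his aggressive scenario, not the actual model from the paper.

```python
# Sanity check of the numbers quoted above. Constants come from the excerpt;
# the growth loop is a simplified illustration, not Barroso's model.

HARDWARE_COST = 3000.0   # dollars, low-end x86 server
AVG_POWER_W = 200.0      # average draw in watts
OVERHEAD = 2.0           # power delivery losses + cooling roughly double the budget
COST_PER_KWH = 0.09      # nine cents per kilowatt-hour
LIFECYCLE_YEARS = 4
HOURS_PER_YEAR = 24 * 365

effective_kw = AVG_POWER_W * OVERHEAD / 1000.0                   # 0.4 kW
lifetime_kwh = effective_kw * HOURS_PER_YEAR * LIFECYCLE_YEARS   # ~14,000 kWh
energy_cost = lifetime_kwh * COST_PER_KWH                        # ~$1,260

print(f"Four-year energy cost: ${energy_cost:,.0f} "
      f"({energy_cost / HARDWARE_COST:.0%} of the hardware cost)")

# Aggressive scenario: if the effective power bill grows ~50% a year while the
# server price stays flat, compounding alone shows how power costs can come to
# dwarf hardware costs within a few years.
annual_bill = effective_kw * HOURS_PER_YEAR * COST_PER_KWH       # ~$315 today
for year in range(1, 6):
    annual_bill *= 1.5
    print(f"Year {year}: annual power bill per server ~ ${annual_bill:,.0f}")
```

The doubled 200 watts works out to roughly 14,000 kWh over four years, or about $1,260 at nine cents per kilowatt-hour, which is where the "more than 40 percent of the hardware costs" figure comes from.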
When you are providing a service that becomes popular enough to attract millions of users, your worries begin to multiply. Beyond efficient code and optimal database schemas, concerns like the power consumption of your servers and the capacity of your data centers become just as important.
Building online services requires more than the ability to sling code and hack databases. Lots of stuff gets written about the more trivial aspects of building an online service (e.g., switching to sexy new platforms like Ruby on Rails), but the real hard work is often unheralded and rarely discussed.