Neeraj Bhatia's Blog

August 7, 2012

Green Capacity Planning Part-4: Power-Performance Benchmarks

Filed under: Green Capacity Planning,Green IT — neerajbhatia @ 07:31

Power-performance benchmarks play a very important role during the design phase of a data center. When building or upgrading a data center, designers analyze the peak capacity of the IT equipment to be installed based on SLA requirements and anticipated future business demand. Peak power usage is also determined, and the sizing of the infrastructure equipment (cooling, power delivery, etc.) is designed accordingly. In order to gain power efficiency across the data center, the energy consumption of both infrastructure and IT equipment should be assessed.

While assessing the power efficiency of infrastructure equipment is easy, it is not as easy for IT equipment, as standard metrics were not available until recently. Because the power consumption of IT equipment is directly related to its utilization, power efficiency can be assessed from its utilization and energy consumption. Before actual deployment, however, this is difficult, and one has to rely on vendor-provided data or standard benchmarks. The problem with vendor-provided energy efficiency figures is that they are often not directly comparable due to differences in workload, configuration, test environment, and so on. This is where benchmarks come in handy: they let IT managers compare the specific models of servers and other equipment being considered for selection, enabling informed server choices and helping to deploy energy-efficient data centers. Though benchmark data is based on a standardized synthetic workload which may not represent your actual usage, it serves well enough as a proxy for a specific workload type and enables server comparisons without actually purchasing the machines.

After server deployment, in the operational phase of a data center, power-performance benchmarks are of little practical use, because it no longer makes sense to measure power efficiency against a standard workload. An important consideration in this phase is to measure the power efficiency and productivity of the installed IT equipment and attempt to improve them over time. This can be done using the standard metrics which we will consider in the next section.

SPEC Power-Performance Benchmark

SPEC is a non-profit organization that establishes, maintains and endorses standardized benchmarks to evaluate performance for the newest generation of computing systems.  Its membership comprises more than 80 leading computer hardware and software vendors, educational institutions, research organizations, and government agencies worldwide.  For more information, visit the SPEC website.

In order to enable IT managers to make better-informed server choices and help in deploying energy-efficient data centers, SPEC started development of power and performance benchmarks. In December of 2007, SPECpower_ssj2008 was released, which was the first industry-standard SPEC benchmark that evaluates the power and performance characteristics of volume server class computers. The initial benchmark addresses the performance of server-side Java. It exercises the CPUs, caches, memory hierarchy and the scalability of shared memory processors (SMPs) as well as the implementations of the JVM (Java Virtual Machine), JIT (Just-In-Time) compiler, garbage collection, threads and some aspects of the operating system.

SPECpower_ssj2008 reports power consumption for servers at different performance levels – from 100% to idle in 10% segments over a period of time.  To compute a power-performance metric across all levels, measured transaction throughputs for each segment are added together, and then divided by the sum of the average power consumed for each segment including active idle. The result is the overall score of the SPECpower_ssj2008 benchmark and this metric is known as overall ssj_ops/watt.
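The overall score described above reduces to a simple ratio, which can be sketched in a few lines. This is a hedged illustration: the throughput and power figures are made up, and a real run measures ten load levels (100% down to 10%) plus active idle, whereas this list is abridged.

```python
# Sketch of the SPECpower_ssj2008 overall score (overall ssj_ops/watt).
# Throughput (ssj_ops) and average power (watts) per target-load segment
# are illustrative only; active idle contributes power but no throughput.
segments = [
    ("100%", 300_000, 240.0),
    ("50%", 150_000, 150.0),
    ("10%", 30_000, 90.0),
    ("active idle", 0, 60.0),
]

total_ops = sum(ops for _, ops, _ in segments)
total_watts = sum(watts for _, _, watts in segments)
overall_score = total_ops / total_watts  # overall ssj_ops/watt
print(round(overall_score, 1))  # 888.9 for these illustrative figures
```

Note that including active idle in the denominator penalizes servers that burn significant power while doing no work, which is exactly the behavior the benchmark is designed to expose.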

Among other aspects of the benchmark results, it is important to look at the power used while generating the maximum performance metric and the power used during an Active Idle period, where the system is idle and doing nothing. These are the logical best and worst cases for work done per unit of power and can serve as upper and lower bounds defining the power characteristics of a system. SPEC power and performance benchmark results are available for different hardware manufacturers, and they indicate that a server typically uses up to 40% of its maximum power while doing nothing. For example, the figure below shows the SPECpower_ssj2008 benchmark results summary for a Dell PowerEdge R610 (Intel Xeon X5670, 2.93 GHz). At 100 percent load the server uses 242 watts, while idle it still uses 61.9 watts, which is around 26% of the power at 100 percent load.

The role of Active Idle in the performance-per-power metric depends on the benchmark and its associated business model. For example, if the system has typical daytime activity followed by idle nighttime periods, Active Idle becomes important. In such scenarios server virtualization plays an important role: we configure multiple virtual servers on a single physical machine with the aim of minimizing idle time. Recently, server vendors have also started to let servers optionally go into sleep mode when not in use.


Figure: SPECpower_ssj2008 Benchmark Results Summary for Dell Inc. PowerEdge R610 (Intel Xeon X5670, 2.93 GHz)

TPC Energy Benchmark

The Transaction Processing Performance Council, most commonly known as TPC, is a non-profit corporation founded to define transaction processing and database benchmarks. Typically the TPC produces benchmarks that measure transaction processing and database performance in terms of how many transactions a given system and database can perform per unit of time, e.g., transactions per second or transactions per minute.

TPC releases three types of benchmarks, each for a different type of workload. TPC-C is an on-line transaction processing benchmark, measured in transactions per minute (tpmC). TPC-E is a newer online transaction processing (OLTP) benchmark which simulates the OLTP workload of a brokerage firm; its metric is given in transactions per second (tpsE). TPC-H is an ad-hoc decision support benchmark, reported as a Composite Query-per-Hour Performance Metric (QphH@Size). For more information, please visit the TPC website.

TPC-Energy is a newer TPC specification which augments the existing TPC benchmarks with energy metrics developed by the TPC. The primary metric defined by TPC-Energy is reported in the form of "watts per performance", where the performance units are particular to each TPC benchmark. For example, in the case of the TPC-E benchmark the metric is Watts/tpsE.

The following table shows the TPC-E energy benchmark results available at the time of writing this post.

In addition to the watts-per-performance metric, TPC-Energy also provides secondary energy metrics corresponding to the energy consumption of each of the subsystems.

Idle Power is also reported; it is the power consumption of the reported energy configuration (REC) in a state ready to accept work. This is important for systems that have idle periods but must still respond to requests (and therefore cannot be turned off). It is reported in watts and calculated as the energy consumption in watt-seconds divided by the idle measurement period in seconds.
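The idle power calculation is simple arithmetic. A minimal sketch with illustrative numbers (the energy figure and measurement window below are assumptions for demonstration, not taken from any published result):

```python
# TPC-Energy idle power: energy drawn by the reported energy configuration
# (REC) while idle but ready for work, divided by the measurement period.
# Both figures below are illustrative assumptions.
idle_energy_watt_seconds = 720_000.0  # energy measured over the idle window
idle_period_seconds = 1_800.0         # a 30-minute idle measurement window

idle_power_watts = idle_energy_watt_seconds / idle_period_seconds
print(idle_power_watts)  # 400.0 watts
```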

The figure below shows the secondary energy metrics for the Fujitsu PRIMERGY RX300 S6 12×2.5 benchmark (the first result in the table above). Apart from watts per tpsE at the subsystem level, it also includes subsystem-level energy consumption at both full load and idle.

May 1, 2012

Green Capacity Planning Part-3: Monitoring & Measurement

Filed under: Green Capacity Planning,Green IT — neerajbhatia @ 17:53

In the last posts (part-1 and part-2) we discussed the background of and driving forces behind Green Capacity Planning. Today we will discuss its monitoring aspects. Monitoring is very important for any capacity planning process, and Green Capacity Planning is no exception. For manageability, the discussion is divided into two parts: monitoring basics and the major monitoring tools available in the market.

Monitoring Basics

IT equipment has rarely been monitored for its energy consumption. System administrators and other IT people are judged by the availability and performance of the IT infrastructure: they are responsible for meeting performance SLAs and availability targets of around 99.99% in complex 24x7 environments. That is why monitoring has focused mainly on these aspects of the infrastructure, and most of the metrics captured by native utilities or third-party tools fall into the availability or performance categories.

On the other hand, facilities management people are responsible for energy, cooling, lighting and other aspects of a data center. It is their responsibility to ensure that sufficient supporting infrastructure is always available. Since power consumption is roughly linearly related to a device's utilization, there needs to be synergy between the IT management and facilities disciplines. However, that is rarely the case. IT capacity planners analyze the impact of business demand on the underlying infrastructure and forecast capacity requirements, but the impact of that forecast on the supporting infrastructure is not in the scope of their work. Facilities teams, for their part, usually don't consider the impact of business demand on the supporting infrastructure. In short, both disciplines work in isolation and rarely feed information to each other, and this leads to situations where you run out of power and a data center migration becomes necessary. Beyond the financial implications, this impacts business services and creates unnecessary overhead which could have been avoided.

To overcome this situation a holistic approach is required, where we take inputs from both facilities management and IT management and build a complete picture of data center infrastructure usage and the impact of business demand. Peter Drucker rightly said, "If you can't measure it, you can't manage it." This is very true for the Capacity Management process. To effectively manage a data center, IT managers should be able to see what is happening on both the IT and facilities sides.

This matters even more for new-generation server hardware, which has improved significantly over the years: the power consumption of a server is now dynamic and depends on the workload it carries out. This is good for power efficiency, but it makes anticipating a data center's energy requirements challenging. Also, with the ever-increasing cost of energy, the operating cost of these components is significant compared to the total operating cost of a data center. According to a Gartner report published in March 2010, energy savings from well-managed data centers can reduce operating expenses by as much as 20%.

There are broadly two ways to measure the power consumption:


The Conventional Way

Conventionally, IT managers base their energy planning on fundamentally flawed power calculations: vendor faceplate power specifications, or a de-rating of those specifications. Both lead to inaccurate energy requirement estimates.

Historically, server power benchmarks were not available, so the only option for initial data center power planning was to rely on the power data provided by system vendors in the form of faceplate values. But the use of faceplate values is flawed in the first place, as a faceplate value indicates the maximum power requirement of a component irrespective of its configuration or utilization, while the actual power consumption of a system is closely correlated with its utilization. Because of this, a huge gap exists between a data center's anticipated power requirement and the power actually required by its equipment. Another common option is fixed de-rating, where an arbitrary percentage or number is subtracted from the nameplate value on the assumption that the faceplate rating is higher than actual use. For example, a server rated at 1,000 watts might be de-rated by a fixed 20 percent, i.e., assumed to consume 800 watts. However, its real power consumption depends on its utilization, and the estimated value is most often still grossly inflated. As you might expect, finding the correct de-rating percentage is nearly impossible without a measurement tool: two servers of the same manufacturer and model can draw different amounts of power because of their utilization.
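To see how far fixed de-rating can drift from reality, here is a hedged sketch. The linear idle-plus-utilization model below is a rough first-order approximation, not a vendor formula, and all figures are illustrative.

```python
# Faceplate vs. fixed de-rating vs. a utilization-based estimate.
faceplate_watts = 1_000.0
derated_watts = faceplate_watts * 0.80  # arbitrary fixed 20% de-rating

def estimated_draw(utilization, idle_watts=250.0, max_watts=500.0):
    """Idle floor plus a utilization-proportional rise (illustrative values,
    a common first-order model rather than any vendor's formula)."""
    return idle_watts + (max_watts - idle_watts) * utilization

actual_watts = estimated_draw(0.40)  # a server averaging 40% utilization
print(derated_watts, actual_watts)   # 800.0 350.0 -- de-rating still over-provisions
```

Even the generous 20% haircut leaves the planned figure more than double the modeled draw, which is the gap the next paragraph illustrates with real benchmark data.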

The figure below depicts the power usage of an IBM x3630 M3 system and its relation to server CPU utilization. The red line shows constant power usage at 675 watts as per the server faceplate value. The SPEC benchmark data reveals a different story: even at 100% CPU utilization the maximum power draw of the server is 259 watts.

Now if we blindly use 675 watts as the basis of our data center's energy requirements, it will result in a huge amount of unused energy capacity. Beyond the financial ramifications, there is the risk of replacing or building a new data center on the assumption that your existing one has run out of gas, when in fact it has plenty of unused power available. Over-provisioning of power not only increases operational expenditure, but also leads to unnecessarily high capital expenditure (Capex).

An Intelligent Way

Given the important fact that server power consumption is dynamic, we need a more sophisticated way to measure the actual power draw of a server based on its configuration and utilization. This is where DCIM tools play an important role. DCIM (Data Center Infrastructure Management) provides performance and utilization data for IT assets and physical infrastructure throughout the data center. The data collected at the infrastructure level helps domain experts (e.g., capacity planners and facilities planners) conduct intelligent analysis. According to the Gartner report "DCIM: Going Beyond IT" published in March 2010, DCIM tools are expected to grow from 1% market penetration in 2010 to 60% in 2014. DCIM doesn't replace systems performance management or facilities management systems; rather, it takes facets of each and applies them to data center infrastructure. It drives performance throughout the data center by monitoring and collecting low-level infrastructure data to enable intelligent analysis by IT capacity planners and facilities planners, thus enabling holistic analysis of the overall infrastructure.

Based on the technology used to collect data, we can categorize DCIM tools as hardware-based or software-based. In the hardware-based approach, power meters or sensors are installed with every device; they measure power usage and send the data to a centralized server. However, hardware-based solutions are intrusive, expensive and time-consuming to install in large, complex data centers. Software-based solutions, on the other hand, monitor devices over the network, typically through the Simple Network Management Protocol (SNMP).

DCIM vendors are emerging fast, and in the last two years the vendor market has become crowded. Existing vendors are also integrating their products to offer a common tool for data center management. By capturing power consumption data at the device level, data center managers can gain a more detailed view of their data centers and thus make informed decisions about equipment placement, cooling efficiency, power consumption, upgrades, and capacity planning. Predictive modeling is also an important component of these tools, providing a cost-effective and accurate aid when designing data centers.

That's it for now. In the next post we will dive further into monitoring and discuss the major market players along with their pros and cons.

March 14, 2012

Green Capacity Planning Part-2: Driving forces

In my last blog post we discussed the background of Green Capacity Planning and what it is about. We briefly touched on the various regulatory authorities which are actively working to promote Green IT practices and to lay down guidelines to measure, report and improve the energy efficiency of a data center. We call them driving forces, and today we will discuss these forces and their work.


Let's start with the US Environmental Protection Agency (EPA), which was established in 1970 to consolidate in one agency a variety of federal research, monitoring, standard-setting and enforcement activities to ensure environmental protection. Among its initiatives, ENERGY STAR is the most popular and successful program, carried out jointly by the EPA and the U.S. Department of Energy (DOE) to promote energy-efficient products and practices, helping us save money and protect the environment. You must have seen an ENERGY STAR label while buying an electronic product; earning ENERGY STAR certification means the product meets energy efficiency guidelines set by the EPA and DOE. With ENERGY STAR and other initiatives like Environmentally Preferable Purchasing (EPP), the EPA is helping businesses buy green IT products. It enables green vendors, businesses and consumers to evaluate information about green products and services and to calculate the costs and benefits of their choices.


The European Environment Agency (EEA) is an agency of the European Union. With currently 32 member countries, its goal is to help in developing, adopting, implementing and evaluating environmental policy. The EEA and the US EPA have released codes of conduct for data centers aimed at reducing energy consumption in a cost-effective manner without hampering data center functions. These codes of conduct give guidelines to constantly measure power usage effectiveness (PUE) and to attain an average PUE of 2.0 (more details about PUE will be discussed in a later blog post). Organizations are similarly advised to report, and work to reduce, their carbon emission levels.
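PUE itself is a simple ratio of total facility power to the power delivered to IT equipment. A minimal sketch with illustrative loads, showing how the 2.0 average mentioned above comes about:

```python
# Power usage effectiveness (PUE): total facility power divided by the
# power delivered to IT equipment. Load figures are illustrative only.
it_equipment_kw = 450.0  # servers, storage, network
overhead_kw = 450.0      # cooling, power delivery losses, lighting
total_facility_kw = it_equipment_kw + overhead_kw

pue = total_facility_kw / it_equipment_kw
print(pue)  # 2.0 -- i.e., one watt of overhead for every watt of IT load
```

A PUE of 2.0 means half the facility's power never reaches the IT equipment; lower values indicate a more efficient facility, with 1.0 as the theoretical floor.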

The Green Grid

The Green Grid is a non-profit, open industry consortium of end-users, policy-makers, technology providers, facility architects, and utility companies collaborating to improve the resource efficiency of data centers. With more than 175 member companies around the world, The Green Grid seeks to unite global industry efforts, create a common set of metrics, and develop technical resources and educational tools to further its goals.

The Green Grid was formed in February 2007 and is headquartered in Oregon. Currently the board has the following members: AMD, Dell, EMC, Emerson Network Power, HP, IBM, Intel, Microsoft, Oracle, Schneider Electric, and Symantec.

The Green Grid has proposed several metrics to report and increase the efficiency of a data center. As these metrics merit a self-contained discussion of their own, we will cover them in a later blog post.


Leadership in Energy and Environmental Design (LEED) is a third-party certification program for the design, construction and operation of high-performance green buildings. It aims to ensure that buildings are environmentally compatible, provide a healthy work environment and are profitable. Developed by the U.S. Green Building Council (USGBC), LEED is intended to provide building owners and operators a concise framework for identifying and implementing practical and measurable green building design, construction, operations and maintenance solutions. LEED is not specific to data centers; it applies to all buildings.

LEED New Construction buildings are awarded points for sustainability for things like energy-efficient lighting, low-flow plumbing fixtures and collection of water to name a few. Recycled construction materials and energy efficient appliances also impact the point rating system.

That's it for now. This lays the foundation for the most interesting part of the process, monitoring and measurement, where we will discuss various techniques to gather power utilization data.



Link to part-1: Green Capacity Planning: Background & Concepts

February 27, 2012

Green Capacity Planning Part-1: Background & Concepts

Filed under: Green Capacity Planning,Green IT — neerajbhatia @ 19:17

Last June I started writing a technical paper on Green Capacity Planning. I felt satisfied with my work and was able to cover the topic in a 30-page document. Then, unfortunately, the hard disk of my laptop failed and I lost all the work. It cost me more than the document itself; it felt like attending the same course twice, and I couldn't restart until recently.

In these 6-8 months, awareness about Green Capacity Planning has improved a lot and everyone is talking about it. Some questions from my professional network prompted me to start writing about it once again. I assume (forgive me if I am wrong!) that for 90% of readers it is a new road to travel, and they are my target audience. For those of you who already know the topic, I still want to assure you that you will get something out of it. The reason for blog posts instead of a paper is that backups are taken automatically, and who knows, in the end I may release a paper with more details. Another advantage is that you don't have to wait for a fully-fledged paper to be released.

So let's step up a gear and discuss what Green Capacity Planning is. With increasing electricity prices and tougher business conditions, businesses are scrutinizing power consumption and other operational expenses closely, and IT people have started worrying about spiraling operational expenditure. In recent years a new market and set of practices have emerged which consider energy, cooling, space and similar aspects during the capacity review process for a data center. Because these practices are related to the much bigger "Green IT" initiative, this discipline is commonly known as Green Capacity Planning or Intelligent Capacity Planning.

Why Green Capacity Planning?

Frankly speaking, eye-popping electricity prices and the increasing operational cost of a data center are the major forces pushing organizations towards adopting the Green Capacity Planning discipline. Today's data center infrastructure goes beyond the traditional IT equipment of servers, network devices, and storage subsystems: it also includes cooling systems, uninterruptible power supplies (UPS), lighting and more. The operational cost of this equipment is a significant part of total data center operational expenditure (Opex).

In recent years, due to tough economic conditions, there has been increasing pressure on IT teams to implement cost-cutting measures. IT teams are already cashing in on technologies like virtualization, server consolidation, and cloud computing. Awareness of green initiatives has given data center managers food for thought: a way to go beyond traditional infrastructure performance management, further cut operational expenditure and thus improve efficiency. As data center managers continue to be challenged by the business to increase efficiency cost-effectively, companies are beginning to realize that being "green" isn't just good from a PR perspective; it can also make good financial sense.

Technological advancement is another reason for the need to go green. CPUs in data center servers have become truly sophisticated in terms of power management: they dynamically switch to low-power modes and turn off cores when there is a low volume of work at hand. Despite these improvements, the supporting infrastructure is often still sized as it was for older servers without dynamic power management. As a result, anticipating the power requirements of a data center has become challenging, resulting in either overloading or over-provisioning the electrical infrastructure.

Apart from cost and technology improvements, there is increasing pressure from regulatory authorities. Various agencies, such as the US EPA (Environmental Protection Agency), DOE (Department of Energy), EEA (European Environment Agency), LEED (Leadership in Energy and Environmental Design), The Green Grid, and IEA (International Energy Agency), have been constantly working to encourage organizations to use sustainable energy. Organizations are likewise advised to report, and work to reduce, their carbon emission levels, and it is not surprising that many have already started measuring and reporting these metrics under their Corporate Social Responsibility (CSR) programs. After Japan's devastating 2011 nuclear disaster, the world is looking for alternative green energy resources such as wind, solar and geothermal power. Governments are encouraging companies to source environmentally friendly electricity by means of tax relief and recognition. After the climate change conference held in Durban in December 2011, the world seems to have agreed on a legally binding deal to limit carbon emissions, which means there will be stricter laws to reduce the carbon emissions of data centers.

Green Capacity Planning (What it is about?)

Typically, IT capacity planning involves collecting relevant workload and resource utilization metrics for IT components and analyzing them against business demand, to see the impact on the IT components, to give the business a view of when a capacity upgrade or downgrade is required, and to find the most cost-effective way to achieve this without affecting the agreed SLAs.

The scope of traditional capacity planning includes:

  • Compute power (CPU)
  • Memory: physical and secondary
  • I/O
  • Network
  • Space: internal and external (e.g., SAN)

On the other hand, a parallel stream of specialized people takes care of the power, lighting and cooling aspects of a data center; they are known as facilities management or building management. However, with IT as their customer, they need to know the IT demand and the impact of any changes in IT infrastructure and demand on the facilities infrastructure. This missing link results in under-provisioning or over-provisioning of facilities infrastructure and poor time to market for IT services because of a slow change management process.

Green Capacity Planning is a coordinated effort of IT and facilities teams to enable informed decisions about data center capacity, and it is a natural extension of IT capacity planning. This synergy brings optimal infrastructure sizing, cost savings, and an effort to save a natural resource: energy. The aim of Green Capacity Planning is to extend the scope of traditional capacity planning to include the power consumption of individual IT equipment and the overall energy usage profile of the site. It also includes carbon emission reporting and analyzing the cascading effects of any IT capacity upgrade on the underlying infrastructure. Because the scope under this newer approach is wider and more intelligent, it is also known as Intelligent Capacity Planning, and professionals skilled in these practices are referred to as Intelligent Capacity Planners.

The benefits of Green Capacity Planning rest on the collection of power consumption metrics. Facilities management is responsible for collecting power consumption data, at the individual component level and for the data center as a whole, using a specialized DCIM (Data Center Infrastructure Management) tool or metered power strips. Together with configuration item data (from the Configuration Management Database, CMDB), resource management, workload and performance data (from a Performance Management Database, PMDB, and/or monitoring tools), and business demand data (from the business), the IT capacity planner can forecast future capacity requirements in terms of both IT infrastructure and energy. Having understood all facets of a data center, it becomes possible to perform predictive analysis: for a new project, for example, an Intelligent Capacity Planner can predict how much IT infrastructure, energy and cooling would be required. Periodic capacity reports covering energy consumption, energy efficiency metrics and carbon emissions will also enable IT and facilities teams to monitor their efficiency and feed the data back to external regulatory authorities.


So in this post we have discussed the concept of Green Capacity Planning, its background and its evolution. That lays the foundation for the various other aspects of the topic. In the next post we will discuss the various authorities working towards energy-efficient data centers.


