Neeraj Bhatia's Blog

August 7, 2012

Green Capacity Planning Part-4: Power-Performance Benchmarks

Filed under: Capacity Management — neerajbhatia @ 07:31

Power-performance benchmarks play a very important role during the design phase of a data center. At the time of a new data center build or upgrade, designers analyze the peak capacity of the IT equipment to be installed based on SLA requirements and anticipated future business demand. Peak power usage is also determined, and based on that the infrastructure equipment (cooling, power delivery, etc.) is sized. To gain power efficiency across the data center, the energy consumption of both infrastructure and IT equipment should be assessed.

While assessing the power efficiency of infrastructure equipment is easy, it has historically been harder for IT equipment because standard metrics were not available. Since the power consumption of IT equipment is directly related to its utilization, power efficiency can be assessed from utilization and energy consumption. Before actual deployment, however, this is difficult, and one has to rely on vendor-provided data or standard benchmarks. The problem with vendor-provided energy-efficiency figures is that they are often not directly comparable due to differences in workload, configuration, test environment, and so on. This is where benchmarks come in handy: they let IT managers compare the specific models of servers and other equipment being considered for selection, enabling informed server choices and helping to deploy energy-efficient data centers. Although benchmark data is based on a standardized synthetic workload which may not represent your actual usage, it serves well enough as a proxy for a specific workload type and enables server comparisons without actually purchasing the hardware.

After server deployment, in the operational phase of a data center, power-performance benchmarks are of little practical use, because it makes no sense to measure power efficiency against a standard workload once the servers are running real ones. The important consideration in this phase is to measure the power efficiency and productivity of the installed IT equipment and to attempt to improve them over time. This can be done using the standard metrics which we will consider in the next section.

SPEC Power-Performance Benchmark

SPEC is a non-profit organization that establishes, maintains and endorses standardized benchmarks to evaluate performance for the newest generation of computing systems. Its membership comprises more than 80 leading computer hardware and software vendors, educational institutions, research organizations, and government agencies worldwide. For more information, visit the SPEC website.

In order to enable IT managers to make better-informed server choices and help in deploying energy-efficient data centers, SPEC started developing combined power and performance benchmarks. In December 2007, SPECpower_ssj2008 was released, the first industry-standard SPEC benchmark to evaluate the power and performance characteristics of volume server class computers. The initial benchmark addresses the performance of server-side Java. It exercises the CPUs, caches, memory hierarchy and the scalability of shared memory processors (SMPs), as well as the implementations of the JVM (Java Virtual Machine), JIT (Just-In-Time) compiler, garbage collection, threads and some aspects of the operating system.

SPECpower_ssj2008 reports power consumption for servers at different performance levels, from 100% down to idle in 10% segments, over a period of time. To compute a power-performance metric across all levels, the measured transaction throughputs for each segment are added together and divided by the sum of the average power consumed in each segment, including active idle. The result is the overall score of the SPECpower_ssj2008 benchmark, known as overall ssj_ops/watt.
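The calculation above can be sketched in a few lines. The throughput and power figures below are invented illustrative values, not taken from any published SPECpower_ssj2008 result; only the arithmetic (sum of throughputs divided by sum of average powers, including active idle) follows the benchmark's definition.

```python
# (target load level, measured throughput in ssj_ops, average power in watts)
# Hypothetical measurements for the ten load segments, 100% down to 10%.
levels = [
    (1.0, 300_000, 240.0),
    (0.9, 271_000, 222.0),
    (0.8, 241_000, 205.0),
    (0.7, 211_000, 189.0),
    (0.6, 180_000, 174.0),
    (0.5, 150_000, 160.0),
    (0.4, 120_000, 146.0),
    (0.3,  90_000, 132.0),
    (0.2,  60_000, 118.0),
    (0.1,  30_000, 104.0),
]
active_idle_watts = 62.0  # active idle contributes power but zero throughput

total_ops = sum(ops for _, ops, _ in levels)
total_watts = sum(watts for _, _, watts in levels) + active_idle_watts

overall_ssj_ops_per_watt = total_ops / total_watts
print(round(overall_ssj_ops_per_watt, 1))
```

Note that including active idle in the denominator penalizes servers that draw a lot of power while doing no work, which is exactly the behavior the next paragraph discusses.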

Among other aspects of the benchmark results, it is important to look at the power used while generating the maximum performance and the power used during the Active-Idle period, when the system is idle and doing nothing. These are the logical best and worst cases for work done per unit of power and can serve as the upper and lower bounds of a system's power characteristics. SPEC power and performance benchmark results are available for different hardware manufacturers, and they indicate that a server typically uses up to 40% of its maximum power when doing nothing. For example, the figure below shows the SPECpower_ssj2008 benchmark results summary for a Dell PowerEdge R610 (Intel Xeon X5670, 2.93 GHz). At 100 percent load the server uses 242 watts, while at idle it still draws 61.9 watts, around 26% of the full-load power.

The role of Active-Idle in the performance-per-power metric depends on the benchmark and its associated business model. For example, if the system has typical daytime activity followed by idle nighttime periods, Active-Idle becomes important. In such scenarios server virtualization plays a significant role: multiple virtual servers are configured on a single physical machine with the aim of minimizing idle time. Recently, server vendors have also started to enable servers to optionally go into a sleep mode when not in use.


Figure: SPECpower_ssj2008 Benchmark Results Summary for Dell Inc. PowerEdge R610 (Intel Xeon X5670, 2.93 GHz)
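As a quick check of the ratio quoted for the R610 above, the idle-to-peak fraction follows directly from the two power readings in the result summary:

```python
full_load_watts = 242.0   # Dell PowerEdge R610 at 100% target load
active_idle_watts = 61.9  # same server during the Active-Idle segment

idle_fraction = active_idle_watts / full_load_watts
print(f"{idle_fraction:.0%}")  # roughly a quarter of peak power for zero work
```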

TPC Energy Benchmark

The Transaction Processing Performance Council, most commonly known as TPC, is a non-profit corporation founded to define transaction processing and database benchmarks. Typically the TPC produces benchmarks that measure transaction processing and database performance in terms of how many transactions a given system and database can perform per unit of time, e.g., transactions per second or transactions per minute.

TPC releases three types of benchmarks, each for a different type of workload. TPC-C is an on-line transaction processing (OLTP) benchmark, measured in transactions per minute (tpmC). TPC-E is a newer OLTP benchmark which simulates the workload of a brokerage firm; its metric is given in transactions per second (tpsE). TPC-H is an ad-hoc decision support benchmark, reported as a Composite Query-per-Hour performance metric (QphH@Size). For more information, please visit the TPC website.

TPC-Energy is a new TPC specification which augments the existing TPC benchmarks with energy metrics developed by the TPC. The primary metric defined by TPC-Energy takes the form "watts per performance", where the performance unit is particular to each TPC benchmark. For example, in the case of the TPC-E benchmark, the metric is Watts/tpsE.

The following table shows the TPC-E energy benchmark results available at the time of writing this post.

In addition to the watts-per-performance primary metric, TPC-Energy also provides secondary energy metrics corresponding to the energy consumption of each subsystem.

Idle Power is also reported; it is defined as the power consumption of the reported energy configuration (REC) while in a state ready to accept work. This is important for systems that have idle periods but must still respond to requests (and thus cannot be turned off). It is reported in watts and is calculated as the energy consumed in watt-seconds divided by the idle measurement period in seconds.
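That definition reduces to integrating power over the idle window and dividing by its length. A minimal sketch, assuming one hypothetical power-meter sample per second:

```python
# Idle-power calculation as defined above: watt-seconds / seconds.
# The meter readings are hypothetical sample values.
readings_watts = [62.1, 61.8, 62.0, 61.9, 62.2, 61.7, 62.0, 61.9, 62.1, 61.8]
period_seconds = len(readings_watts)  # one sample per second

energy_watt_seconds = sum(readings_watts)  # each sample covers 1 second
idle_power_watts = energy_watt_seconds / period_seconds

print(round(idle_power_watts, 2))
```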

The figure below shows the energy secondary metrics for the Fujitsu PRIMERGY RX300 S6 12×2.5 benchmark (the first result in the table above). Apart from watts per tpsE at the subsystem level, it also includes energy consumption per subsystem at both full load and idle.

