Efficiency (Energy and Operational)

Your PUE May Be Good Enough Already

I bet you never thought you would hear anything like this from someone who sits on the board of The Green Grid. The truth of the matter is that if you have been following industry recommendations – adopting The Green Grid’s Data Center Maturity Model (DCMM) and our Energy Logic 2.0 strategies – and have a PUE in the 1.3 to 1.5 range, you may be approaching the practical limit for your data center. But please do keep measuring your PUE.

Wait a minute – what about Google, Facebook, Microsoft and others who are claiming PUEs of 1.1 or even lower? Shouldn’t we all be chasing these levels of ‘efficiency’?

That depends a great deal on your business model. If you are an Internet-driven organization with a billion-plus connected customers, globally distributed data centers with in-region redundancy, and little to no concern over data center outages, then by all means chase the lowest possible PUE. In all likelihood, though, you are not competing with those entities, and your executives and shareholders are unlikely to accept the risk levels associated with facilities operated under an overarching failure-is-an-option philosophy.

With today’s technology – advances in materials science (IGBTs) and innovations in power and cooling systems, circuits, and controls – we can design, build, and safely operate robust Tier IV facilities with a PUE in the 1.2 to 1.3 range. For existing facilities, we can deploy new power (UPS) and thermal (direct/indirect evaporative or pumped refrigerant) bolt-on upgrades, along with DCIM controls, to bring ‘legacy’ facilities into a whole new range of PUE performance without giving up one iota of resiliency and availability.

Trying to drive to a lower PUE from here requires a whole new architecture, one best achieved with a new construction project. However, this architecture will be less robust and will fall outside your current operational practices, placing the entire facility and its operations on the steep slope of the universal learning curve. Statistics indicate you will experience a much higher failure rate, so this move to further reduce PUE comes at a potentially significant cost. Do the arithmetic: will you recoup enough in energy savings to warrant perhaps a 10X increase in outages?
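To make “do the arithmetic” concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it – the 1 MW IT load, the $0.10/kWh tariff, the per-outage cost, and the outage rates – is an illustrative assumption, not data from this article:

```python
HOURS_PER_YEAR = 8760

def annual_facility_kwh(it_load_kw: float, pue: float) -> float:
    """Total facility energy for a constant IT load at a given PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR

IT_LOAD_KW = 1000          # 1 MW of IT load (assumed)
TARIFF = 0.10              # $/kWh (assumed)
COST_PER_OUTAGE = 500_000  # $ per outage event (assumed)

# Energy and dollars saved by moving from PUE 1.3 to PUE 1.1
saved_kwh = annual_facility_kwh(IT_LOAD_KW, 1.3) - annual_facility_kwh(IT_LOAD_KW, 1.1)
savings = saved_kwh * TARIFF

# Added expected outage cost if the leaner architecture raises the outage
# rate 10x, e.g. from 0.1 to 1.0 expected outages per year (assumed)
extra_outage_cost = (1.0 - 0.1) * COST_PER_OUTAGE

print(f"Annual energy savings:      ${savings:,.0f}")
print(f"Added expected outage cost: ${extra_outage_cost:,.0f}")
```

Under these assumptions the added expected outage cost swamps the roughly $175,000 in annual energy savings; with your own tariff and outage figures the balance may differ, which is exactly why the arithmetic is worth doing.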

The next time someone asks you why you aren’t pursuing a PUE of 1.1 tell them Jack said your PUE may be good enough already.

High-power modular solutions associated with limited-autonomy back-up sources

Continuity of the power supply inside a data center has always been (and probably always will be) one of the trickiest aspects of creating an infrastructure. That is why this issue has often been the subject of in-depth studies, proposals, and discussions.

Prompted by what appears to be a new trend, I would like to briefly present a solution that provides limited autonomy compared with what has been used up to now: modular, high-power static UPSs combined with maintenance-free lead-acid batteries as the back-up energy source.

The first aspect to address is the question of the power levels involved.

Without going into a long discussion about trends in the uninterruptible power supplies that data centers want, it is fair to say that the data center world still faces high power densities per rack (20 to 30 kW in high-density areas), which require large infrastructures. As a benchmark, for data centers with rooms on the order of one thousand square meters, a nominal power rating of at least 1 MW for a single static UPS has become a standard value over the years.

However, one increasingly challenging aspect is the constantly evolving software environment, which is dynamic by nature yet sits within an infrastructure that, by definition, counts being static among its main characteristics. The challenge, then, is to find a way to make these two aspects coexist while keeping all the requirements related to security, reliability, and maintainability unchanged.

This is the context in which the choice of a modular UPS must fit: a UPS able to supply high active power (> 1 MW) in its maximum available configuration. The high rated power allows centralized equipment solutions to be adopted, which simplifies monitoring and operation compared with a distributed solution, which turns out to be more complicated.

Moreover, if each single module can supply sufficient power (200 to 400 kW), the modular approach yields UPSs with much higher rated power than is available with traditional monolithic solutions.

Finally, modularity lets you optimize the initial capex and respond in a truly dynamic manner (only when the need actually arises) to the many plant requirements, thereby supporting the business.
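As a rough illustration of how modularity lets capex follow the load, the sketch below sizes a bank of 250 kW modules (a value inside the 200 to 400 kW range mentioned above) with N+1 redundancy; the module rating and redundancy level are assumptions made for the example:

```python
import math

def modules_needed(load_kw: float, module_kw: float, spare: int = 1) -> int:
    """Modules required to carry the load, plus N+spare redundant modules."""
    return math.ceil(load_kw / module_kw) + spare

# Capex follows the load: buy additional modules only as the plant
# requirement actually grows, rather than sizing a monolithic UPS up front.
for load_kw in (400, 900, 1800):
    print(f"{load_kw} kW load -> {modules_needed(load_kw, 250)} modules")
```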

For the same type of accumulator and equal autonomy, once the uninterrupted power is measured in megawatts, the battery installation grows beyond standard dimensions.

A radically limited autonomy – approximately one minute under nominal conditions – carries a significantly lower cost and reduces the complexity of the installation.

On the other hand, however, people who make this type of choice must be aware of its consequences. It is clear that this solution can only be adopted in situations where the sole aim is to cope with problems related to micro-outages, or where a standby generator is sure to be available very quickly (start-up and changeover within 20 to 30 seconds).

When the back-up source provides very short autonomy, its availability and efficiency become even more important.

That is why advanced battery monitoring systems are also used in this type of installation. These are capable of measuring the voltage, temperature, and internal resistance of every single monobloc.

The continuous monitoring of these parameters via appropriate dedicated software provides a way of checking on their state and the behaviour of the accumulators in every operating situation. It also lets you determine, to a certain extent, when battery performance eventually starts to flag due to normal wear. This can be done by comparing the actual measurements taken in the field to the initial values measured at the time of installation and to the manufacturer’s specifications. This information lets you decide when the installed batteries should be replaced, before the decline in their performance starts to create a risk to the users being supplied.

Obviously, monitoring systems like this can also notify you of any operating errors in real time, by generating signals and/or alarms according to the type of situation.
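A minimal sketch of the baseline comparison such a monitoring system performs: flag any monobloc whose internal resistance has drifted well past its value at installation. The battery IDs, readings, and the 130 percent drift limit are all invented for illustration:

```python
# Internal resistance in milliohms, per monobloc (illustrative data)
BASELINE = {"B01": 3.2, "B02": 3.1, "B03": 3.3}  # measured at installation
LATEST = {"B01": 3.4, "B02": 4.6, "B03": 3.5}    # latest field measurement
DRIFT_LIMIT = 1.3  # alarm above 130% of baseline (assumed threshold)

def flag_worn_monoblocs(baseline: dict, latest: dict, limit: float) -> list:
    """Return IDs whose resistance exceeds its baseline by the given factor."""
    return [bid for bid, r in latest.items() if r > baseline[bid] * limit]

print(flag_worn_monoblocs(BASELINE, LATEST, DRIFT_LIMIT))  # ['B02']
```

A real system would of course track voltage and temperature as well, and trend all three against the manufacturer’s specifications rather than a single threshold.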

Will this type of UPS solution become standard in the near future?

What is your experience?

Five Ways Toward a Sustainable Data Center

The importance of efficiency is nothing new to the data center industry, but the rise of sustainability awareness is causing some businesses to view their data centers through a social responsibility lens, rather than aiming solely for efficiency. For example, the Natural Resources Defense Council released data saying U.S. data centers consumed an estimated 91 billion kilowatt-hours of electricity in 2013 – enough electricity to power all the households in New York City twice over – and are on track to reach 140 billion kilowatt-hours by 2020.

So, what can your business do to become part of the solution? Here are five important things to consider to achieve not only efficiency but sustainability in your data center ecosystem:

1. One of the largest opportunities for energy savings is identifying and decommissioning comatose servers. An energy efficiency audit from a trusted IT service partner can help you put a program in place to take care of comatose servers and make improvements overall.

2. Businesses should start taking a more aggressive approach to data center efficiency—adopting, for example, cooling with maximum economization and UPS systems that apply active inverter eco mode and move seamlessly to high-efficiency mode—while also pushing for increased use of alternative energy, such as wind and solar, to power data center operations and achieve carbon neutrality.

3. We are seeing the advent of zero-carbon data centers through a combination of on-site renewable energy generation and near-site, grid-delivered renewable energy resources. The next step in sustainability is the emergence of extremely low-water and no-water data centers.

4. PUE (power usage effectiveness) is becoming less of an ‘efficiency’ concern as the business community recognizes that the lost time, lost revenue, and sheer amount of energy required to bring a data center back online after an outage far surpass the minimal savings associated with risky efficiency plays in the quest for a lower PUE.

5. The impact of sustainability will not be limited to on-premise technology decisions. To be meaningful, your business’s reporting must include the full data center ecosystem, including colocation and cloud providers. As this practice grows, sustainability will rise to the level of availability and security as must-have attributes of a high-performing data center.

As sustainability gathers more attention, the prediction made by Data Center 2025 participants that solar energy would account for 21 percent of data center power by 2025, which seemed extremely aggressive to some experts, may prove accurate. Do you think solar energy will play a larger role in the data centers of the future?

Time for a Server Idle Performance Standard – Introducing 10 Minus

A recent study on real-world server utilization found that upwards of 30 percent of data center servers are ‘comatose,’ meaning they produced no useful work within the last six months. It is a sad state of affairs, made all the worse by the industry’s poor average utilization rates, which, although somewhat improved since the 2007 EPA Report to Congress, still range on the order of 8 to 15 percent. That means an awful lot of energy is being wasted in data centers around the world – energy that could easily be saved through lower server idle power levels.

Today Emerson Network Power proposes a new standard: the 10 Minus server idle energy standard. It is modeled on the very successful 80 Plus power supply specification, originally developed by Ecos Consulting and championed by the Climate Savers Computing Initiative along with a host of leading hardware vendors, OEMs, and data center owners and operators, as well as the EPA, DOE, and EPRI. The 80 Plus specification for embedded server power supplies (and other IT and networking devices) has become so well entrenched within our industry that it is almost impossible to purchase a new piece of IT kit that doesn’t achieve 90 percent efficiency on AC-to-DC power conversion.

With 10 Minus, we now address the server’s idle energy performance, with the initial target established at 10 percent of full rated power when the server is idle. Further, like the 80 Plus specification, we introduce Silver, Gold, and Platinum ratings to recognize devices capable of beating the minimum 10 percent level: Silver for those below 8.5 percent, Gold for those below 7.5 percent, and Platinum for those below 5 percent. As with the 80 Plus program, these levels will be revisited every three years and adjusted downward once 50 percent of the qualified devices achieve Gold or better performance. Of course, we could just turn the servers off, but data shows this idea is not palatable to the vast majority of IT and data center professionals.

Therefore, forcing idle power to drop to 10 percent or lower appears to be the most viable and achievable solution. Let’s take a look at what the new 10 Minus specification would mean for the average data center. We will apply a few conservative assumptions to our model, so your potential savings will likely be greater.

Applying the 10 Minus specification to a model data center with 500 servers each rated at 300 watts under ‘normal’ operation and 100 watts when idle (you should be so lucky) with a better-than-average comatose server rate of only 20 percent would demonstrate:

A. Without 10 Minus
Comatose servers alone consume 100 watts x 100 comatose servers x 8760 hours/year for a total of 87,600 kWh/year

B. With 10 Minus – Instant Savings in Excess of 60,000 kWh/year
A minimum standard of 10 percent would be 30 watts x 100 comatose servers x 8760 hours/year for a total of 26,280 kWh/year

However, the total savings from applying the 10 Minus standard across the entire IT kit would be significantly higher: even if our average server utilization rate were as high as 50 percent, the rest of the time we would see forced savings through reduced idle energy consumption. Using the above model data center, the remaining servers are idle at least 50 percent of the time, with idle energy consumption of:

A. Without 10 Minus
100 watts x 400 servers (backed out comatose servers as accounted for above) x 4380 hours/year = 175,200 kWh/year

B. With 10 Minus – Net Savings in Excess of 120,000 kWh
The numbers improve significantly to 30 watts x 400 servers x 4380 hours/year = 52,560 kWh/year

Adding the production and comatose server energy savings together, this model data center would realize energy savings in excess of 180,000 kWh a year on a modest deployment of 500 servers. Extrapolated across a larger enterprise, cloud, or hyperscale environment, the savings would be monumental.
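The model above can be reproduced in a few lines; this sketch simply restates the article’s own figures (500 servers, 100 W idle today, 30 W idle under 10 Minus, 20 percent comatose, the rest idle half the time):

```python
HOURS_PER_YEAR = 8760
IDLE_W_TODAY = 100           # typical idle draw today
IDLE_W_10MINUS = 30          # 10 percent of the 300 W rated power
COMATOSE, ACTIVE = 100, 400  # 20 percent of 500 servers are comatose

def kwh(watts: float, servers: int, hours: float) -> float:
    """Annual energy in kWh for a fleet drawing a constant wattage."""
    return watts * servers * hours / 1000

comatose_before = kwh(IDLE_W_TODAY, COMATOSE, HOURS_PER_YEAR)    # 87,600
comatose_after = kwh(IDLE_W_10MINUS, COMATOSE, HOURS_PER_YEAR)   # 26,280
idle_before = kwh(IDLE_W_TODAY, ACTIVE, HOURS_PER_YEAR / 2)      # 175,200
idle_after = kwh(IDLE_W_10MINUS, ACTIVE, HOURS_PER_YEAR / 2)     # 52,560

total_savings = (comatose_before - comatose_after) + (idle_before - idle_after)
print(f"Total savings: {total_savings:,.0f} kWh/year")  # 183,960
```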

How do we turn the 10 Minus concept into an industry standard? Climate Savers is now gone, and The Green Grid has taken over the mantle; currently, The Green Grid has an open activity request to address comatose servers. Both SPEC and the EPA, under its Energy Star® initiative, are slowly addressing the power profile of servers, albeit with industry pushback when it comes to idle performance. Perhaps EPRI, along with its EMEA, Japan, and Asia-Pacific counterparts, is in the best position to drive this standard, as the electrical utilities would greatly appreciate an instant and permanent 10 to 40 percent energy reduction from an individual data center achieved with something as simple as deploying 10 Minus certified servers and IT kit.

As many experts within the data center community say: ‘We need to start working on the “1” side of PUE – the IT load.’ 10 Minus is one easy-to-deploy approach to addressing that side of PUE while also reducing the negative impact of comatose servers on data center energy consumption and productivity per watt.

Let’s face it: if our motor vehicles idled the way servers do, there likely wouldn’t be enough oil capacity to quench their thirst. Today, numerous motor vehicles not only idle on less than 1 percent of normal operating energy use but also shut off automatically when the vehicle is stopped. It is time we initiated a standard like 10 Minus to address the deficiencies in server idle performance.

Come visit with me at the upcoming Green Grid Forum 2016 in Seattle, WA, USA, March 8 and 9, where we can discuss this in more detail – or post your comments below.

Four Emerging Archetypes to Impact Data Center Industry

The data center industry is constantly evolving, but we already knew that. What we don’t know, however, is the shape and scope for the data center of the future. Trends such as cloud computing and cybersecurity are redirecting the once predictable course of the industry toward unprecedented opportunities and challenges. In order to prepare data center professionals for this new landscape, we’ve developed four emerging archetypes that will reshape the way the data center of the future looks and operates.

The Data Fortress
According to the Ponemon Institute’s Cost of Data Breach research, the total cost of a privacy-related data security breach is now $3.8 million, and the share of downtime incidents that are security-related rose from two percent in 2010 to 22 percent in 2015. Due to the cost and frequency of these cyberattacks, organizations are taking a security-first approach to data center design. We’re seeing this in the deployment of out-of-network data pods for highly sensitive information – in some cases with separate, dedicated power and thermal management equipment.

The Cloud of Many Drops
Recent studies show enterprise data centers deliver only between five and 15 percent of their maximum computing output over the course of a year, and 30 percent of the country’s 12 million servers are actually “comatose,” meaning they have not delivered computing services in six months or more. To recoup some of that excess capacity, we see a future where organizations explore shared-service models, similar to Uber or Airbnb, that would allow data centers to sell some of their unused computing capacity on the open market.

Fog Computing
Introduced by Cisco, fog computing is a distributed computing architecture developed as a response to the Internet of Things. As computing at the edge of the network becomes more critical, fog computing connects multiple small networks of industrial systems into one large network across an enterprise to improve efficiency and concentrate data processing closer to devices and networks.

The Corporate Social Responsibility Compliant Data Center
As an industry facing efficiency challenges, some organizations are starting to reevaluate how data centers fit into their corporate sustainability plans. This increased attention on sustainability has organizations focused on issues such as carbon footprint, alternative energy use and responsible disposal. In the future, we see these challenges leading to a more aggressive approach to data center efficiency, potentially using alternative energy to power data center operations and achieve carbon neutrality.

Which emerging archetype do you think will have the biggest impact on the data center industry?

Businessman touching performance visual screen
Your PUE May Be Good Enough Already

Businessman touching performance visual screen

I bet you never thought you would hear anything like this from someone who sits on the board of The Green Grid. The truth of the matter is if you have been following industry recommendations, adopting Green Grid’s data center maturity model (DCMM), our Energy Logic 2.0 strategies, and have a PUE in the 1.3 to 1.5 range you may be approaching the practical limit for your data center. But please do keep measuring your PUE.

Wait a minute – what about Google, Facebook, Microsoft and others who are claiming PUEs of 1.1 or even lower? Shouldn’t we all be chasing these levels of ‘efficiency’?

That depends a great deal on your business model. If you are an Internet driven organization with a billion+ connected customers, globally distributed data centers with in-region redundancy, and little to no concern over data center outages then chase the lowest possible PUE. In all likelihood you are not competing with the above entities. Your executives and shareholders don’t want to consider sharing the risk levels associated with those types of data center facilities that are operated with the overarching philosophy of failure-is-an-option.

With today’s technology, advancements in material science (IGBTs), innovations in power / cooling systems, circuits, controls, etc. we can design, build, and safely operate robust Tier IV facilities with a PUE in the 1.2 – 1.3 range. For existing facilities we can deploy new power (UPS), thermal (direct / indirect evaporative or pumped refrigerant), bolt-on upgrades, and DCIM controls to bring ‘legacy’ facilities into a whole new range of PUE performance without giving up one iota of resiliency and availability.

Trying to drive to a lower PUE from here requires a whole new architecture. One best achieved with a new construction project. However this architecture will be less robust and fall outside your current operational practices placing the entire facility and operations on the steep slope of the universal learning curve. Statistics indicate you will experience a much higher failure rate. This move to further reduce PUE comes at a potential significant cost. Do the arithmetic – will you recoup enough in energy savings to warrant perhaps a 10X increase in outages?

The next time someone asks you why you aren’t pursuing a PUE of 1.1 tell them Jack said your PUE may be good enough already.

High-power modular solutions associated with limited-autonomy back-up sources (1)
High-power modular solutions associated with limited-autonomy back-up sources

High-power modular solutions associated with limited-autonomy back-up sources (1)

Continuity of the power supply inside a data center has always been (and probably always will be) one of the trickiest aspects of creating an infrastructure. That is why this issue has often been the subject of in-depth studies, proposals, and discussions.

Prompted by what appears to be a new trend, I would like to briefly present the solution that is capable of providing limited autonomy, compared with what has been used up to now. This can be achieved by adopting modular high-power static UPS’s combined with back-up energy sources, maintenance-free lead batteries.

The first aspect to address is the question of the power levels involved.

Without going into a long discussion about trends in the uninterruptible power supplies that data centers want, it is fair to say that today, the data center world is still faced with high power density values per rack (20 – 30 kW per rack in high-density areas), which require large infrastructures to be designed. So, to take a benchmark, for data centers with rooms on the order of one thousand square meters, the nominal power rating of at least 1 MW for a single static UPS has become a standard value over the years.

However, one aspect that is becoming very challenging is the constantly evolving software environment, which is dynamic, inserted within an infrastructure which, at the same time and by definition, makes the quality of being static one of its main characteristics. The challenge, then, is to find a way to make these two aspects coexist, keeping all the requirements related to security, reliability, and maintainability unchanged.

This is the context in which the choice of a modular UPS must fit; a UPS be able to supply a high active power (> MW) in its maximum available configuration. The high rated power allows centralized equipment solutions to be adopted, which facilitates all aspects related to monitoring and operation, unlike a distributed solution, which turns out to be more complicated.

Moreover, if the single module is capable of supplying sufficient power (200-400 kW), the modular solution provides UPSs with much higher rated power than what is available with traditional monolithic solutions.

Finally, modularity lets you optimize the initial capex and respond in a truly dynamic manner (only when the need actually arises) to the many plant requirements, thereby supporting the business.

For the same type of accumulator and equal autonomy, when the uninterrupted power is measured in MW, you have non-standard battery installations.

A radically limited autonomy, approximately one minute under nominal conditions, has a significantly lower cost and reduces the complexity of the installation.

On the other hand, however, people who make this type of choice must be aware of its consequences. It is clear that this solution can only be adopted in situations where the sole aim is to cope with problems related to micro-outages, or where a standby generator is sure to be available very quickly (start-up and changeover within 20 to 30 seconds).

When the back-up source provides very short autonomy, its availability and efficiency become even more important, if that is possible.

That is why advanced battery monitoring systems are also used in this type of installation. These are capable of measuring the voltage, temperature, and internal resistance of every single monobloc.

The continuous monitoring of these parameters via appropriate dedicated software provides a way of checking on their state and the behaviour of the accumulators in every operating situation. It also lets you determine, to a certain extent, when battery performance eventually starts to flag due to normal wear. This can be done by comparing the actual measurements taken in the field to the initial values measured at the time of installation and to the manufacturer’s specifications. This information lets you decide when the installed batteries should be replaced, before the decline in their performance starts to create a risk to the users being supplied.

Obviously, monitoring systems like this can also notify you of any operating errors in real time, by generating signals and/or alarms according to the type of situation.

Will this type of UPS solution become standard in the near future?

What is your experience?

Five Ways Toward a Sustainable Data Center
Five Ways Toward a Sustainable Data Center

Five Ways Toward a Sustainable Data Center

The importance of efficiency is nothing new to the data center industry, but the rise of sustainability awareness is causing some businesses to view their data centers through a social responsibility lens, rather than aiming solely for efficiency. For example, the Natural Resources Defense Council released data saying U.S. data centers consumed an estimated 91 billion kilowatt-hours of electricity in 2013 – enough electricity to power all the households in New York City twice over – and are on-track to reach 140 billion kilowatt-hours by 2020.

So, what can your business do to become part of the solution? Here are five important things to consider to achieve not only efficiency but sustainability in your data center ecosystem:

1. One of the largest opportunities for energy savings is identifying and decommissioning comatose servers. An energy efficiency audit from a trusted IT service partner can help you put a program in place to take care of comatose servers and make improvements overall.

2. Businesses should start taking a more aggressive approach to data center efficiency—adopting, for example, cooling with maximum economization and UPS systems that apply active inverter eco mode and move seamlessly to high-efficiency mode—while also pushing for increased use of alternative energy, such as wind and solar, to power data center operations and achieve carbon neutrality.

3. We are seeing the advent of zero carbon data centers through a combination of on-site renewable energy generation in concert with near-site, grid-delivered renewbale energy resources. The next step in sustainability is the emergence of the extremely low water and no water use data centers.

4. PUE (power usage effectiveness) is becoming less of an ‘efficiency’ concern as the business community recognizes the lost time, revenue, and the sheer amount of energy required to bring a data center back on-line after an outage far surpasses the minimal savings associated with risky efficiency plays in quest for a lower PUE.

5. The impact of sustainability will not be limited to on-premise technology decisions. To be meaningful, your business’s reporting must include the full data center ecosystem, including colocation and cloud providers. As this practice grows, sustainability will rise to the level of availability and security as must-have attributes of a high-performing data center.

As sustainability gathers more attention, the prediction made by Data Center 2025 participants that solar energy would account for 21 percent of data center power by 2025, which seemed extremely aggressive to some experts, may prove accurate. Do you think solar energy will play a larger role in the data centers of the future?

Productivity_Crop
Time for a Server Idle Performance Standard – Introducing 10 Minus

A recent study on real-world server utilization found that upwards of 30 percent of data center servers are ‘comatose’ meaning they produced no useful work within the last six months. A sad state of affairs made all the more worse by the industry’s poor average utilization rates that although somewhat improved since the 2007 EPA Report to Congress still range on the order of 8 to 15 percent. That means an awful lot of energy is being wasted in data centers around the world that could easily be saved through lower server idle power levels.

Today Emerson Network Power proposes a new standard – the 10 Minus server idle energy standard. A standard modeled off of the very successful 80 Plus power supply specification originally developed by Ecos Consulting and championed by the Climate Savers Computing Initiative and a host of other leading hardware vendors, OEMs, data center owners / operators as well as the EPA/DOE and EPRI. The 80 Plus specification for embedded server power supplies (and other IT/networking devices) has become so well entrenched within our industry that it is almost impossible to purchase a new piece of an IT kit that doesn’t achieve 90 percent efficiency on AC to DC power conversion.

With 10 Minus we now address the server’s idle energy performance with the initial target established at 10 percent of full rated power when the server is idle. Further like the 80 Plus specification we introduce Silver, Gold, and Platinum ratings to recognize those devices capable of exceeding the minimum 10 percent level with Silver for those below 8.5 percent, Gold for those below 7.5 percent, and Platinum for those below 5 percent. As with the 80 Plus program these levels will be revisited every three years and adjusted downward once 50 percent of the qualified devices achieve Gold or better performance. Of course we could just turn the servers OFF but data shows us that this idea is not palatable to the vast majority of IT and data center professionals.

Therefore forcing idle power to drop to 10 percent or lower appears to be the most viable and achievable solution. Let’s take a look at what the new 10 Minus specification would mean for the average data center. We will apply a few conservative assumptions to our model so your potential savings will likely be greater.

Applying the 10 Minus specification to a model data center with 500 servers each rated at 300 watts under ‘normal’ operation and 100 watts when idle (you should be so lucky) with a better-than-average comatose server rate of only 20 percent would demonstrate:

A. Without 10 Minus
Comatose servers alone consume 100 watts x 100 comatose servers x 8760 hours/year for a total of 87,600 kWh/year

B. With 10 Minus – Instant Savings in Excess of 60,000 kWH/year
A minimum standard of 10 percent would be 30 watts x 100 comatose servers x 8760 hours/year for a total of 26,280 kWh/year

However the total savings from applying the 10 Minus standard across the entire IT kit would be significantly higher as our average server utilization rate even if as high as 50 percent means the rest of the time we would see forced savings through reduced idle energy consumption. Using the above model data center our 500 servers are idle at least 50 percent of the time with idle energy consumption of:

A. Without 10 Minus
100 watts x 400 servers (backed out comatose servers as accounted for above) x 4380 hours/year = 175,200 kWh/year

B. With 10 Minus – Net Savings in Excess of 120,000 kWh
The numbers improve significantly to 30 watts x 400 servers x 4380 hours/year = 52,500 kWh/year

By adding the production and comatose server energy savings together, this model data center would realize energy savings in excess of 180,000 kWh a year on a modest deployment of 500 production servers. Extrapolated across a larger enterprise, cloud, or Hyperscale environment and the savings would be monumental.

How do we turn the 10 Minus concept into an industry standard? Climate Savers is now gone, and The Green Grid has taken over the mantle; currently The Green Grid has an open activity request to address comatose servers. Both SPEC and the EPA, under its Energy Star® initiative, are slowly addressing the power profile of servers, albeit with industry pushback when it comes to idle performance. Perhaps EPRI, along with its EMEA, Japan, and Asia-Pac counterparts, is in the best position to drive this standard, as the electrical utilities would greatly appreciate an instant and permanent 10 to 40 percent energy reduction from an individual data center achieved with something as simple as deploying 10 Minus certified servers and IT kits.

As many experts within the data center community say: ‘We need to start working on the one side of PUE, the IT load.’ 10 Minus is one easily deployed approach to that ‘1’ side of PUE, while also reducing the negative impact of comatose servers on data center energy consumption and productivity per watt.

Let’s face it: if our motor vehicles idled the way servers do, there likely wouldn’t be enough oil production capacity to quench their thirst. Today we have numerous motor vehicles that not only idle on less than 1 percent of normal operating energy but also auto-shutoff when the vehicle is stopped. It is time we initiated a standard like 10 Minus to address the deficiencies in server idle performance.

Come visit with me at the upcoming Green Grid Forum 2016 in Seattle, WA, USA, March 8 and 9 where we can discuss this in more detail or post your comments below.

Four Emerging Archetypes to Impact Data Center Industry

The data center industry is constantly evolving, but we already knew that. What we don’t know, however, is the shape and scope for the data center of the future. Trends such as cloud computing and cybersecurity are redirecting the once predictable course of the industry toward unprecedented opportunities and challenges. In order to prepare data center professionals for this new landscape, we’ve developed four emerging archetypes that will reshape the way the data center of the future looks and operates.

The Data Fortress
According to the Ponemon Institute’s Cost of Data Breach research, the total cost of a privacy-related data security breach is now at $3.8 million, and the share of downtime incidents attributed to security breaches rose from two percent in 2010 to 22 percent in 2015. Due to the cost and frequency of these cyberattacks, organizations are taking a security-first approach to data center design. We’re seeing this in the deployment of out-of-network data pods for highly sensitive information—in some cases with separate, dedicated power and thermal management equipment.

The Cloud of Many Drops
Recent studies show enterprise data centers only deliver between five and 15 percent of their maximum computing output over the course of a year, and 30 percent of the country’s 12 million servers are actually “comatose,” meaning they have not delivered computing services in six months or more. In order to make up for some of that excess capacity, we see a future where organizations explore shared service models, similar to Uber or Airbnb. This model would allow data centers to sell some of their unused computing capacity on the open market.

Fog Computing
Introduced by Cisco, fog computing is a distributed computing architecture developed as a response to the Internet of Things. As computing at the edge of the network becomes more critical, fog computing connects multiple small networks of industrial systems into one large network across an enterprise to improve efficiency and concentrate data processing closer to devices and networks.

The Corporate Social Responsibility Compliant Data Center
As the industry faces efficiency challenges, some organizations are starting to reevaluate how data centers fit into their corporate sustainability plans. This increased attention on sustainability has organizations focused on issues such as carbon footprint, alternative energy use and responsible disposal. In the future, we see these challenges leading to a more aggressive approach to data center efficiency, potentially using alternative energy to power data center operations and achieve carbon neutrality.

Which emerging archetype do you think will have the biggest impact on the data center industry?