Power tools: turning sustainable IT into a business catalyst

How new monitoring and management tools can help datacenters root out infrastructure inefficiencies

Sponsored Feature Despite concerns about the environmental impact of their continued proliferation, datacenters account for a relatively low proportion of total global carbon emissions – estimates put the figure at between 0.5 percent and 2 percent – particularly when set against IT's critical contribution to economic growth around the world.

Nonetheless, industry initiatives toward more sustainable enterprise computing, such as Europe's Climate Neutral Datacenter Pact, face considerable challenges given the relentless demands placed on IT infrastructures by existing and emergent markets like HPC, AI/machine learning, edge computing, and IoT. With facilities packed with equipment that runs at varying degrees of efficiency, calibrating operations for optimal carbon efficiency can itself drain IT resources without the right approach.

The soaring cost of electricity – some datacenters report price hikes of 40-50 percent since 2019 – and governmental regulation are further spurs to efficiency. While many businesses have implemented carbon reduction programs voluntarily, the obligation is increasingly being mandated as existing codes of conduct and best practice guidelines are turned into sustainability-related legislation. The UK, for instance, has introduced several new sustainability reporting rules and regulations so far in 2023.

Regulation assumes that businesses subject to its mandates have the wherewithal to make themselves compliant, but this is often not the case, for a variety of reasons.

Lack of power monitoring and control tools

One of the most prevalent is a lack of tools that let liable organizations monitor and control the power usage of their IT operations at the point of operation. Traditionally, energy metering tools track broad usage across an IT estate, and have not been able to single out the individual components and functions that make up an IT infrastructure.

Tech chiefs now want to view and calibrate usage with greater granularity – right down to the software platform and workload level – according to Dr John Frey, Chief Technologist at Hewlett Packard Enterprise (HPE).

The choice of computing environment can complicate this task, especially where an organization has adopted, or plans to adopt, a hybrid IT model, with some workloads running on premises and some in public clouds. Frey explains: "Repeatedly, customers have told us that setting and attaining sustainability targets in a hybrid IT environment is complex – and therefore daunting. We set out to bring the IT sustainability challenge within the scope of routine IT management with a strategic approach to efficiency areas."

Frey adds: "By enabling customers to focus on specific functions, interactions and services within their IT estates, they can leverage more holistic methods to sustainability improvement initiatives. It begins with tools that give them visibility and metrics of what is happening across the entirety of their IT assets and workloads."

HPE's GreenLake portfolio of cloud and as-a-service solutions is designed to simplify and accelerate business IT processes, operations and outcomes, says Frey, so it was "already well placed to further help customers to gain better understanding and control over their energy usage".

Earlier this year HPE previewed its sustainability dashboard for HPE GreenLake, which delivers visibility, monitoring and management of IT energy usage by providing operations-specific insight into energy consumption, costs, and carbon implications.

The dashboard leverages analytics from HPE portfolio solutions across its compute, storage and networking products to inform decisions and actions that improve sustainability performance overall. Supplementary technology from HPE acquisition OpsRamp will provide additional sustainable IT capabilities by delivering a unified approach to manage multivendor infrastructure and application resources in hybrid and multi-cloud environments.

"For sustainability to act as a catalyst for business means taking a more comprehensive approach toward understanding the implications of 'sustainability' itself, which is an evolving concept," Frey says.

The change controls underpinning this understanding – 'levers', as Frey designates them – assess not only power usage at the point of consumption, but reach across the entire IT function to uncover how component set-up may be contributing to wasteful power usage in multiple areas, as Frey explains.

Zombies in our midst

The initial, and arguably most important, lever for tech teams to focus on is rooting out undetected IT equipment inefficiencies. For instance, does their IT estate harbor unused hardware devices – such as so-called 'zombie servers' – that should be decommissioned and removed?

Zombie servers are physical servers that remain switched on, consuming electricity while serving no purpose, often with no external communications or network visibility. Zombies can pose a cyber security risk, and certainly add to the electricity bill.

The likelihood that a zombie server may be lurking somewhere within their IT estates is well known to IT professionals, says Frey, with past studies suggesting that zombie servers and zombie VMs could comprise up to 20 percent of server estates at any given time. But finding and decommissioning zombies is more challenging than it might appear.

Nor is technological zombiism confined to servers that exist in a 'comatose' state – no functional value, yet switched on and drawing energy – reports Frey.

"We often come across zombie storage devices and even zombie routers, that are connected to a network, but no active part of it," he says. "And again, it's disused kit that's taking-up space and using-up power with no practical utility for the business. HPE GreenLake can help root-out those."

Next, teams should delve into the extent to which the equipment they retain could be worked harder to deliver better value, and whether a technology refresh – installing newer, more energy-efficient kit that pays its way by bringing down electricity bills – is a compelling option.

For actively managed servers, Frey's message is again simple – ensure that their utilization is appropriate to resource availability and workload, and avoid overestimating the amount of 'buffer' capacity needed to accommodate workload surges.

"Some studies around compute utilization indicate that server utilization levels are, on average, wastefully low – sometimes less than 30 percent for some of the time, virtualized applications on an annualized basis," Frey says. "This comes as a surprise to many customers, who seem under the default impression that their IT assets are being 'fully utilized – when they are anything but!"

Low utilization is often a symptom of resource over-provisioning, Frey adds. Right-sizing the server estate is an effective remedy, particularly where the over-provisioning stems from a one-time exercise intended to address a specific surge in loading that has long since passed.

The 'buy in more than is needed' principle still permeates much decision-making around strategic IT investments, thinks Frey. "When performance drops occur there's a tendency to throw compute power at the problem to mitigate the issue. Oftentimes those extra resources are not switched off when no longer needed," he says. "Still network-attached but not being utilized, those devices are incrementally pulling power and bandwidth, and adding to the strain on facilities resources like cooling units."
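As a rough illustration of what right-sizing analysis involves, the sketch below flags hosts whose observed peak utilization, even with a surge buffer added, still leaves most of the provisioned capacity idle. It is a generic example rather than HPE's methodology; the buffer size, idle threshold, and data shape are assumptions.

# Illustrative right-sizing check: flags hosts where even peak demand
# plus a surge buffer would use less than half the provisioned capacity.
SURGE_BUFFER = 1.25  # allow 25 percent headroom above the observed peak

def overprovisioned(hosts, max_needed_fraction=0.5):
    """Return (name, needed%) for hosts whose buffered peak CPU demand
    stays below max_needed_fraction of provisioned capacity."""
    flagged = []
    for host in hosts:
        needed = max(host["cpu_pct"]) / 100 * SURGE_BUFFER
        if needed < max_needed_fraction:
            flagged.append((host["name"], round(needed * 100, 1)))
    return flagged

hosts = [
    {"name": "web-07", "cpu_pct": [12, 18, 25, 22]},    # peak 25% -> ~31% with buffer
    {"name": "batch-02", "cpu_pct": [60, 85, 90, 70]},  # peak 90% -> adequately sized
]
print(overprovisioned(hosts))  # [('web-07', 31.2)] - a consolidation candidate

Hosts flagged this way are candidates for consolidation onto fewer, busier machines – allowing the over-provisioned remainder to be switched off entirely rather than left network-attached and idle.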

Conspicuous consumption

In 2020, HPE commissioned an international cloud perceptions study. The results indicated that respondent companies were wasting budget by over-provisioning cloud resources and underutilizing on-prem infrastructure. Stakeholders indicated that they 'waste', on average, around 15 percent of their IT budgets on misaligned infrastructure.

The report also found that, regardless of where the infrastructure is located, an average of 33 percent of respondents' available capacity remains unused, due to over-provisioning in the cloud or underutilization on-prem. A revealing data point is that only 39 percent of IT in the public cloud, and 35 percent of on-prem infrastructure, is used in a typical day.

HPE GreenLake is designed as a pay-per-use, self-service platform which uses metering technologies to enable users to match consumption to workloads, helping to minimize the inefficiencies associated with both over- and under-provisioning of infrastructure.

"Too often it's been accepted as an immovable fact that IT needs as much electricity as it needs – but if that really is the case, how do IT departments ensure that they get the optimal amount of work per-watt?" asks Frey. "That's somewhat challenging. We know at least in compute and to some great extent in storage, performance per-watt increases generation-over-generation, refresh cycle-over-refresh cycle."

But some companies that want that high performance decide to keep those assets in service for a longer period of time. An interesting insight in this context comes from the Uptime Institute's latest 'Global Data Center Survey', which polled 600 datacenter operators. It found that more than 40 percent of the compute devices in respondents' infrastructure were over five years old and accounted for 66 percent of total power consumption, yet performed only 7 percent of the work.
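Taking those survey figures at face value, a quick back-of-envelope calculation shows just how lopsided the work-per-watt balance becomes:

# Back-of-envelope arithmetic from the Uptime Institute figures above:
# older devices draw 66 percent of the power but do 7 percent of the work.
old_power, old_work = 0.66, 0.07
new_power, new_work = 1 - old_power, 1 - old_work

old_ratio = old_work / old_power  # work delivered per unit of power, older kit
new_ratio = new_work / new_power  # the same for the newer kit

print(f"older kit: {old_ratio:.2f} work per unit of power")  # ~0.11
print(f"newer kit: {new_ratio:.2f} work per unit of power")  # ~2.74
print(f"newer kit delivers ~{new_ratio / old_ratio:.0f}x more work per watt")  # ~26x

On those numbers, the newer portion of the estate delivers roughly 26 times more work per unit of power than the aging kit – a strong argument for weighing refresh cycles against extended asset lifetimes.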

Frey adds: "Resource efficiency is also a challenge. The average power conversion chain in a datacenter - colocation or owned - is five conversions before power reaches the CPU of the device in operation. So here again, IT managers also need tools that enable them to look meaningfully at the resources that each piece of equipment takes, and understand the implications there as well".

Software's role in upping energy efficiency

Software efficiency is another optimization area that many organizations have not spent much time focusing on. But HPE has increasingly been working with customers in this area, reports Frey: "At HPE we think of software efficiency in a number of ways," he says. "I urge IT engineers who have not already done so to ask some fundamental questions regarding software efficiency. First, do they need all of the applications running in their environment? Do their applications and workloads perform most efficiently in the environment that they have them running in? If it's a cloud-based application, is it being run in a cloud-based environment? Or are they trying to run a non-cloud-based application, and force it into a cloud environment?"

All these factors have a bearing on how efficiently software runs, both in terms of performance thresholds and energy consumption.

Approaching the issue from the converse direction poses more searching questions. How do organizations make sure that the hardware stack an application sits on works together as efficiently as possible, for example? And why have an application sitting on hardware with GPUs and a high number of cores if the application can only use one core and cannot take advantage of a GPU? Meanwhile, other software that could genuinely exploit that extra compute power is kept from accessing it.

Frey concludes: "Clearly, in such situations that's a mismatch. So how do we get to a 'right match', so to speak – not just between software and CPU type, but between all component functionalities found across the IT estate?"

That's the question which HPE GreenLake's developers have now addressed, says the company, helping customers gain a better understanding of, and control over, their energy usage while simultaneously improving the efficiency of their IT infrastructure.

Sponsored by HPE.
