Feature Story



CLIENT: TUANGRU

Dec. 25, 2017: DatacenterDynamics

Identifying Data Center Workloads and Costs

Shifting workloads and ever-escalating costs can quickly strangle a data center’s efficiency and performance. CIOs and data center managers must keep their ears to the ground and their eyes peeled in the face of changing needs. Without capacity planning, sudden upswings in demand can easily crater current systems. Result? A nosedive in revenue, productivity, and customer service. Lacking the clairvoyance to predict cyclical industry and client needs, data center managers must ensure systems, service capacities and resources remain elastic enough to meet demands. This starts with properly managing and optimizing infrastructure, applications, and business services.

From reactive to proactive

Too many data centers operate in reactive mode, ignoring “canary in the coal mine” warnings only to be overwhelmed by sudden upswings in service demand. Forecasts and reports should be updated weekly, and daily if necessary. This gives managers the real-time visibility to act proactively so that capacity always stays ahead of demand.

Efforts to predict future needs based on one or two metrics invariably fail. Far more revealing are current and historical server configurations, the depth of resources consumed (memory, CPU, and storage), and user-generated business transactions.

Evaluating this data can provide the predictive signposts managers need to add or remove CPUs to enhance performance. Managers should be warned that servers with little or no historical data can skew results. A sudden variance from baseline, such as growth rates and resource consumption diverging from each other, can distort forecasts, which should track current as well as historical business transactions.
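
As a rough illustration, the Python sketch below fits a simple linear baseline to historical CPU utilization and flags days that fork away from it. All metric names and numbers are hypothetical, and production capacity tools use far richer models; this only shows the principle.

    from statistics import mean

    def linear_trend(samples):
        # Least-squares slope and intercept for (day, value) samples.
        xs = [day for day, _ in samples]
        ys = [value for _, value in samples]
        mx, my = mean(xs), mean(ys)
        slope = (sum((x - mx) * (y - my) for x, y in samples)
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    def flag_variances(samples, tolerance=15.0):
        # Return samples that drift more than `tolerance` points
        # off the fitted baseline trend.
        slope, intercept = linear_trend(samples)
        return [(day, value) for day, value in samples
                if abs(value - (slope * day + intercept)) > tolerance]

    # Hypothetical daily CPU utilization (%) for one server.
    history = [(1, 42.0), (2, 44.5), (3, 43.1), (4, 46.0), (5, 71.0), (6, 47.2)]
    print(flag_variances(history))  # day 5 forks away from the baseline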

The goal here is to determine how a data center’s business eats up its resources and how fluctuating markets might drive changes that affect them. This is where today’s powerful analytical tools come into play, giving managers a periscopic view of cyclical trends and baseline shifts, and letting them delete anomalies, model hardware upgrades (or downgrades), correlate costs, and group reports. Tools at this level can smooth the waters for seasonal demand spikes and other unpredictables. They can also provide much-needed control over capital expensing, in that purchases can be based on real business demands.
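
For seasonal spikes, a trailing moving average is the simplest possible smoothing technique. The sketch below assumes a hypothetical daily demand series and a seven-day window; real analytical tools apply far more sophisticated seasonal decomposition.

    def smooth(series, window=7):
        # Trailing moving average: damps short spikes so the
        # underlying demand trend stays visible.
        out = []
        for i in range(len(series)):
            chunk = series[max(0, i - window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    # Hypothetical daily transaction counts with a weekend spike.
    demand = [100, 102, 98, 101, 180, 175, 99, 103, 100, 104]
    print([round(v) for v in smooth(demand)])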

Bye-bye traditional capacity planning

Committing a data center’s resources can be risky without the right planning tools. Today’s more modular, distributed infrastructures leave traditional capacity planning efforts in the slow lane. To ensure resources keep pace with demand, data centers must automate forecasting and institute weekly or even daily reporting. Data points and metrics must be monitored and analyzed to predict capacity and system availability at any given time.
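
Here is a minimal sketch of the kind of projection an automated weekly report might run, assuming utilization history is already being collected. The linear extrapolation and the 80 percent threshold are illustrative assumptions, not the method of any particular product.

    def days_until_threshold(history, threshold_pct=80.0):
        # Estimate days until utilization crosses threshold_pct,
        # using average day-over-day growth across the window.
        (first_day, first), (last_day, last) = history[0], history[-1]
        daily_growth = (last - first) / (last_day - first_day)
        if daily_growth <= 0:
            return None  # flat or shrinking: no projected exhaustion
        if last >= threshold_pct:
            return 0.0
        return (threshold_pct - last) / daily_growth

    # Hypothetical storage utilization (%) sampled 30 days apart.
    storage = [(1, 52.0), (30, 64.5)]
    print(f"~{days_until_threshold(storage):.0f} days to 80% storage")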

Managers must be able to run various what-if scenarios to get a heads-up on the exact requirements their center needs to reduce cost and risk. It’s critically important that managers be able to make sense of the silos of data migrating through their hardware. Tools that help partition and present this information are needed, so that capacity planners can surface it on dashboards with metrics the organization can use strategically.
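
A what-if scenario can be as simple as projecting demand under an assumed growth rate and translating it into hardware. Every input in the sketch below (growth rate, per-server throughput, headroom) is a hypothetical figure a planner would supply.

    import math

    def servers_needed(current_tx_per_day, monthly_growth, months,
                       tx_per_server_per_day, headroom=0.25):
        # What-if: servers required if transaction volume grows at
        # monthly_growth for `months`, keeping `headroom` spare capacity.
        projected = current_tx_per_day * (1 + monthly_growth) ** months
        return math.ceil(projected / (tx_per_server_per_day * (1 - headroom)))

    # What if transaction volume grows 8% per month for a year?
    print(servers_needed(2_000_000, 0.08, 12, 150_000))  # -> 45 servers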

Leaner, more efficient assets

It’s no surprise that many data centers are morphing into lean, efficient assets, especially as they embrace cloud computing, self-provisioning entities, colocation, and emerging technologies. To fully exploit these technologies, CIOs will have to increasingly rely on powerful DCIM tools. These will be needed to monitor and optimize the new data center work environment built around less human intervention.

Understandably, business functions have differing measurable service units. Data centers will generally enable a specific unit of capacity based on the needs of a user. Armed with the right DCIM tools, managers can continually improve their data center’s cost efficiency. The goal is not to reduce spending but to get more performance for what is being spent: expenditures may fall for a given number of service units, but the real aim is to improve service per workload.
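
One way to make “more performance for what is being spent” concrete is a cost-per-service-unit metric. The quarterly figures below are invented purely to show the arithmetic: spend rises, yet efficiency improves because throughput grows faster.

    def cost_per_service_unit(monthly_spend, service_units):
        # Cost efficiency: dollars per delivered service unit
        # (e.g. per transaction, per VM-hour).
        return monthly_spend / service_units

    # Hypothetical quarter-over-quarter comparison.
    q1 = cost_per_service_unit(180_000, 3_600_000)  # $0.050 per unit
    q2 = cost_per_service_unit(195_000, 4_500_000)  # ~$0.043 per unit
    print(f"Q1 ${q1:.3f}/unit -> Q2 ${q2:.3f}/unit")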
