Data Center Containment Best Practices That Won't Bust Your Budget


Managing Airflow at the Room Level

Data center containment best practices that won’t bust your budget — Part 3

By Rob Huttemann and Lars Strong, P.E.

Room-level airflow management is fraught with misconceptions and half-truths, making it the least understood aspect of airflow management. Ironically, it’s the most important. While it’s fairly well understood that the first three levels — or R’s — of airflow management refer to implementing solutions such as brush grommets, blanking panels, and containment for the raised floor, rack, and row levels, respectively, the room level isn’t quite as simple. This is primarily because the necessary changes are invisible (except for the occasional displays on cooling units).

For clarity, room-level airflow management is better defined as cooling optimization: the process of making adjustments to cooling system controls. Done well, this process improves energy efficiency (reducing operating costs), increases cooling capacity, improves IT equipment reliability, and defers capital expenditure. It’s important to note that without cooling optimization (i.e., room-level airflow management), any other airflow management solution that has been or will be implemented — including the products listed above — is an expense. While these solutions improve IT intake air temperatures, the financial and capacity benefits are left on the table. The only way to realize energy savings from airflow management improvements made at the raised floor, rack, and row levels is through cooling optimization.

And while cooling optimization is typically a manual, iterative process, utilizing solutions such as IR thermometers or environmental monitoring will help ensure IT intake temperatures do not exceed their recommended or allowable thresholds. Some monitoring solutions can even advise on specific optimization steps, but more on that later.

Matching Cooling Capacity With IT Load

Airflow management alone doesn’t save money on cooling energy costs; instead, it improves IT equipment intake air temperatures and creates the conditions where changes to the cooling infrastructure are possible. The reason is that, with correctly implemented airflow management solutions at the raised floor, rack, and row levels, there should now be an excess of conditioned supply air in the cold aisles, and all IT equipment intake air temperatures will be unnecessarily low, because exhaust air is no longer mixing with conditioned air and vice versa. The next step is to match the flow rate of conditioned air as closely as possible with the demand flow rate required by the IT equipment. This is done by lowering fan speeds, raising cooling unit temperature set points, or turning cooling units off altogether. This is typically iterative: make adjustments to controls, allow the system to equalize, and then make additional adjustments if needed. Since data centers are dynamic environments, this will be an ongoing process — not a one-time event. Each time additional airflow management improvements are implemented or significant IT equipment changes occur, there’s an opportunity to optimize the cooling infrastructure.
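
To make the iterative nature of this concrete, here is a minimal sketch in Python of the adjust-and-equalize cycle. The read_max_intake_temp and set_fan_speed functions are hypothetical stand-ins for whatever monitoring and VFD control interfaces a site actually has, and the thresholds, step size, and equalization time are illustrative, not prescriptive.

```python
import random
import time

ALLOWABLE_MAX_C = 32.0  # e.g., ASHRAE class A1 allowable upper intake limit
SAFETY_MARGIN_C = 2.0   # stop adjusting well before the hard limit
STEP_PCT = 5            # lower fan speed in small increments
EQUALIZE_SECS = 1800    # let the room equalize between adjustments

def read_max_intake_temp() -> float:
    # Hypothetical stand-in: a real version would poll the monitoring
    # system for the hottest IT intake temperature across all cabinets.
    return 18.0 + random.uniform(0.0, 6.0)

def set_fan_speed(pct: int) -> None:
    # Hypothetical stand-in: a real version would command the cooling units' VFDs.
    print(f"fan speed -> {pct}%")

fan_speed = 100
while fan_speed > STEP_PCT:
    if read_max_intake_temp() >= ALLOWABLE_MAX_C - SAFETY_MARGIN_C:
        break  # any further reduction risks exceeding the allowable limit
    fan_speed -= STEP_PCT
    set_fan_speed(fan_speed)
    time.sleep(EQUALIZE_SECS)  # equalize, then re-check before the next step
```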

Room-Level Airflow Management (Cooling Optimization) Best Practices

As mentioned above, the typical steps that need to be taken to properly match the cooling capacity of the data center with the IT load (i.e., optimize the cooling infrastructure) are:

  • Reduce fan speeds for units with variable frequency drives (VFDs) as much as possible without exceeding the maximum allowable IT equipment intake air temperature.
  • Raise cooling unit temperature set points as high as possible without exceeding the maximum allowable IT equipment intake air temperature.
  • Expand the allowable relative humidity (RH) band to prevent cooling units from "fighting" with each other (wasting energy because one unit is trying to dehumidify while another is trying to humidify).
  • Turn off excess cooling units if they don't have VFDs. If cooling units are equipped with VFDs, energy savings are greater with 10 cooling units running at 50% fan speed than with five cooling units running at 100% fan speed (the sketch after this list shows why).
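
The last bullet follows from the fan affinity laws: delivered airflow scales roughly linearly with fan speed, while fan power scales roughly with the cube of fan speed. A quick sketch of the arithmetic, where the per-unit fan power figure is a hypothetical example:

```python
# Fan affinity laws: airflow ~ speed, fan power ~ speed cubed.

UNIT_FAN_KW = 7.5  # hypothetical fan power of one cooling unit at 100% speed

def total_fan_power(units: int, speed_fraction: float) -> float:
    return units * UNIT_FAN_KW * speed_fraction ** 3

def total_airflow(units: int, speed_fraction: float) -> float:
    return units * speed_fraction  # in "unit-flows": 1.0 = one unit at 100%

print(total_airflow(5, 1.0), total_fan_power(5, 1.0))    # 5.0 unit-flows, 37.5 kW
print(total_airflow(10, 0.5), total_fan_power(10, 0.5))  # 5.0 unit-flows, 9.375 kW
# Same delivered airflow, but roughly 75% less fan energy
# with 10 units at 50% speed than with five units at 100%.
```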

After any significant airflow management improvement or IT load installation/removal, there’s an opportunity to evaluate these room-level controls to ensure efficient operation and sufficient redundant capacity.

Utilizing Monitoring Solutions to Inform Optimization Decisions

A wise man by the name of Ken Brill once said that when it comes to data center cooling, it’s as simple as “power in, heat out always.” In other words, every kilowatt of power consumed in a computer room becomes a kilowatt of heat that needs to be removed from the computer room and, ultimately, the building. This includes all power conversion and distribution losses as well as every kilowatt of electricity consumed by the IT equipment.

While data center cooling is a science in its own right and can be quite complex, it boils down to this: what goes in as power must come out as heat. Therefore, it’s important to monitor the power being demanded by IT equipment (the IT load) to properly match it with the cooling being supplied by cooling units (cooling capacity). Utilizing solutions that can monitor both the power and cooling infrastructure will help in creating this balance.
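
As a back-of-the-envelope illustration of that balance, the sketch below estimates the airflow a given IT load demands using the sensible heat equation. The 100 kW load and 11°C (20°F) temperature rise are illustrative, and sea-level air properties are assumed.

```python
# "Power in, heat out": every kW consumed in the room must leave as heat.
# Sensible heat equation: q = airflow * (air density x specific heat) * delta-T.

RHO_CP_KJ_M3K = 1.21  # air density x specific heat, ~kJ/(m^3*K) at sea level

def required_airflow_m3s(it_load_kw: float, delta_t_c: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry it_load_kw of heat at a
    given IT equipment temperature rise delta_t_c (exhaust minus intake)."""
    return it_load_kw / (RHO_CP_KJ_M3K * delta_t_c)

# Example: a 100 kW IT load with a typical ~11 C (20 F) rise across the servers
flow = required_airflow_m3s(100.0, 11.0)
print(f"{flow:.1f} m^3/s (~{flow * 2118.9:.0f} CFM)")  # ~7.5 m^3/s, ~15,900 CFM
```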

Second, monitoring the thermal performance of a computer room is essential to the cooling optimization process. When making adjustments to the cooling infrastructure, closely monitor IT equipment intake temperatures to make sure they do not exceed the recommended or allowable limits outlined by ASHRAE and/or the manufacturer. While this can be done with an IR thermometer or IR camera, it may be time-consuming, as it can only be done on a per-cabinet or per-aisle basis. One important caveat: these infrared tools measure surface temperature, not air temperature, so they can indicate temperature issues but do not capture the temperature of the airflow itself. Highly reflective surfaces can also produce incorrect readings.

For these reasons, the easiest way to monitor intake temperatures across a full room is to utilize a monitoring solution with sensors placed at the tops and bottoms of all cabinets. This provides a sitewide view of a data center’s thermal performance. Some monitoring solutions have taken this a step further with 3-D visualizations that display a digital twin of the data center and its real-time thermal performance.
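
A minimal sketch of what such monitoring reduces to: comparing per-cabinet intake readings against the ASHRAE TC 9.9 envelopes (18°C to 27°C recommended; 15°C to 32°C allowable for class A1). The cabinet readings here are invented for illustration.

```python
# Flag cabinets whose intake temperatures exceed the ASHRAE envelopes.

RECOMMENDED_MAX_C = 27.0  # ASHRAE recommended upper intake limit
ALLOWABLE_MAX_C = 32.0    # ASHRAE class A1 allowable upper intake limit

# {cabinet: (bottom-of-cabinet intake C, top-of-cabinet intake C)} -- illustrative
readings = {"A01": (19.5, 24.8), "A02": (20.1, 28.3), "B07": (21.0, 33.1)}

for cabinet, (bottom_c, top_c) in sorted(readings.items()):
    hottest = max(bottom_c, top_c)  # top-of-cabinet sensors usually read warmer
    if hottest > ALLOWABLE_MAX_C:
        print(f"{cabinet}: {hottest:.1f} C exceeds the allowable limit")
    elif hottest > RECOMMENDED_MAX_C:
        print(f"{cabinet}: {hottest:.1f} C is above the recommended envelope")
```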

Optimization With AI and Machine Learning Algorithms

AI and machine learning are buzzwords that have made the rounds throughout the industry as a panacea for many data center and IT-related issues. While they may come up short on many of their perceived or marketed promises, airflow management and cooling optimization are areas where this technology really has a chance to deliver.

AI can visualize airflow management improvements at the raised floor, rack, and row levels; analyze data collected from sensors; and advise on cooling optimization decisions at the room level. Some available solutions have pioneered this use case for AI and machine learning, coupling monitoring with cooling optimization in the form of a virtual cooling advisor. These solutions are worth a look to take some of the guesswork out of the cooling optimization process.

Conclusion

Room-level airflow management is not actually airflow management in the literal sense — it’s more about cooling optimization. Nonetheless, it is a necessary step and the only way to realize energy savings from airflow management improvements made at the raised floor, rack, and row levels. Don’t forget that every solution that has been implemented up until the room level is an expense. It’s at the room level where cooling infrastructure changes transform into cooling energy savings, improved cooling capacity, improved IT equipment reliability, and deferred capital expenditure. Efficiency cannot be purchased; it must be managed.

Also remember that cooling optimization is an iterative and ongoing process, boiling down to matching cooling capacity with the IT load of the computer room. Although this will be a manual process in most cases, utilizing monitoring solutions that look at the power, cooling, and thermal performance of the data center will help guide optimization decisions and, in some cases, advise on specific steps that can be taken.

Rob Huttemann is the senior vice president of operations for Critical Environments Group (CEG). He has more than 30 years of industry experience and familiarity with data center and supporting infrastructure management, with a specific focus on power, space and storage, cooling, and overall best practices.

Lars Strong, P.E., a thought leader and recognized expert on data center optimization, leads Upsite Technologies’ EnergyLok Cooling Science Services, which originated in 2001 to optimize data center operations. He is a certified U.S. Department of Energy Data Center Energy Practitioner (DCEP) HVAC Specialist. Lars has delivered and continues to deliver value-added services to domestic and international Fortune 100 companies through the identification and remediation of issues associated with the fluid mechanics and thermodynamics of their data center cooling infrastructure. Lars teaches the fundamentals of cooling science at numerous U.S. and international private and public speaking events annually.

 

Originally published in Mission Critical, October 2020.
