In recent years, there has been significant growth in the number, size, and power density of data centers. A substantial portion of data center power consumption is attributed to the cooling infrastructure, consisting of computer room air conditioning (CRAC) units, chillers, and cooling towers. For energy-efficient operation and management of the cooling resources, data centers are increasingly being instrumented with temperature sensors. While this allows cooling actuators, such as the CRAC set point temperature, to be dynamically controlled and data centers to be operated at higher temperatures to save energy, it also increases the chances of thermal anomalies. Furthermore, since large data centers can contain thousands to tens of thousands of such sensors, it is virtually impossible to manually inspect and analyze the large volumes of dynamic data they generate, necessitating autonomous mechanisms for thermal anomaly detection. Moreover, threshold-based detection alone is insufficient; complementary anomaly detection mechanisms are also needed. In this paper, we describe commonly occurring thermal anomalies in a data center and, using examples from a production data center, present techniques to detect them autonomously. In particular, we show the usefulness of a principal component analysis (PCA) based methodology applied to a large temperature sensor network. Specifically, we examine thermal anomalies such as those related to misconfigured equipment, blocked vent tiles, faulty sensors, and CRAC-related faults. Several of these anomalies normally go undetected because no temperature thresholds are violated. We present examples of these thermal anomalies and their detection in a real data center.
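As a rough illustration of the kind of PCA-based detection described above (a minimal sketch, not the authors' exact pipeline), the following Python snippet fits principal components to a window of normal-operation rack temperatures and flags samples whose reconstruction error is unusually large; the sensor counts, data, and control limit are hypothetical placeholders.

```python
# Illustrative PCA-based thermal anomaly detection (sketch only, synthetic data).
# Rows = time samples, columns = temperature sensors.
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_baseline(readings, n_components=3):
    """Fit PCA on a window of normal-operation temperature data."""
    mean = readings.mean(axis=0)
    pca = PCA(n_components=n_components).fit(readings - mean)
    return pca, mean

def residual_scores(readings, pca, mean):
    """Squared reconstruction error (Q statistic) per time sample."""
    centered = readings - mean
    reconstructed = pca.inverse_transform(pca.transform(centered))
    return np.sum((centered - reconstructed) ** 2, axis=1)

# Hypothetical baseline: 1000 samples from 200 rack sensors during normal operation.
baseline = np.random.normal(24.0, 0.5, size=(1000, 200))
pca, mean = fit_pca_baseline(baseline)
scores = residual_scores(baseline, pca, mean)
limit = scores.mean() + 3 * scores.std()   # simple statistical control limit

# New samples are anomalous if their residual exceeds the limit, even when
# no individual sensor crosses an absolute temperature threshold.
new_samples = np.random.normal(24.0, 0.5, size=(10, 200))
anomalous = residual_scores(new_samples, pca, mean) > limit
```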
Heat transfer phenomena in complex physical systems, such as multiphase environments and multidimensional geometries, can be difficult to capture in correlations, analytical functions, or numerical models using conventional techniques. Such systems are designed based on approximations, rules of thumb, or semi-empirical correlations between parameters derived from averaged values, and are operated likewise using another set of rules derived from bulk thermodynamic performance parameters. With the development of nano-scale sensors and advanced data aggregation techniques, there is a need for analytical techniques that can discover the complex interrelationships between the thermodynamic parameters of a process, its geometric constraints, and the governing outcomes of the process. Such techniques can leverage the deployment of thousands of sensors to extract the key relationships that drive the transport phenomena, supporting the development of advanced process control tools and methodologies. The design and operation of heat and mass transfer equipment can benefit from knowledge discovered through analytics applied to thermo-physical data obtained from real-time processes. We present illustrative use cases applying data analytics and knowledge discovery techniques to a richly instrumented data center in which computer room air conditioning (CRAC) units provide cooling for IT equipment arranged in rows of racks. Sensors located at each rack provide temperature measurements that are analyzed in real time and archived. Rack temperatures are analyzed together with operating parameters of the CRAC units, such as supply air temperature (SAT) and variable frequency drive (VFD) settings, to derive design insights and detect anomalies.
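One simple form of the analytics described above is correlating rack inlet temperatures with CRAC operating parameters to see which unit most strongly influences which racks. The sketch below is purely illustrative and uses synthetic data; the column names, coupling coefficients, and the use of SAT alone as the explanatory variable are assumptions, not the paper's actual method.

```python
# Illustrative "thermal influence" analysis (sketch with synthetic data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500  # time-aligned samples

# Hypothetical CRAC supply air temperatures (deg C) for two units.
crac = pd.DataFrame({
    "crac1_SAT": 16 + rng.normal(0, 0.8, n),
    "crac2_SAT": 17 + rng.normal(0, 0.8, n),
})

# Hypothetical rack inlet temperatures, each coupled mainly to one CRAC unit.
racks = pd.DataFrame({
    "rackA_inlet": 6 + 1.0 * crac["crac1_SAT"] + rng.normal(0, 0.3, n),
    "rackB_inlet": 7 + 0.9 * crac["crac2_SAT"] + rng.normal(0, 0.3, n),
})

# Correlation of each rack sensor with each CRAC SAT: a simple influence map.
influence = pd.concat([racks, crac], axis=1).corr().loc[racks.columns, crac.columns]
print(influence)                 # rows: rack sensors, columns: CRAC units
print(influence.idxmax(axis=1))  # dominant CRAC unit for each rack sensor
```

A sudden change in such a correlation map over time could indicate an anomaly, for example a blocked vent tile decoupling a rack from the CRAC that normally cools it.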
Continued efforts to reduce the environmental impact of products and services are leading to the increased prevalence of life-cycle assessment (LCA) during the design phase. A key challenge in traditional process-based LCA is the validation of vast life-cycle inventory data, which easily numbers in the tens of thousands of data points. In this paper, we utilize an ‘object-based’ approach from software engineering to manage the large amount of data and combine this approach with simple thermodynamic principles, thus significantly reducing the chance of error in LCA inputs. The approach is demonstrated for the test case of a typical industrial air-conditioning unit. We find that implementation of the object-based approach identifies numerous potential errors in the life-cycle inventory data. Thus, the proposed approach enables the designer to perform a more robust environmental analysis of a given system, facilitating better identification and prioritization of opportunities for reducing the environmental footprint across the system life-cycle.
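To make the "object-based" idea concrete, the following Python sketch shows one way a component object could carry its own material inventory and run a simple mass-balance consistency check before the data enter an LCA. The class names, tolerance, and example masses are hypothetical; this is an assumed illustration of the concept, not the paper's implementation.

```python
# Illustrative object-based inventory check (hypothetical names and values).
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    declared_mass_kg: float
    materials: dict = field(default_factory=dict)  # material -> mass in kg

    def inventory_mass(self) -> float:
        return sum(self.materials.values())

    def mass_balance_ok(self, tolerance: float = 0.02) -> bool:
        """Check that material masses sum to the declared component mass."""
        return abs(self.inventory_mass() - self.declared_mass_kg) \
            <= tolerance * self.declared_mass_kg

# Hypothetical inventory entry for a coil in an air-conditioning unit.
coil = Component("condenser_coil", declared_mass_kg=42.0,
                 materials={"copper": 25.0, "aluminum": 12.0, "steel": 4.0})

if not coil.mass_balance_ok():
    print(f"Potential inventory error in {coil.name}: materials sum to "
          f"{coil.inventory_mass():.1f} kg vs declared {coil.declared_mass_kg:.1f} kg")
```

Analogous checks (energy balances, refrigerant charge totals, and so on) could be attached to each object so that inconsistencies are caught locally rather than after aggregation.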
This paper outlines the design and implementation of Grid-HSI, a Service Oriented Architecture-based Grid application for hyperspectral imaging analysis. Grid-HSI provides users with a transparent interface to access computational resources and remotely perform hyperspectral imaging analysis through a set of Grid services. Grid-HSI is composed of a Portal Grid Interface, a Data Broker, and a set of specialized Grid services. Grid-based applications, unlike other client/server approaches, provide persistence and support potentially transient processes on the web. Our experimental results on Grid-HSI show the suitability of the prototype system for performing hyperspectral imaging analysis efficiently.
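The broker/service pattern named above can be sketched in a few lines of Python. The classes, method names, and the trivial scheduling policy below are hypothetical illustrations of the general pattern, not the actual Grid-HSI or Globus interfaces.

```python
# Purely illustrative broker-and-service pattern (hypothetical API).
from dataclasses import dataclass

@dataclass
class JobRequest:
    image_uri: str   # location of the hyperspectral data cube
    analysis: str    # e.g. "classification" or "unmixing"

class DataBroker:
    """Selects a compute resource and dispatches the request to its service."""
    def __init__(self, services):
        self.services = services  # mapping: resource name -> service callable

    def submit(self, request: JobRequest):
        # Placeholder scheduling: pick the first registered resource.
        resource, service = next(iter(self.services.items()))
        print(f"Dispatching {request.analysis} on {request.image_uri} to {resource}")
        return service(request)

def analysis_service(request: JobRequest):
    # Stand-in for a remote Grid service performing the analysis.
    return {"status": "done", "analysis": request.analysis}

broker = DataBroker({"cluster-1": analysis_service})
result = broker.submit(JobRequest("gsiftp://host/data/scene.cube", "classification"))
```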