Data Center Handbook
eBook - ePub

Data Center Handbook

Plan, Design, Build, and Operations of a Smart Data Center

Hwaiyu Geng, Hwaiyu Geng


About This Book


Written by 59 experts and reviewed by a seasoned technical advisory board, the Data Center Handbook is a thoroughly revised, one-stop resource that clearly explains the fundamentals, advanced technologies, and best practices used in planning, designing, building, and operating a mission-critical, energy-efficient, sustainable data center. This handbook, in its second edition, covers the anatomy, ecosystem, and taxonomy of data centers that enable the Internet of Things and artificial intelligence ecosystems, and encompasses the following:


  • Megatrends, the IoT, artificial intelligence, 5G network, cloud and edge computing
  • Strategic planning forces, location plan, and capacity planning
  • Green design & construction guidelines and best practices
  • Energy demand, conservation, and sustainability strategies
  • Data center financial analysis & risk management


  • Software-defined environment
  • Computing, storage, network resource management
  • Wireless sensor networks in data centers
  • ASHRAE data center guidelines
  • Data center telecommunication cabling, BICSI and TIA 942
  • Rack-level and server-level cooling
  • Corrosion and contamination control
  • Energy saving technologies and server design
  • Microgrid and data centers


  • Data center site selection
  • Architecture design: rack floor plan and facility layout
  • Mechanical design and cooling technologies
  • Electrical design and UPS
  • Fire protection
  • Structural design
  • Reliability engineering
  • Computational fluid dynamics
  • Project management


  • Benchmarking metrics and assessment
  • Data center infrastructure management
  • Data center air management
  • Disaster recovery and business continuity management

The Data Center Handbook: Plan, Design, Build, and Operations of a Smart Data Center belongs on the bookshelf of any professional who works in, with, or around a data center.





The digitalization of our economy requires data centers to keep innovating to meet organizations' demands for connectivity, growth, security, and respect for the environment. Every facet of life is putting increased pressure on data centers to innovate at a rapid pace. Explosive growth of data driven by 5G, the Internet of Things (IoT), and Artificial Intelligence (AI) is changing the way data is stored, managed, and transferred. As this volume grows, data and applications are pulled together, requiring ever more computing and storage resources. The question facing data center designers and operators is how to plan for a future that delivers the security, flexibility, scalability, adaptability, and sustainability needed to support business requirements.
With this explosion of data, companies need to think more carefully and strategically about how and where their data is stored, and the security risks involved in moving data. The sheer volume of data creates additional challenges in protecting it from intrusions. This is probably one of the most important concerns of the industry – how to protect data from being hacked and being compromised in a way that would be extremely damaging to their core business and the trust of their clients.
Traditional data centers must deliver a degree of scalability to accommodate usage needs. With newer technologies and applications coming out daily, it is important to be able to morph the data center into the needs of the business. It is equally important to be able to integrate these technologies in a timely manner that does not compromise the strategic plans of the business. With server racks getting denser every few years, the rest of the facility must be prepared to support an ever increasing power draw. A data center built over the next decade must be expandable to accommodate for future technologies, or risk running out of room for support infrastructure. Server rooms might have more computing power in the same area, but they will also need more power and cooling to match. Institutions are also moving to install advanced applications and workloads related to AI, which requires high‐performance computing. To date, these racks represent a very small percentage of total racks, but they nevertheless can present unfamiliar power and cooling challenges that must be addressed. The increasing interest in direct liquid cooling is in response to high‐performance computing demands.
5G enables a new kind of network designed to connect virtually everyone and everything, including machines, objects, and devices. It will require more bandwidth, faster speeds, and lower latency, and the data center infrastructure must be flexible and adaptable to accommodate these demands. The need to bring computing power closer to the point of connectivity means that end users are driving demand for edge data centers. Analyzing data where it is created, rather than sending it across various networks and data centers, reduces response latency, thereby removing a bottleneck from the decision‐making process. In most cases, these edge data centers will be remotely managed and unstaffed. Machine learning will enable real‐time adjustments to the infrastructure without the need for human interaction.
With data growing exponentially, data centers may be impacted by significant increases in energy usage and carbon footprint. Hyperscalers have realized this and have increasingly used more and more sustainable technologies. This trend will cause others to follow and adopt some of the building technologies and use of renewables for their own data centers. The growing mandate for corporations to shift to a greener energy footprint lays the groundwork for new approaches to data center power.
The rapid innovations occurring inside (edge computing, liquid cooling, etc.) and outside (5G, IoT, etc.) of data centers will require careful and thoughtful analysis to design and operate a data center for the future that serves the strategic imperatives of the business it supports. To help navigate this complex environment of competing forces, this second edition of the Data Center Handbook has been assembled by leaders in industry and academia to share their latest thinking on these issues. This handbook is the most comprehensive guide available to data center practitioners and academics alike.
Roger R. Schmidt, Ph.D.
Member, National Academy of Engineering
Traugott Distinguished Professor, Syracuse University
IBM Fellow Emeritus (Retired)


A key driver of innovation in modern industrial societies in the past two centuries is the application of what researchers call “general purpose technologies,” which have far‐ranging effects on the way the economy produces value. Some important examples include the steam engine, the telegraph, the electric power grid, the internal combustion engine, and most recently, computers and related information and communications technologies (ICTs).
ICTs represent the most powerful general‐purpose technologies humanity has ever created. The pace of innovation across virtually all industries is accelerating, which is a direct result of the application of ICTs to increase efficiency, enhance organizational effectiveness, and reduce costs of manufacturing products. Services provided by data centers enable virtually all ICTs to function better.
This volume presents a comprehensive look at the current state of the data center industry. It is an essential resource for those working in the industry, and for those who want to understand where it is headed.
The importance of the data center industry has led to many misconceptions, the most common of which involves inflated estimates of how much electricity data centers use. The latest credible estimates for global electricity use of data centers are for 2018, from our article in Science Magazine in February 2020 (Masanet et al. 2020).
According to this analysis, data centers used about 0.9% of the world’s electricity consumption in 2018 (down from 1.1% in 2010). Electricity use grew only 6% even as the number of compute instances, data transfers, and total data storage capacity grew to be 6.5 times, 11 times, and 26 times as large in 2018 as each was in 2010, respectively.
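The implied efficiency gains behind these figures can be worked out directly. The sketch below is illustrative arithmetic only (not from the book): it assumes, per the cited Masanet et al. (2020) numbers, that total electricity use grew 6% while compute instances, data transfers, and storage capacity grew 6.5x, 11x, and 26x, and computes the resulting energy intensity per unit of each workload.

```python
# Illustrative arithmetic from the figures cited above (Masanet et al. 2020).
# Assumption: data center electricity use grew 6% (1.06x) from 2010 to 2018
# while each workload grew by the factor shown.

energy_growth = 1.06  # total data center electricity, 2018 relative to 2010

workload_growth = {
    "compute instances": 6.5,
    "data transfers": 11.0,
    "storage capacity": 26.0,
}

for name, growth in workload_growth.items():
    # Energy per unit of workload in 2018, relative to the 2010 level (= 1.0)
    energy_per_unit = energy_growth / growth
    reduction = (1 - energy_per_unit) * 100
    print(f"{name}: energy intensity fell to {energy_per_unit:.2f}x "
          f"of the 2010 level (~{reduction:.0f}% reduction)")
```

For compute instances, for example, energy per instance fell to roughly 1.06 / 6.5 ≈ 0.16 of its 2010 level, an efficiency improvement of about 84% in eight years.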
The industry was able to keep data center electricity use almost flat in absolute terms from 2010 to 2018 because of the adoption of best practices outlined in more detail in this volume. The most consequential of these best practices was the rapid adoption of hyperscale data centers, known colloquially as cloud computing. Computing output and data transfers increased rapidly, but efficiency also increased rapidly, almost completely offsetting growth in demand for computing services.
For those new to the world of data centers and information technology, this lesson is surprising. Even though data centers are increasingly important to the global economy, they don't use a lot of electricity in total, because innovation has rapidly increased their efficiency over time. If the industry aggressively adopts the advanced technologies and practices described in this volume, they needn’t use a lot of electricity in the future, either.
I hope analysts and practitioners around the world find this volume useful. I surely will!
Jonathan Koomey, Ph.D.,
President, Koomey Analytics
Bay Area, California


The data center industry changes faster than any publication can keep up with. So why the “Data Center Handbook”? There are many reasons, but three stand out. First, the fundamentals have not changed. Computing equipment may have transformed dramatically in processing power and form factor since the first mainframes appeared, but it is still housed in secure rooms, it still uses electricity, it still produces heat, it must still be cooled, it must still be protected from fire, it must still be connected to its users, and it must still be managed by humans who possess an unusual range of knowledge and an incredible ability to adapt to fast-changing requirements and conditions. Second, new people are constantly entering what is, to them, a brave new world. They benefit from having grown up with a computer (i.e., a “smart phone”) in their hands, but they are missing the contextual background of how it came to be and what is needed to keep it working. Whether they are engineers designing their first enterprise, edge computing, hyperscale, or liquid-cooled facility, IT professionals given their first facility or system management assignment within it, or students trying to grasp the enormity of this industry, having a single reference book is far more efficient than plowing through the hundreds of articles published every month in multiple places. Third, and perhaps even more valuable in an industry that changes so rapidly, is having a volume that also directs you to the best industry resources when more or newer information is needed.
The world can no longer function without the computing industry. It's not regulated like gas and electric, but it's as critical as any utility, making it even more important for the IT industry to maintain itself reliably. When IT services fail, we are even more lost than in a power outage. We can use candles to see, and perhaps light a fireplace to stay warm. We can even make our own entertainment! But if we can't get critical news, can't pay a bill on time, or can't even make a critical phone call, the world as we now know it comes to a standstill. And that's just the personal side. Reliable, flexible, and highly adaptable computing facilities are now necessary to our very existence. Businesses have gone bankrupt after computing failures. In health care and public safety, the availability of those systems can literally spell life or death.
In this book you will find chapters on virtually every topic you could encounter in designing and operating a data center – each chapter written by a recognized expert in the field, highly experienced in the challenges, complexities, and eccentricities of data center systems and their supporting infrastructures. Each section has been brought up to date from the previous edition of this book as of the time of publication. But as this book was being assembled, the COVID-19 pandemic occurred, putting unprecedented demands on computing systems overnight. The industry reacted, proving beyond question its ability to respond to a crisis, adapt its operating practices to unusual conditions, and meet the inordinate demands that quickly appeared from every industry, government, and individual. A version of the famous Niels Bohr quote goes, “An expert is one who, through his own painful experience, has learned all the mistakes in a given narrow field.” Adherence to the principles and practices set down by the authors of this book, in most cases gained over decades of their own personal and often painful experience, enabled the computing industry to respond to that crisis. It will be the continued adherence to those principles, honed as the industry continues to change and mature, that will empower it to respond to the next critical situation. The industry should be grateful that the knowledge of so many experts has been assembled into one volume from which everyone in this industry can learn.
Robert E. McFarlane
Principal, Shen Milsom & Wilke, LLC
Adjunct Faculty – Marist College, Poughkeepsie, NY

Table of contents