Many businesses operate outside of safe capacity thresholds with little or no room to expand. According to IDC, the average data center is nine years old, yet Gartner states that any site more than seven years old is obsolete. Overcrowded or obsolete data centers create a roadblock for growing organizations, and building a new data center is sometimes the only solution. While speed to market is critical to success, companies that fail to assess their business needs properly will create dead-end data centers that will not deliver uptime performance goals or meet future business needs.
How can you avoid making major mistakes when entering the build and expansion world?
The key lies in the methodology you use to design and build your data center facilities. All too often, companies base their plans on watts per square foot, cost to build per square foot, and tier level—criteria that may be misaligned with their overall business goals and risk profile. Poor planning leads to poor use of valuable capital and can increase operational expense.
Many organizations get overwhelmed, focusing on “speeds and feeds,” green initiatives, concurrent maintainability, power usage effectiveness (PUE) and Leadership in Energy and Environmental Design (LEED) certification. All of these criteria are critical in the decision making process. However, the details often overshadow the big picture. Most companies miss the business opportunity in a data center expansion—an expansion driven by a holistic approach.
While there are numerous consultants in the field to help you find your way, assessing ideas and input can be overwhelming. Organizations with critical capacity requirements in the 1-3 megawatt range may fall into this risk category. The critical nature of mid-size users is no less important than mega users; however internal technical expertise to drive proper expansion plans may be limited. The result is information overload from multiple sources, leading to confusion and poor decision making.
“Data center owners have so many problems right now. Their assets are mission critical, but they are out of control. Power consumption is costing them a fortune. They can’t cool what they have got, and they run the risk of a catastrophic outage. And if they make an investment, by the time it is built, it is already out of date.” – Stanford Group
Mistake 1: Failure to take total cost of ownership (TCO) into account during the data center design phase
Focusing solely on capital cost is an easy trap; the dollars required to build or expand can be staggering. Capital cost modeling is critical, but if you have not included the costs to operate and maintain (OpEx) your business-critical facilities infrastructure, you have severely shortchanged the overall process of effective business planning.
There are two critical components required to build data center OpEx cost modeling: the maintenance costs and the operating costs. The maintenance costs are those associated with the proper maintenance of all critical facility support infrastructure. They include, but are not limited to, OEM equipment maintenance contracts, data center cleaning expenses, and subcontractor costs for remedial repairs and upgrades. The operating costs are those associated with daily operation and on-site personnel. They include, but are not limited to, staffing levels, personnel training and safety programs, the creation of site-specific operations documentation, capacity management, and QA/QC policies and procedures. If you have failed to calculate a 3- to 7-year operations and maintenance (O&M) expense budget, you cannot build a return on investment (ROI) model that supports smart business decisions.
If you are planning to build or expand a business-critical data center, your best approach is to focus on three basic TCO parameters: 1) capital expense, 2) operations and maintenance expense, and 3) energy costs. Leave any component out, and you run the risk of creating a model that does not correctly align your organization’s risk profile with its business expenditure profile. If you are deciding whether to “buy” (use colocation/hosting) or build internally, the risk of not taking this TCO approach is magnified significantly.
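To make the three-parameter model concrete, the sketch below rolls capital expense, O&M expense, and energy cost into a single multi-year figure. It is a minimal illustration only; the dollar amounts, load, PUE, and seven-year horizon are assumptions invented for the example, not benchmarks.

```python
# Minimal TCO sketch: capital + O&M + energy over a planning horizon.
# All figures below are illustrative assumptions, not industry benchmarks.

def total_cost_of_ownership(capex, annual_maintenance, annual_operations,
                            it_load_kw, pue, energy_cost_per_kwh, years):
    """Roll up the three TCO parameters over the planning horizon."""
    annual_energy_kwh = it_load_kw * pue * 24 * 365        # total facility energy
    annual_energy_cost = annual_energy_kwh * energy_cost_per_kwh
    annual_opex = annual_maintenance + annual_operations
    return capex + years * (annual_opex + annual_energy_cost)

# Hypothetical 1 MW build, 7-year horizon
tco = total_cost_of_ownership(
    capex=12_000_000,            # construction + equipment
    annual_maintenance=600_000,  # OEM contracts, cleaning, remedial repairs
    annual_operations=900_000,   # staffing, training, documentation, QA/QC
    it_load_kw=1_000,
    pue=1.6,
    energy_cost_per_kwh=0.10,
    years=7,
)
print(f"7-year TCO: ${tco:,.0f}")
```

Even with invented inputs, the point stands: leaving any one of the three terms out of the model materially changes the buy-versus-build comparison.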
Mistake 2: Poor cost-to-build estimating
Another common mistake is the estimate itself. Financial requests made to boards of directors for capital to expand or build a data center are often too low and result in failure. The flow of decision-making looks something like this:
• The capital request is made and tentatively approved. Financial resources are allocated to investigate, capture and create a true budget.
• Time is spent driving the above budget process.
• The findings reveal that the initial budget request is too low.
• The project is delayed. Careers are impacted, and the ability to deliver service to internal and external clients and prospects suffers.
• This takes you full circle, back to Mistake 1: failure to take the TCO approach and build a holistic financial model.
Cost-to-build issues can be easily avoided, but your estimates are destined to fail if you fall into Mistake 3.
“Organizations with critical capacity requirements in the 1-3 megawatt range may fall into this risk category” – Mike Manos, Industry Expert
Mistake 3: Improperly setting design criteria & performance characteristics
There are two missteps that can put your organization in the overspend death spiral. First, everyone wants a Tier 3 design, but not everyone needs one. Second, most visions of kilowatts per square foot or per rack are not supported by actual business requirements. Too often, the “must have 300 watts per square foot” approach simply is not justified. Don’t overbuild; it is a waste of capital, and higher-tier facilities also carry higher O&M and energy costs, which throws the foundation of the business model and ROI off-base. Establish the right design criteria and performance characteristics first, then build your capital and operational expense models around them. Make sure your design criteria and financial model are set before you visit the board of directors. For more information on design parameters, see White Paper 142, Data Center Projects: System Planning.
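As a simple illustration of why a density target should be derived rather than assumed, the hedged sketch below back-calculates watts per square foot from a hypothetical rack plan. Every input (rack count, average kW per rack, diversity factor, floor area) is invented for the example.

```python
# Back-calculate power density from an assumed rack plan instead of
# starting from a "must have 300 W/sq ft" target. Inputs are hypothetical.

racks = 200                  # planned IT racks
avg_kw_per_rack = 5.0        # measured/forecast average, not nameplate
diversity = 0.8              # fraction of average load present simultaneously
raised_floor_sqft = 10_000   # planned white space

critical_load_kw = racks * avg_kw_per_rack * diversity
watts_per_sqft = critical_load_kw * 1_000 / raised_floor_sqft

print(f"Critical load: {critical_load_kw:.0f} kW")
print(f"Implied density: {watts_per_sqft:.0f} W/sq ft")
```

In this invented case the business requirement supports roughly 80 W per square foot, far below 300, so building to the higher figure would overspend on capital, O&M, and energy.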
Mistake 4: Selecting a site before design criteria are in place
Organizations often start searching for the perfect space to build before having their design criteria and performance characteristics in place. Without this vital information, it doesn’t make sense to spend time visiting or reviewing multiple sites. This typical “cart before the horse” scenario happens most frequently with users in the 1-3 megawatt range. While mega users are usually experts in this arena, and take into consideration power availability and cost, fiber, and geographic issues such as earthquakes, tornadoes and flood plains, baseline users often have business models that dictate a need to build or renovate a shell in their core region of business. The problem with selecting a site prematurely or based on narrow geography is that the site often cannot meet the design requirements. For instance, having your data center two floors below your high-rise office or even two blocks away is convenient, but business-critical data centers require a long list of site criteria that usually cannot be met in a multitenant space without significantly higher build costs or limits on future expansion. White Paper 81, Site Selection for Mission Critical Facilities, provides more information to help avoid this big mistake.
Some organizations base their site search criteria on the amount of raised floor required to house their critical IT infrastructure. This can lead to the next big mistake.
“While the physical design of a datacenter is critical, how a site is operated and maintained plays a more significant role in achieving site availability” – The Uptime Institute
Mistake 5: Space planning before the data center design criteria is in place
The amount of space to house the data center facility infrastructure components can be significant. In the most robust of systems, the ratio of raised floor to support gear could be as high as 1 to 1. Many organizations base their space requirements on IT equipment alone. However, mechanical and electrical equipment require a significant amount of space. In addition, many organizations overlook the square footage required to house office space, equipment yards and IT equipment staging areas. Therefore, it is absolutely critical to determine your design criteria before you develop your space plan. Without it, there is no way to conceptualize the total space required to meet your overall needs.
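A hedged sketch of the point: once the design criteria fix the support-space ratio, the total footprint can be estimated from the raised-floor requirement plus the categories often overlooked above. The ratios and areas below are assumptions chosen purely for illustration.

```python
# Rough space plan driven by design criteria. Ratios are illustrative only.

raised_floor_sqft = 10_000       # IT white space from the rack plan
support_ratio = 1.0              # MEP support space; up to ~1:1 in robust designs
office_sqft = 1_500              # assumed operations/office space
staging_sqft = 1_000             # assumed IT staging and storage
equipment_yard_sqft = 2_500      # assumed generators, chillers, fuel yard

total_sqft = (raised_floor_sqft * (1 + support_ratio)
              + office_sqft + staging_sqft + equipment_yard_sqft)

print(f"Total program area: {total_sqft:,.0f} sq ft "
      f"({total_sqft / raised_floor_sqft:.1f}x the raised floor)")
```

Under these assumptions the total program is roughly two and a half times the raised floor, which is why a space plan built from IT equipment alone comes up short.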
Mistake 6: Designing into a dead-end
The data center industry has done a good job of promoting the importance of modular designs. However, using a modular approach doesn’t guarantee success. Modular approaches add “chunks” of additional infrastructure equipment just in time to preserve capital, yet organizations still dead-end themselves by using the wrong crystal ball when guessing about future needs. Everything can and will change. Designs that are both modular and flexible are the key to long-term success. Even the best kilowatt-per-square-foot or per-rack planning can be made obsolete by consolidation, exponential business growth via acquisition, or a sharp turn to an unplanned high-density footprint.
Electrically, make sure that your design includes the ability to add UPS capacity to existing modules without an outage, and design your input and output distribution systems to accommodate any future change in your base build criteria. The cost to oversize distribution for future capacity needs is not significant in your overall TCO modeling.
Mechanically, most users can meet their cooling requirements with conventional perimeter cooling, given proper floor height and hot/cold aisle planning. However, one high-density rollout can change everything. Make sure your core design allows for the flexible (uninterrupted) implementation of custom in-rack/in-row cooling solutions.
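To illustrate the capital-preservation argument behind just-in-time “chunks,” the sketch below compares a full day-one build against a modular deployment schedule. The module size, module cost, and load-growth profile are all invented for the example and say nothing about any particular design.

```python
# Illustrative comparison of a day-1 build vs. modular "just-in-time" chunks.
# Module size, cost, and demand growth are invented to show capital timing.

module_kw = 500
cost_per_module = 3_000_000
modules_total = 4                              # 2 MW ultimate capacity
demand_kw = [400, 800, 1_200, 1_600, 2_000]    # assumed load by year

def modules_needed(load_kw):
    return -(-load_kw // module_kw)            # ceiling division

day_one_spend = modules_total * cost_per_module
deployed = 0
modular_schedule = []                          # (year, capital spent that year)
for year, load in enumerate(demand_kw):
    need = modules_needed(load)
    if need > deployed:
        modular_schedule.append((year, (need - deployed) * cost_per_module))
        deployed = need

print(f"Day-1 build: ${day_one_spend:,.0f} spent in year 0")
print("Modular build spend by year:", modular_schedule)
```

Deferring three of the four modules preserves capital, but only if distribution, space, and cooling were sized up front so the later modules can be added without an outage, which is exactly the flexibility the section calls for.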
Mistake 7: Misunderstanding PUE
Power Usage Effectiveness (PUE) is an effective tool to drive and measure efficiency. However, broad energy efficiency claims may lead to significant misunderstanding. In nearly all situations for new builds and expansions, there is a capital cost related to gaining lower PUE. Many times, organizations set a PUE goal with all the proper intentions but the calculation does not take into account all factors that should be considered. You need to fully understand what the ROI is on capital expenses to reach your goals. You need to ask yourself, what is the TCO relative to the target PUE?
There are many ways to illustrate and understand the breakdown of the balance between PUE, ROI and TCO. Here are three cautionary examples that represent a failure or misunderstanding:
• What was the “design criteria day” for the calculation? Was it calculated or measured on the “perfect day”? Or, was the calculation based on a yearly average?
• Was the calculation based on a fully loaded or partially loaded data center operating condition? All equipment efficiency curves change based on load profiles. PUEs change daily, if not hourly, in true operating conditions.
• Finally, there is an ongoing debate regarding the efficiencies of water-cooled versus air-cooled chillers. Each application has multiple options for “free cooling” or “economizer” modes to lower PUE. When making your TCO/ROI business decision, you must ask yourself: what is the cost of the make-up water and of the water treatment maintenance required by the water-cooled solution? Recognize that a typical 2 megawatt data center using water-cooled towers will require 50,000 to 60,000 gallons of make-up water per day.
Use PUE to your advantage to meet your overall business goals, but be cautious. Try not to get trapped into misusing the calculation formula to justify overall capital expense and operating expense budgets.
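To ground the cautionary bullets above, the sketch below contrasts a “perfect day” PUE with an annualized, partially loaded one, and adds the make-up water cost for the water-cooled case. The seasonal load profile, PUE values, and water rate are assumptions for illustration; only the 50,000-60,000 gallon-per-day figure comes from the text.

```python
# Annualized PUE vs. "perfect day" PUE, plus make-up water cost for a
# water-cooled 2 MW site. Load profile, PUE curve, and water price are assumed.

it_capacity_kw = 2_000
hours_per_year = 8_760

# Assumed seasonal operating points: (fraction of year, IT load fraction, PUE)
operating_points = [
    (0.25, 0.50, 1.45),   # cool weather, partial load
    (0.50, 0.60, 1.60),
    (0.25, 0.70, 1.85),   # hot weather, higher load
]

it_kwh = sum(f * hours_per_year * it_capacity_kw * load
             for f, load, _ in operating_points)
facility_kwh = sum(f * hours_per_year * it_capacity_kw * load * pue
                   for f, load, pue in operating_points)

best_pue = min(pue for _, _, pue in operating_points)
print(f"'Perfect day' PUE: {best_pue:.2f}")
print(f"Annualized PUE:    {facility_kwh / it_kwh:.2f}")

# Make-up water for cooling towers (daily figure cited in the text)
gallons_per_day = 55_000
water_cost_per_kgal = 6.00            # assumed combined water/sewer rate
annual_water_cost = gallons_per_day * 365 / 1_000 * water_cost_per_kgal
print(f"Annual make-up water cost: ${annual_water_cost:,.0f}")
```

The gap between the perfect-day number and the annualized number, and the six-figure annual water bill under these assumptions, are the kinds of factors that a headline PUE claim can hide from the TCO/ROI decision.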
Mistake 8: Misunderstanding LEED certification
To date, the U.S. Green Building Council (USGBC) has not set LEED criteria specific to data centers. However, certification can be obtained using the Commercial Interiors Checklist. There are three basic missteps that take place:
• Failure to develop a base understanding of the qualifying criteria. This can be remedied by reviewing the above-referenced checklist.
• Pursuing LEED certification as an afterthought. Obtaining LEED certification begins at the design concept and ends with a formal certification after project completion. Engage a qualified LEED professional or consulting firm at the start of the planning process.
• Failure to account for the cost of certification. There will be expenses related to receiving certification, and failing to take them into account will impact your TCO and business decision planning processes.
Mistake 9: Overcomplicated designs
As stated earlier, simple is better. Regardless of the target tier rating you have chosen, there are dozens of ways to design an effective system. Too often, redundancy goals drive too much complexity. Add in the multiple approaches to building a modular system and things get complicated fast.
When engaging internally or with your chosen consultant, the number one goal should be to keep it simple. Why?
• Complexity often means more equipment and components, and more parts equal more failure points (see the availability sketch after this list).
• Human error. The statistics are varied, but consistent: most data center outages are due to human error. Complex systems increase operational risk.
• Cost. Simple systems are less costly to build.
• Operations and maintenance costs. Again, complexity often means more equipment and components. Incremental O&M costs can increase exponentially.
• Design with the end in mind. Many designs look good on paper. It is easy for you or your consultant to justify the chosen configuration and resulting uptime potential. However, if the design does not consider the “maintainability” factor when operating or servicing, the system’s uptime and personnel safety will be compromised.
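The “more parts equal more failure points” argument can be made concrete with a series-availability calculation. Assuming, purely for illustration, that each component in the critical path is 99.99% available and that failures are independent, the sketch below shows how path availability and expected downtime degrade as the component count grows.

```python
# Series availability: every added component in the critical path multiplies in.
# The 99.99% per-component figure and independence assumption are illustrative.

MINUTES_PER_YEAR = 525_600
component_availability = 0.9999

for components in (5, 10, 20, 40):
    path_availability = component_availability ** components
    downtime_min = (1 - path_availability) * MINUTES_PER_YEAR
    print(f"{components:>2} components: availability {path_availability:.5f}, "
          f"~{downtime_min:.0f} min/yr expected downtime")
```

Real designs add redundancy to offset this effect, but each layer of redundancy adds equipment to maintain and more opportunity for human error, which is exactly the trade-off the list describes.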
Although many data center designs, builds and expansions result in failure, yours doesn’t have to. By avoiding the top 9 mistakes outlined in this paper, you will be well on your way to achieving success. In summary:
1. Start with a Total Cost of Ownership approach
• Evaluate your risk profile against your business expense profile
• Create a model that incorporates CapEx, OpEx and energy costs
2. Determine your design criteria and performance characteristics
• Base these criteria on your risk profile and business goals
• Allow those criteria to truly determine the design, including tier level, location and space plan—not the other way around
3. Design with simplicity and flexibility
• Use a design that will meet your uptime requirements, but will also keep costs low during construction and throughout operation—simplicity is key.
• Accommodate unplanned expansion by incorporating flexibility into the design
4. If PUE and LEED are part of your criteria, become educated on the common misunderstandings and expenses associated with each.
Through proper planning using the TCO approach, you can create a data center facility that meets your organization’s performance goals and business needs today and tomorrow.
About the Authors
Mike M. Hagan joined Schneider Electric in 2011, shortly after its acquisition of Lee Technologies. Prior to that, Mr. Hagan had been with Lee Technologies since 1988.
As a 25-year industry veteran, Mr. Hagan brings a customer-centric approach to sales and marketing that focuses on developing business strategies with the right tactical solutions. He is committed to data center planning founded on core business principles, such as gaining a competitive edge, lowering cost to operate, preserving capital, expanding markets and increasing profits.
Mr. Hagan is the author of numerous white papers and articles for industry periodicals and is a frequent speaker at industry events including Tier1, 7x24 Exchange, Data Center Dynamics, AFCOM, and CoreNet Global. Prior to joining Lee Technologies, Mr. Hagan held senior management and sales positions with Liebert, Hitachi, SunGard and Danaher Corporation. He holds a BS in manufacturing engineering from Miami University in Oxford, Ohio.
John Lusky is the Director of Electrical Engineering for the Design/Build Division of the Service Group at Lee Technologies. His current responsibilities include the estimation and design of critical power systems for data center environments.
With more than 14 years of experience in the design, construction, integration, and installation of industrial facility controls and critical power systems, John continues to challenge the status quo in the engineering field. His extensive background in process control and industrial automation has given him an in-depth understanding of various control systems and insight into the interactions present in highly redundant systems spanning multiple disciplines in a critical environment. John has developed a number of extremely robust but cost-effective solutions that allow systems to be expanded in modules as the load increases.
John’s intricate understanding of construction and maintenance activities in data center environments results in minimizing problems during construction and easing maintenance activities in the future. He works closely with the customers to determine their specific needs without trying to fit their needs into an existing design. In addition, he regularly works with the customer to help them understand total cost of ownership modeling, site selection, and PUE/LEED initiatives.
Tuan Hoang, P.E. is the Managing Engineer at Lee Technologies, leading the company’s design and engineering team in developing solutions for data centers. Tuan’s responsibilities include the estimation and design of various critical HVAC systems, including computer-grade air conditioning systems, chillers, towers and humidification. Prior to joining Lee Technologies in 2005, Tuan designed the vital cooling and ventilation systems for US Navy aircraft carriers with Northrop Grumman, as well as with an MEP firm.
As a 10-year veteran in critical cooling systems, Tuan brings a diverse approach to critical system design to the data center industry. His experience spans facilities assessments, calculations of projected future growth, and solutions that allow smooth transitions between development stages.
Scott Walsh, P.E., LEED A.P. is a LEED-accredited professional engineer for the Design/Build Division of the Service Group at Lee Technologies. Scott’s current responsibilities include field investigation and verification; equipment selection and specification; load calculations; verification of design documents for code compliance; drafting; production of working construction documents; and field coordination.
With more than 7 years of experience in the data center industry, Scott’s expertise includes mechanical design, requirements analysis, LEED project planning, strategic project planning, engineer development, and project management of data centers. He has specialized experience in using PUE to develop designs for a wide range of LEED data center projects.