The solution was cost-effective, seamlessly implemented, and had no impact on uptime. The space was transformed into an innovative, cutting-edge data center facility.
CHALLENGE
Dartmouth’s focus on providing high-level academics and research in a modern facility means the college generates significant amounts of data that must be processed in real time and stored securely, on hardware that is adequately cooled. The sheer volume of data, and the expectation that it be accessible quickly and seamlessly, created a challenge for Dartmouth College’s Systems Administration and Data Center Operations team and its core networking services department. The IT team was trying to support new, cutting-edge technology with a legacy data center. The perimeter CRAC units were aging and, because of a shallow raised floor, could not support the density requirements of the data center. As a result, IT hardware had to be spread throughout the data center so that no single rack exceeded the density the cooling system could handle. This strategy left the entire footprint of the data center occupied even though individual racks were only minimally populated.
To address this challenge, Dartmouth had hired another firm to assess how the facility could be upgraded. The resulting study recommended unacceptable disruptions to the facility, such as raising the height of the raised floor, and called for an investment of more than three million dollars. Because of the cost and the anticipated disruption, the recommended plan was not an option for the college.
The IT team needed to find a more practical solution to improve the reliability and capacity in the space without causing downtime or disruption.
SOLUTION
Leading Edge Design Group was brought in to provide an alternative perspective on the project. Their first step was an assessment of the facility to determine its specifications and needs. Upon completing the assessment, LEDG presented Dartmouth with a strategic plan that would more than double density per rack and increase redundancy, all while reclaiming valuable data center floor space. More importantly, the proposed cost was less than half that of the plan recommended in the earlier study.
Leading Edge Design Group was hired to design and implement its proposed data center plan, with the commitment that the data center would experience no downtime and would keep running throughout the construction process.
The foundation of Leading Edge Design Group’s plan was a new high-density, POD-based data center with in-row chilled-water cooling units designed for N+1 redundancy. The in-row units were served by the campus chilled water plant. To ensure adequate cooling during plant interruptions and maintenance, LEDG analyzed historical chilled water supply data and performed CFD analysis to validate that the IT equipment would remain within its cooling set points even at higher chilled water temperatures. For the event of a chilled water failure, a backup system using city water was designed. Two new UPS systems were installed to provide N+1 redundancy in power distribution, and 10Gb copper and fiber were run to every rack to enable high-density computing workloads.
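Although the engineering analysis behind this design (historical chilled water data plus CFD modeling) is far more detailed, the underlying N+1 question is simple: can each POD still meet its heat load with one in-row unit out of service, even when the chilled water supply runs warm? The sketch below illustrates that kind of sizing check in Python; the loads, unit capacities, design temperature, and derating rate are hypothetical placeholders for illustration only, not values from the Dartmouth project.

```python
# Illustrative N+1 cooling check. All figures below are hypothetical; the actual
# Dartmouth loads, unit capacities, and chilled water conditions are not published here.

def derated_capacity_kw(nominal_kw: float, cw_supply_c: float,
                        design_cw_c: float = 7.0, derate_per_c: float = 0.03) -> float:
    """Approximate an in-row unit's sensible capacity, reduced by a fixed
    percentage for every degree the chilled water supply runs above design."""
    degrees_above_design = max(0.0, cw_supply_c - design_cw_c)
    return nominal_kw * max(0.0, 1.0 - derate_per_c * degrees_above_design)

def n_plus_1_ok(pod_load_kw: float, units: int, unit_nominal_kw: float,
                cw_supply_c: float) -> bool:
    """N+1 redundancy: the POD load must still be covered with one unit offline."""
    usable_units = units - 1
    capacity = usable_units * derated_capacity_kw(unit_nominal_kw, cw_supply_c)
    return capacity >= pod_load_kw

if __name__ == "__main__":
    # Hypothetical POD: 60 kW of IT load served by four 25 kW in-row units.
    for supply_c in (7.0, 10.0, 13.0, 16.0):
        ok = n_plus_1_ok(pod_load_kw=60.0, units=4, unit_nominal_kw=25.0,
                         cw_supply_c=supply_c)
        print(f"CW supply {supply_c:>4.1f} C -> N+1 cooling {'OK' if ok else 'SHORT'}")
```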
Dartmouth College required a flexible solution that would let it scale quickly to meet demands ranging from compute-heavy scientific research to virtual environments for teaching and administration.
LEDG’s plan was not only cost-effective but also seamlessly implemented, with no impact on uptime. Using the POD strategy, LEDG coordinated the relocation of existing racks and equipment to make room for the new high-density PODs while keeping data center services running throughout construction. Ultimately, the space was transformed into an innovative, cutting-edge data center facility that can adapt as the institution evolves, without heavy capital expenditure or downtime.