Data centres pack a lot of cool technology in them. That technology makes them hot. Literally hot, considering the amount of heat it dissipates. Now, combine that with a cooling crisis and you truly have a recipe for disaster.
(Actually, 1.5 hours isn’t all that fast. We ran a test at a new data centre where a simulated chilled water supply failure brought rack temperatures from a normal 22 degrees Celsius to 50 degrees Celsius within 2 minutes.)
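To give a feel for why the air can heat up that quickly, here is a rough back-of-envelope sketch. The IT load and room air volume below are my own illustrative assumptions, not figures from our site, and the estimate ignores the thermal mass of the racks, walls and floor; once the chilled water stops, the IT load is dumping its heat into a surprisingly small mass of air.

# Back-of-envelope estimate of how fast room air heats up when cooling fails.
# All load and room-size figures are illustrative assumptions.

AIR_DENSITY = 1.2         # kg/m^3, approximate for air at ~20-25 degrees C
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def heat_up_time(it_load_kw: float, room_volume_m3: float, delta_t: float) -> float:
    """Seconds for the room air to rise by delta_t kelvin, assuming all the IT
    heat goes into the air and ignoring other thermal mass in the room."""
    air_mass = AIR_DENSITY * room_volume_m3          # kg of air in the room
    thermal_capacity = air_mass * AIR_SPECIFIC_HEAT  # joules per kelvin
    power_w = it_load_kw * 1000.0
    return thermal_capacity * delta_t / power_w

# Assumed example: 500 kW of IT load in 2,000 m^3 of air, rising from 22 to 50 degrees C
seconds = heat_up_time(it_load_kw=500, room_volume_m3=2000, delta_t=50 - 22)
print(f"~{seconds / 60:.1f} minutes to climb 28 degrees")  # roughly 2 minutes

Even with generous assumptions, the air-only heat-up time comes out in the order of minutes, which is consistent with what we measured.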
The heat in the room was tremendous. All the variable-speed fans in the servers were running at full speed, creating such a din that you literally had to shout to be heard. Metal parts such as door handles, the turnstile and grilles were actually hot to the touch. The entire data centre was like a walk-in oven!
It took about 40 minutes to bring the room temperature back to normal levels.
We have been plagued by a spate of cooling problems, with the same failure scenario recurring 3 times within 5 days. The first time, a bunch of our co-located customers’ servers died, and to us it was like “wow, how interesting”. The second time it happened, one of my own network servers shut down, and I thought “OK, this is getting annoying”.
Now, the third time is the worst yet. The new record-high temperatures brought about the widespread shutdown of much of our hardware. Damn, we have a serious problem.
I’ve gone through a few data centre projects. One thing I’ve learnt: electricity and UPSes are simple… it is cooling that is complicated. Just for interest’s sake, our latest data centre project brings chilled water right into the IT server area, directly into rack-based cooling systems. These are Rittal Liquid Cooling Packages (LCP), essentially standard 19″ equipment racks with an aircon attached at the side. Here, we designed for high-density racks with up to 20 kW of load per rack. I’ll blog more about this another time.
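To give a sense of what a 20 kW rack means on the water side, here is a minimal sizing sketch. The 6 K water temperature rise across the cooling coil is my own assumption for illustration, not the actual LCP design figure.

# Rough sizing sketch: chilled water flow needed to carry away one rack's heat load.
# The 6 K water-side temperature rise is an assumed figure, not a design value.

WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)
WATER_DENSITY = 1000        # kg/m^3

def chilled_water_flow_lpm(heat_load_kw: float, delta_t_water: float) -> float:
    """Litres per minute of chilled water needed to absorb heat_load_kw
    with a delta_t_water kelvin rise across the cooling coil."""
    mass_flow = heat_load_kw * 1000.0 / (WATER_SPECIFIC_HEAT * delta_t_water)  # kg/s
    volume_flow = mass_flow / WATER_DENSITY                                    # m^3/s
    return volume_flow * 1000 * 60                                             # L/min

# Assumed example: a 20 kW rack with a 6 K water-side temperature rise
print(f"{chilled_water_flow_lpm(20, 6):.0f} L/min per rack")  # roughly 48 L/min

Under those assumptions, each fully loaded rack needs on the order of 48 litres of chilled water per minute, which is part of why the piping, pumps and failure modes get complicated fast.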