How Do You Determine the Right Scale for Data Center Deployment?

Mega data centers are here to stay, but how and where we build them is changing. “10 years ago, 20MW was huge,” Yondr chief development officer Pete Jones told DCD. “If someone offered you 20MW, you would have bought a Ferrari before you’d have done anything else.” In just a few years, expectations have expanded massively – with 100MW+ data centers dotting the US countryside and growing in rural regions in the Nordics.

“There’s a certain complexity when you start to scale that isn’t just linearly proportional to the number of megawatts,” Jones warned, noting that the bigger you grow the harder it gets. “Your burden goes up, and if things go wrong at scale the consequences are so much larger – you need to have a much more robust, thick-skinned leadership team for these projects.”

Still, with hyperscalers now more than a decade into their cloud push, the process of building "these large scale things in the middle of nowhere is a pretty well-oiled machine," Jones noted, admitting that uptake of the company's hyperscale-focused HyperBloc (150-300MW) has been "a hell of a lot lower than MetroBloc" (40-150MW).

Google’s regional director of EMEA data center infrastructure delivery, Paul Henry, concurred. The company knows how to build huge campuses, he said, but is now focused on bringing costs “as close to the raw input cost as possible.”

Take cement – "at some level, you can't get it any cheaper; same for steel," he said. "The manufacturers that build UPSs and generators, at some point they're getting down to really razor-thin margins. The biggest builders have done a good job of getting really efficient, but you have to deliver faster, cheaper, and so forth." To pull this off, the company is in the midst of changing how it designs and builds its facilities, big and small.

Historically, every data center it has built has been different, based on the cutting-edge tech and ideas of the time. “It’s very difficult to shorten our lead time, and be able to be best in class on schedule and cost delivery when we have that continuous change,” Henry explained.

“We are [now] standardizing not only our design, but our overall execution strategy, as well as developing all of our systems into a series of products that are built into an execution strategy that is really a kit of parts,” he explained.

This standardized system “takes a lot of design work on the front end to build a modularization strategy, rather than stick build in the field,” Henry said. “We’ve done that – in our new generation of data center design, we’re actually looking to take about 50 percent of our job hours off of the construction site, and move it into manufacturing facilities.”

Before breaking ground, Google creates a work package defining the entire bill of materials for a scope of work, including job hours and crew size, as well as component cost. “So very much akin to the Ikea strategy,” he said. “It’s all been pre-defined.” The changes have helped Google bring construction time down from 22 months to less than 18 months. It hopes to squeeze that further, down to just 12 months – reducing cost and making it easier to predict demand.
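The work-package idea described above – a pre-defined bill of materials plus labor estimates per scope of work – can be sketched as a simple data structure. This is a hypothetical illustration, not Google's actual tooling; all component names, quantities, and costs are invented, and the 50 percent off-site labor share mirrors the target stated earlier in the article.

```python
from dataclasses import dataclass, field

@dataclass
class LineItem:
    """One entry in a work package's bill of materials (illustrative)."""
    component: str
    quantity: int
    unit_cost: float  # USD per unit

    @property
    def cost(self) -> float:
        return self.quantity * self.unit_cost

@dataclass
class WorkPackage:
    """A pre-defined scope of work: materials, crew, and job hours."""
    scope: str
    crew_size: int
    job_hours: float          # total labor hours for this scope
    offsite_fraction: float   # share of hours moved into manufacturing
    materials: list[LineItem] = field(default_factory=list)

    @property
    def material_cost(self) -> float:
        return sum(item.cost for item in self.materials)

    @property
    def onsite_hours(self) -> float:
        # Hours remaining on the construction site after modularization
        return self.job_hours * (1 - self.offsite_fraction)

# Invented example scope, with ~50% of labor shifted off-site
package = WorkPackage(
    scope="electrical room",
    crew_size=12,
    job_hours=8_000,
    offsite_fraction=0.5,
    materials=[
        LineItem("UPS module", 4, 120_000.0),
        LineItem("switchgear lineup", 2, 250_000.0),
    ],
)

print(f"Material cost: ${package.material_cost:,.0f}")  # $980,000
print(f"On-site hours: {package.onsite_hours:,.0f}")    # 4,000
```

Defining the package this way makes the "Ikea strategy" concrete: cost, crew, and hours are fixed before ground is broken, so schedule compression shows up directly as a smaller on-site-hours number.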
