Green Computing and the Smart Shed

It’s been said before. The microprocessor is not trying to look beautiful.
BECAUSE OF THIS, microprocessors have had the exponential performance increases Moore’s Law describes.

Computer scientists are now conceiving of exascale computing systems capable of at least one exaflops, which is one thousand petaflops or one quintillion (1,000,000,000,000,000,000) floating point operations per second. One of the largest challenges they face is power consumption. Here’s a link to a paper discussing this – it’s interesting. Green computing aims to develop NEW ARCHITECTURES that reduce power consumption whilst still delivering high performance. This is a NOBLE ENDEAVOUR.
Freedom from the constraints of visual aesthetics has already enabled phenomenal increases in microprocessor performance. Further increases will be made, but they will have little meaning unless they are cost-effective and energy-efficient.
COST-EFFECTIVENESS IS BECOMING PART OF THE DEFINITION OF PERFORMANCE.
The Green 500 list ranks supercomputers according to flops per watt. The list was begun in 2005 to raise awareness of performance metrics other than raw speed, performance per watt among them. It soon became apparent that the computers with the best performance were not very energy efficient. The 2012 list was different, with the IBM Blue Gene/Q at Lawrence Livermore National Laboratory topping both the performance rankings and the Green 500. Here’s the IBM Blue Gene/Q.
It only took seven years for cost-effectiveness to become an integral part of computing performance.
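To put some numbers on that trade-off, here’s a back-of-envelope sketch. The efficiency figures in it are illustrative assumptions, not entries from any particular Green 500 list.

```python
# Back-of-envelope: power needed to reach one exaflops at a given efficiency.
# The efficiency values below are illustrative assumptions, not actual Green 500 entries.

EXAFLOPS = 1e18  # one quintillion floating point operations per second

def megawatts_for_exaflops(gflops_per_watt: float) -> float:
    """Power draw, in megawatts, of an exaflop machine at a given GFLOPS/watt."""
    flops_per_watt = gflops_per_watt * 1e9
    watts = EXAFLOPS / flops_per_watt
    return watts / 1e6

for efficiency in (2.0, 10.0, 50.0):   # GFLOPS/watt, hypothetical values
    print(f"{efficiency:4.0f} GFLOPS/W  ->  {megawatts_for_exaflops(efficiency):6.0f} MW")
```

At around 2 GFLOPS per watt – roughly the territory of the 2012 Green 500 leaders – an exaflop machine would draw something like 500 MW, which is why flops per watt rather than flops is the number that now matters.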
Let’s have a look at the buildings that house these data and computing centres because they are a new building type about which, like railway stations then and space stations now, we have no preconceived notions of what they should look like.
Mare Nostrum is the name of the supercomputer at the Barcelona Supercomputing Centre located inside the converted Torre Girona chapel. [Some more pics here, courtesy of darkroastedblend.]
The computer hasn’t been squeezed inside. Here’s a cable run.
Some of the work at the Barcelona Supercomputing Centre involves using ARM architecture to realise high-performance computing systems with low energy requirements.
It’s good to see a disused building being reused. I imagine the new function required an additional layer of entrance, and it’s good to see that added in a matter-of-fact way that, incidentally, tells us we’re probably not entering a chapel. I’ll leave it to other bloggers to wow over the function shift and try to work out what it could possibly mean with phrases such as “expressing our ‘reverence’ for the new”. It’s a re-used building. Get over it.
Albert France-Lanord Architects produced this next 2013 proposal that combines a data centre and performance space inside a converted building. We don’t know why. It was a proposal.
This next project, also by Albert France-Lanord Architects, was completed in 2008. It’s housed in a converted nuclear shelter 30m beneath Stockholm. There’s more on the project here. It’s not hard to find.
I like the idea of reusing a disused nuclear shelter but the Bahnhof data centre is trying to be something we think it ought to be and this, to my mind, makes it seem slightly desperate. An image is being imposed upon something that as yet has no image or even a need of one – although I admit that that image has a commercial function in attracting customers (of which, btw, Wikileaks is one).

Guess what? Here’s an image of the NSA’s Fort Meade data centre [“Hello!”] from a March 2012 Forbes article. It’s a shed.
I can’t find any images of the interior. [“May I have some please?”] But let’s get technical. ITConstruct tells me that your average data centre requires the following.
- UPS (that’s Uninterruptible Power Supply, to us)
- DC and AC Power Systems
- Standby Generators
- Air Conditioning and Humidification Systems
- Smoke Detection and Fire Suppression Systems
- Leak Detection Systems
- Raised Access Flooring
- Suspended Ceilings
- Transformers and HV Power Systems
- Access Control and Security Systems
- Racking Systems
- Data Cabling and Infrastructure
- Power Management
- Building Management Systems (BMS)
- Environmental Monitoring Systems
Many of these items concern power supply. What a data centre can do to reduce its power costs matters because those costs are passed on to its customers. Reusing old buildings with thick masonry walls and few, small or no window openings has advantages here. That reused chapel doesn’t seem so whimsical now. Here’s another image of the Barcelona Supercomputing Centre.
See how there is a separately air-conditioned space inside the building? And that it has double doors? Even the underground data centre had separately air-conditioned spaces for people and computers. Separating the two volumes and their air conditioning requirements means the a/c systems can each be optimised, with advantages for reducing total power consumption. If this can’t be done, then reducing the air-conditioned volume is also good. Remember that low ceiling for the Blue Gene/Q, above? This next one is probably not the way to do it. It’s a data centre in the Middle East. This next arrangement is called ‘free air’ cooling.
It’s called that because the air moves freely around the data racks whilst being drawn vertically across them.
The gaps at the ends of the aisles can be completely enclosed for greater efficiency. Stulz are good at solutions of this type.
The goal is to make each rack its own contained system, so that the volume of air to be cooled is minimised. Cooling can be targeted even more with arrangements such as the self-contained Chatsworth Towers that take cooling air from bottom to top without the air touching the rest of the air in the data centre.
Those computer rooms in data centres always look a bit cold. They’re not. In certain climates, running your data centre at a higher temperature might mean you can use free air cooling without targeted CRAC (Computer Room A/C) units. For example, if you choose to run your data centre at 86°F (30°C), an external air temperature lower than 77°F (25°C) might be cool enough to require no additional cooling – as long as moisture levels remain within required limits. ITWatchdogs.com recommend keeping the temperature between 68°F and 71°F, with 50°F and 82°F as the extremes. Here’s another way of doing it. It’s a smart shed.
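As a rough sketch of that free-cooling decision, using the setpoints just mentioned – the humidity band here is an assumption for illustration, not a standard:

```python
# Rough sketch of a free-air-cooling decision, using the setpoints mentioned above.
# The humidity band is an illustrative assumption, not a published standard.

def free_cooling_ok(outside_temp_c: float,
                    relative_humidity: float,
                    room_setpoint_c: float = 30.0,   # run the room at 30 °C (86 °F)
                    margin_c: float = 5.0,           # want outside air ~5 °C cooler
                    rh_min: float = 0.20,            # assumed acceptable moisture band
                    rh_max: float = 0.80) -> bool:
    """True if outside air alone should keep the room at its setpoint."""
    cool_enough = outside_temp_c <= room_setpoint_c - margin_c   # 25 °C (77 °F) or below
    moisture_ok = rh_min <= relative_humidity <= rh_max
    return cool_enough and moisture_ok

print(free_cooling_ok(22.0, 0.45))   # a mild day: True, no CRAC units needed
print(free_cooling_ok(33.0, 0.40))   # a hot day: False, mechanical cooling required
```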
This next image is of the NSA’s Utah Data Centre. It’s a shed too.
Supporting facilities include water treatment facilities, a chiller plant, power substations, a vehicle inspection facility, a visitor control centre, and sixty diesel-fuelled emergency standby generators with a fuel facility for a 3-day, 100% power backup capability. The chiller plant will keep the souped-up system from overheating. Here’s a close-up of those chillers!
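For a sense of what a three-day, 100% backup means, here’s a rough sketch. The facility load and the generator fuel yield below are hypothetical figures chosen for illustration; the actual plant isn’t public in that level of detail.

```python
# Rough scale of a 3-day, 100% diesel backup.
# Both figures below are hypothetical, for illustration only.

FACILITY_LOAD_MW = 65.0        # assumed total facility load, MW
DIESEL_KWH_PER_LITRE = 3.5     # rough electrical yield of a large diesel genset

backup_hours = 3 * 24
energy_kwh = FACILITY_LOAD_MW * 1000 * backup_hours
fuel_litres = energy_kwh / DIESEL_KWH_PER_LITRE

print(f"{energy_kwh / 1e6:.1f} GWh over three days "
      f"≈ {fuel_litres / 1e6:.1f} million litres of diesel on site")
```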
A lot of energy is being spent on keeping everything cool. Was Utah really the best choice of location? Here’s some weather data for Salt Lake City, 25 miles to the north.
It’s actually fairly mild. These aspects of its energy performance are far more interesting than pondering the best way to architecturally represent our new and excitingly modern world of globally interconnected communications. It’s the least of our problems. Having said that, this lamely swooshy entrance just doesn’t seem to do justice to this building and its cutting edginess. I’m sure Patrik Schumacher would agree. In his own way.
Nevertheless, the energy performance of data centres is an issue and, because it affects the commercial attractiveness of data centres, is receiving attention. What we can say is:
The data centre is not trying to look beautiful – it has more important things to think about. The data centre is a new building type. We have no idea what it should look like. Nor do we care. It is the least of our problems.
Looks are not high on the scale of priorities of data centre operators. You can usually tell when you’re looking at a data centre. This is Apple’s iCloud data centre in Maiden, North Carolina. Notice that it doesn’t look like a cloud.
This is a Facebook data centre in Forest City, North Carolina. (I smell economic incentives!)
Here’s an Amazon data centre in Virginia.
Here’s a Microsoft data centre in Dublin.
People, listen! Our world is being reshaped by these buildings, not by shopping malls, culture centres and opera houses – as architects might like to have us believe. Functional connectivity and function fields are just quaint ideas from the past. Finding new architectural representations for “can you bring me that file please, Miss Jones?” or “come over here and look, feel and buy this!” are concepts as outdated as Miss Jones bringing it or feeling something before you buy it. Complex architecture is said to be needed to represent this new and complex world we live in, but that’s clearly a lie. THESE are the buildings that are making our world new and complex and, as you can see, they don’t care what you think of them. I wouldn’t trust them even if they did. These buildings are the new vernacular of our times. Everything else is just representational retro.
Some people like to keep up with these new developments in architecture.
searchdatacenter.com will get you up to speed on data centre design and construction, energy efficiency, etc. In fact, here’s a handy link to the pdf brochure, Energy Essentials: Rethinking Power and Cooling for the Modern Data Centre.
This is Google’s 2009 data centre in The Dalles, Oregon. Take a good look. It’s not the shape of your future. It’s the shape of your now.
Dave, here, finds a performance beauty in neat cabling.
Others just want to know what the future might be like. Here’s a schematic included as part of a patent for a floating data centre.
The patent was filed by Google in 2003 and awarded in 2007. Although I can’t find any images, Google is said to have built an offshore data centre as early as 2005, based on this patent.
So here, in two years, we have a workable solution to the problems of how to cool and power data centres and, as a spin-off benefit, none of the property costs or taxes that are, by their very nature, associated with buildings.
- The problem of high-performance and energy-efficient computing was solved in seven years. Such advances are possible because microprocessor architecture is not trying to look beautiful.
- The problem of how to build and operate an energy-efficient data centre was formulated and solved within two decades. Such advances are possible because data centre architecture is not trying to look beautiful.
There’s only one conclusion.
It becomes easier to have real and significant improvements in energy performance when we are unconcerned with what something looks like.
[cheers Ben]
After ducks and decorated sheds come smart sheds.
pasecaille says:
Google building mystery structure in San Francisco Bay
November 5, 2013 – 10:34am, by Martha Mendoza, The Associated Press
This barge on Treasure Island, with the eastern span of the San Francisco-Oakland Bay Bridge at rear, is one of three mysterious floating structures that have sparked online speculation. The secretive structures, two in San Francisco and one in Portland, Maine, are registered with a Delaware corporation as BAL0001, BAL0010, BAL0011 and BAL0100. (THE ASSOCIATED PRESS)
This is widely visible on the web today; here’s one article:
http://thechronicleherald.ca/world/1165233-google-building-mystery-structure-in-san-francisco-bay