Friday, December 22, 2006

Bush signs data center efficiency bill

This week President Bush signed a bill calling for research into data center energy efficiency. The bill, H.R. 5646, lays out nine areas where the EPA-led study should focus, including the proliferation of servers in the federal government, how much energy they consume, the potential cost savings from more efficient machines, and recommendations on how to encourage manufacturers to build energy efficient equipment. SearchDataCenter.com has been following the issue since it passed the House back in July.

The Wall Street Journal also ran a story on energy efficiency in the data center this week. Nothing new if you've been following along with us on SDC, but it's good to see the WSJ paying attention to the issue.

Also, I have a blog announcement. The powers that be at TechTarget have deemed that a blog is a worthwhile vehicle for IT news and commentary. Therefore, the company has picked up a WordPress server and will be rolling out blogs on TechTarget sites in '07. That also means TechTarget will be reeling in rogue editors like me with unofficial sites. Look for me to be rerouting readers in the next couple of weeks -- assuming the corporate blog platform works the way it's supposed to. Will I be able to do pictures? Post opinions? We'll find out together. On the plus side, it'll be an official site and I won't be doing it in my free time. Also, I'll be joined by more of the SDC editorial team. So if nothing else, the new iteration of the SDC blog will be updated more frequently than this one. Stay tuned.

Tuesday, December 19, 2006

ASHRAE liquid cooling book available

It's here... The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has published a book on data center liquid cooling that could promote standardization of the technology.

The book is the fourth in a series written by ASHRAE Technical Committee 9.9.

The cost of Liquid Cooling Guidelines for Datacom Equipment Centers is $49 (ASHRAE members, $39) and is available in print and download versions. To order, contact ASHRAE Customer Service at 1-800-527-4723 (United States and Canada) or 404-636-8400 (worldwide); or visit the Bookstore at www.ashrae.org.

For more info, check out our preview of the book with Don Beaty from back in September.

Tuesday, December 12, 2006

Publishing server energy efficiency ratings

A few weeks back we asked the question: Could an organization publish side-by-side data on server energy usage based on the EPA's recently announced metric for one-unit and two-unit servers? And if so, who should do it?

We polled experts and participants -- here is a sampling of the responses:

Charles King, principal analyst, Hayward, Calif.-based Pund-IT: Up until now, you could go through and reverse engineer [to find the server efficiency], but you would have to be really handy to have some kind of rating. Personally, I think it would make sense for the EPA to put something out like this. You want something where the numbers aren't bent by any vendors' product strategies. I'd be interested to see the overview work that LBNL is doing as well. From everything I've read so far, server efficiency plays a critical role, but it's a small role. So I imagine the work that LBNL is doing will put it in context beyond whose server model draws less power this quarter versus the next quarter.

Gordon Haff, analyst with Nashua, N.H.-based Illuminata: EPA/Energy Star would probably have the most credibility and independence. Among industry organizations, SPEC benchmarks are also quite well accepted. A standard measurement suite and methodology that was harmonized between those two organizations would, I believe, have a high probability of being accepted as a credible standard measurement.


Ken Brill, executive director, Uptime Institute, from his recent Q&A with SearchDataCenter.com: [The Uptime Institute] would be interested in [publishing the data], but we haven't figured out the economic model. There are three basic models. There is the 'Consumer Reports' subscription model. They buy products at the consumer level so you're not getting a hyped-up machine, and the consumer of the information pays for it. Because they pay for it, they feel a certain amount of integrity for the testing that's been done. The second model is the manufacturers can run the testing on their own and publish the results, but then it's not all in the same place, and there's always the suspicion of whether the tests are rigged. In the middle would be tests the manufacturers paid for but which were run by a third party and available through the third party. I think the best way to do that is if it's user-paid-for. As long as it's vendor-paid-for, then it's always subject to some concern. We have the technical resources to do it, but is it something that's going to be financially viable? Personally, I think the 'Consumer Reports' approach is going to be the one that people are going to feel the most comfortable with, but how much will people pay for these reports?


Mark Bramfitt, supervisor of the customer energy efficiency program for the high tech market at PG&E, from a recent Q&A with SearchDataCenter.com: The industry right now publishes servers' top speeds and they test each other's equipment. If you say top speed is 250 miles per hour, and it's really only 200, you can bet that one of your competitors will point that out. I think the industry has a good model for publishing data. If they add the miles per gallon figure, which is what SPEC and the EPA are working on, I don't know why that wouldn't fit in with what they're recording already. Whether that information is all in one place or not, or whether you have to go to the vendors' individual sites, it doesn't really matter. As long as it's reported in a transparent fashion and it's self-policing. It may be that the EPA runs a Web site that runs it all in one place and that would be magnificent. That way a customer doesn't have to pull up 3-4 different Web sites.


Jonathan Koomey, Ph.D., Professor/Staff Scientist, Lawrence Berkeley National Laboratory and Stanford University: My feeling is that the data should be publicly available on the Web, so that customers can easily compare. It's probably better if there is one place that compiles the data. That could be SPEC, it could be LBNL, it could be the Green Grid. I'll be talking with various companies about this in coming months, and will probably have some insight into it early next year. It probably makes more sense for someone who really knows servers to pull together the information, with feedback from AFCOM and other user groups.

Christian Belady, distinguished technologist, HP: HP expects to do its own measurement following the protocols published by established benchmark organizations which reflects our current approach with existing performance benchmarks. HP then would make these results openly available to customers as would the organizations. In addition, HP has also published data center level efficiency metrics that it is currently socializing in the industry.
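
To put Bramfitt's "miles per gallon" analogy in concrete terms, here's a minimal sketch of the kind of side-by-side comparison a published dataset would make trivial: take a throughput score and a measured wattage for each box and rank them by work delivered per watt. The model names and numbers below are made up for illustration, not real test results.

```python
# Minimal sketch: ranking servers by "performance per watt," the
# data center equivalent of a miles-per-gallon figure.
# All model names and numbers below are hypothetical placeholders.

servers = [
    # (model, throughput score, measured watts at that load)
    ("Vendor A 1U", 1200, 310),
    ("Vendor B 1U", 1100, 250),
    ("Vendor C 2U", 2300, 540),
]

def perf_per_watt(throughput, watts):
    """Higher is better: useful work delivered per watt consumed."""
    return throughput / watts

ranked = sorted(servers, key=lambda s: perf_per_watt(s[1], s[2]), reverse=True)

print(f"{'Model':<14} {'Score':>6} {'Watts':>6} {'Score/W':>8}")
for model, score, watts in ranked:
    print(f"{model:<14} {score:>6} {watts:>6} {perf_per_watt(score, watts):>8.2f}")
```

The hard part, as everyone quoted above points out, isn't the arithmetic -- it's agreeing on who runs the tests and where the numbers live.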

What do you think? Drop me a comment.

Friday, December 01, 2006

Top data center trends and technologies for 2007

Gartner prognosticators looked into their crystal balls this week to point out the top data center technologies and trends for the coming year. I made my own list based on the most popular stories on SearchDataCenter.com and my own secret sauce, and here's what I came up with:

Coming in at number one -- data center pros are looking for metrics. You can't manage what you can't measure. Systems like the Uptime Institute's Tier Ratings, Syska Hennessy's Criticality Levels, the EPA's server energy efficiency metrics and ASHRAE's cooling guidelines will be in demand for 2007, along with continued interest in ITIL. The tools to measure data center performance are out there, but it takes digging and some knowledge to use them.

Number two -- people are talking about Big Iron again. Having trouble with power and cooling issues? Configuration nightmares? Who the hell let the Wintel admins start running the asylum anyway? Mature processes, new open system workloads, cheaper hardware and a smaller power and cooling footprint are all having an effect on mainframe buying decisions -- sales are booming right now. And it's time to take a look at your staffing requirements if you're executing millions of lines of COBOL for the foreseeable future.

Number three -- blades are hot and heavy. They're new, tiny and look a lot better than that old pizza box. It sounds like a midlife crisis, but data center pros are learning to deal with these high-maintenance lovelies. A few guys got burned or had a rack bust through the raised floor, but people have learned you can't treat them like any old server. Bottom line: isolation is key. Look for new power and cooling infrastructure products that accommodate and isolate blades in the coming year.

Number four -- Configuration management databases (CMDBs) take over the systems management space. Dave Wagner at BMC put it best this week during a conference at the Gartner Data Center expo, explaining why the need for a CMDB has reached a tipping point. "Proprietary legacy platforms all had very rich process disciplines around change. That was because if you brought down an iSeries, you brought down a whole company. Mainframes, AS/400s, VAX systems didn't have the automation because the scale was easy enough to manage manually." Then came the age of pizza boxes: one app to one server, and if something goes FUBAR you just buy another one, because it's cheaper to replace it than to pay somebody to figure out where the problem is. But here comes the tipping point -- virtualization. With multiple apps running on the same physical infrastructure, breaking one thing affects more people. "Everybody knows 80% of all failure is because you changed something. Virtual environments are dynamically changing by nature." Hence, everybody rolls out a CMDB. According to audience polling at the Gartner event, HP's Mercury line and BMC's Atrium are leading the pack.
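
For readers who haven't lived with one, here's a toy sketch of what a CMDB boils down to: a registry of configuration items, the dependencies between them, and a change log, so that when something breaks you can ask "what changed, and what does it affect?" This is purely illustrative -- the names are invented, and it's nobody's actual product schema, BMC's or HP's included.

```python
# Toy sketch of a CMDB: configuration items (CIs), their dependencies,
# and a change log -- so "what changed?" and "who does this affect?"
# have answers. Purely illustrative; not any vendor's schema.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConfigItem:
    name: str                      # e.g. a host, a VM or an application
    depends_on: list = field(default_factory=list)

class CMDB:
    def __init__(self):
        self.items = {}            # name -> ConfigItem
        self.changes = []          # (timestamp, CI name, description)

    def add(self, ci):
        self.items[ci.name] = ci

    def record_change(self, name, description):
        self.changes.append((datetime.now(), name, description))

    def impact_of(self, name):
        """Everything that directly or indirectly depends on a CI."""
        affected = set()
        stack = [name]
        while stack:
            current = stack.pop()
            for ci in self.items.values():
                if current in ci.depends_on and ci.name not in affected:
                    affected.add(ci.name)
                    stack.append(ci.name)
        return affected

# One virtualization host, two guests, one app spanning both guests
cmdb = CMDB()
cmdb.add(ConfigItem("esx-host-01"))
cmdb.add(ConfigItem("vm-web", depends_on=["esx-host-01"]))
cmdb.add(ConfigItem("vm-db", depends_on=["esx-host-01"]))
cmdb.add(ConfigItem("order-app", depends_on=["vm-web", "vm-db"]))

cmdb.record_change("esx-host-01", "firmware update")
print(cmdb.impact_of("esx-host-01"))   # {'vm-web', 'vm-db', 'order-app'}
```

That last line is Wagner's tipping point in miniature: touch one virtualization host and the blast radius is every guest and application riding on it.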

Number five -- Everybody makes nice with the utilities. Want to save hundreds of thousands of dollars on energy costs but don't want to get crushed on the front end cap-ex for the upgrades? Call the electric company. Utilities have historically offered commercial customers rebates for avoiding demand spikes, but now power companies like PG&E and Avista have teamed up with hardware and software vendors like Sun Microsystems, VMware and ISR/SprayCool to offer prescriptive rebates to offset upgrade costs. Execs from IBM and Emerson/Liebert confirmed that both companies (along with many others) are already in talks for more announcements like this. Other utilities across the country are looking to get involved. Get in now while it's still a free-for-all.

Number six -- Liquid Cooling. You can't fight physics, folks. Nobody wanted to bring water back into the data center, especially not the systems vendors, but that's just too damn bad. It's here. SearchDataCenter.com called it back in April 2005 when cooling guru Bob McFarlane said: "While everyone knows servers are one day going to be water-cooled, no one wants to be first, believing that if their competitors still claim they are fine with air cooling, the guy who goes to water cooling will rapidly drop back in sales until others admit it is necessary." Who's in? Companies like Target and Goldman Sachs are preparing facilities for it. Companies like APC, Liebert, ISR/SprayCool, IBM, HP, and finally Sun Microsystems are making water-cooling products. And everybody has a different take on it, which is the main factor holding it back from acceptance. Once ASHRAE's book on liquid cooling technologies comes out this month, we might start to see some standardization and more acceptance.

Number seven -- Site selection continues to be big business and the data center construction boom is officially on. Web 2.0 companies are outgrowing their facilities faster than they can build them. The build-out of the 1990s is coming to the end of the 10- to 15-year facility turnover cycle. Companies are looking to consolidate multiple outlying, outdated facilities. The price of power matters more and the geography matters less -- look for more companies to move IT operations to the Midwest in '07.

Number eight -- Green computing continues to sell. The economics of "eco" took the facility management and construction industry by storm back around 2003. Why? Because the U.S. Green Building Council started to prove that companies could save money building high-performance green buildings, and vendors could make money selling greener products. It's taken a while for IT to catch up, but companies like HP and Sun are now competing on miles per gallon, as opposed to miles per hour. The IT industry has formed a group called The Green Grid -- a bunch of vendors hoping to solve the problems they created. But if it can appoint highly respected people into leadership positions, expect the organization to play a dominant role in the definition of green computing in the coming years. The Green Grid doesn't need Paul Perez (HP), Dave Douglas (Sun) or Bruce Shaw (AMD) out front; it needs a spokesperson like Bill McDonough. If The Green Grid plays out as an industry mouthpiece, expect it to be ignored and go away. But the issues will not -- someone is going to fill this vacuum in 2007. Also, expect legislation around energy usage and e-waste.

Number nine -- multicore processing. Here's one where I agree with Gartner's list. The firm predicts the number of cores on a chip will double approximately every 18 months through 2015. But those gains won't mean much unless users address software. "We're entering a software crisis," research VP Carl Claunch said. "If your software runs one big workload, you need to rearchitect it to be parallel. There is a challenge to make our apps multithreaded."
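
Claunch's point is easier to see in code. Here's a minimal, hypothetical sketch of what "rearchitecting to be parallel" means: the serial version keeps one core busy no matter how many you have, while the parallel version splits the work into independent chunks that can run on separate cores. The workload itself is just a stand-in.

```python
# Minimal sketch of rearchitecting a serial workload to run in parallel
# across cores. The chunked computation here is a placeholder; the point
# is the shape of the change, not the specific math.

from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for real per-record work (parsing, transforms, scoring...)
    return sum(x * x for x in chunk)

def run_serial(chunks):
    # The "one big workload" version: one core does everything
    return sum(process_chunk(c) for c in chunks)

def run_parallel(chunks, workers=4):
    # The rearchitected version: independent chunks farmed out to cores
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]   # split into 8 independent pieces
    assert run_serial(chunks) == run_parallel(chunks)
    print("parallel result matches serial result")
```

The catch, of course, is that real applications rarely split into independent chunks this cleanly -- which is exactly the software crisis Claunch is talking about.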

Number ten -- virtualization and disaster recovery. According to a recent reader survey from TechTarget, most people are using server virtualization today for test and development sandboxes and for consolidating servers, but very few are thinking about disaster recovery. The value of the portable virtual machine has yet to be realized. Time to recovery is going to shrink dramatically when companies can start deploying failover virtually. Disaster recovery giant SunGard says it can shrink downtime in a disaster from 48 hours -- with tape transport and infrastructure setup -- to under four hours by blowing out virtual machine files to hardware that's already up and running.

For more info, check out our top ten stories of 2006 based on page views.