The other field Facebook wants to revolutionize
May 10, 2012: 11:02 AM ET
The social networking giant is leading a consortium aiming to make data centers cheaper and more efficient.
FORTUNE -- Facebook is known for creating the most popular social networking tool, not designing hardware. But the company has taken a do-it-yourself approach to building out its data centers and the servers and racks that fill them. The result? Data centers that are 38% more efficient and 24% cheaper than average, according to Frank Frankovsky, director of hardware design and supply chain at Facebook.
In the hopes of driving the cost down further, Facebook has even "open sourced" its designs -- making it possible for anyone to contribute to (and replicate) what its engineers have built. Last week, as most of the business world speculated on the social networking site's upcoming IPO, Facebook held a conference for its Open Compute Project, a consortium that now includes the likes of Hewlett-Packard (HPQ), Dell (DELL) and AMD (AMD). We caught up with Frankovsky to find out more about Facebook's open source strategy and what's next for the Open Compute Project.
FORTUNE: Why did you start the Open Compute Project?
Frankovsky: When we designed and built our first data center, we exceeded even some of our own internal goals. And we immediately thought it would be unnatural not to share this because we've all benefited so much from open source software – like the infrastructure software we've built our business on. This is why our software engineers can focus on innovation every day, on making the world more connected. We don't need to go and reinvent an operating system. So we thought, let's go and open source the hardware space so that we can give back too. Also, no single company is ever going to have all of the best brainpower in the entire industry under one roof. By open sourcing, you can get the industry's best brainpower focused together. You get a bunch of great ideas, and it accelerates the pace of innovation.
A lot of companies fight standardization and commoditization. How have traditional suppliers reacted to Open Compute?
While the initial reaction might have been resistance, these are great innovation companies, and they know that at some point, in order to remain competitive and successful, you have to reinvent yourself.
Are there any other efforts out there to open source data center hardware?
We have partnerships with a whole bunch of other projects, but we are specifically focused on the hardware design in the data center, and to my knowledge there are no other projects specifically around this. The old method is to keep all your cards close to your chest without sharing. The biggest project that inspired me and all of us at Facebook to get involved is the open source operating system Linux and the impact it had on the market. We want to have a similar impact on hardware.
Are there technologies that you won't "open source" and share with others?
We think really, really carefully about what we open source. We've shared how we pick data center sites. But when we open sourced our data center blueprints we didn't include the main point of entry for fiber runs—we felt it was a security issue. So there are some things like that that we don't put out in the open. But that's really because we need to defend ourselves and our end users. The things we won't open source are the key innovations we have in the application space. Those are the unique things that differentiate Facebook and the reason more than 900 million people come to Facebook. Intel is one of the founding members of the Open Compute Project. It also happens to have one of the richest IP portfolios in the industry. Intel's engineers have made significant contributions [to Open Compute], but we wouldn't expect them to share how they design CPUs.
A lot of people would be surprised to know that some of what you've done with your designs is actually simplifying and taking away capabilities. Can you explain?
I don't think anyone would argue that putting a bunch of plastic logos in front of a server is a good idea. But sometimes simple is actually really hard. People sometimes overcomplicate things. When you look at a design it might look really elegant because it's got all kinds of whiz-bang features. But when you step back and ask how you can do this with minimum components, sometimes making it simple is really the hard part. Some of the most successful mobile devices don't look like they do anything when you pull them out of the box; it's a flat screen with just one button. But when you turn it on and it does exactly what you ask it to do, then you really understand the beauty and simplicity of the design. You don't see the engineering efforts that went into making it simple.
So what's next for the Open Compute Project?
We're getting a lot of traction. Most of it is in data center and server design, and we've extended it to [server] racks. The storage space is something you'll see heat up, and there's also a lot of interest in networking. But a lot of the activity in the coming six months is going to be around storage—how open source storage really changes the market. Hopefully it will let companies choose best-of-breed options in both hardware and software. In the future there will be a smaller number of larger data center operators because of the trend toward cloud computing. We've reached an inflection point where things can get a little more standardized. What's exciting about the future is that we can now apply that brainpower to new and unique requirements in computing.