
Seeing the data center in a new light

September 11, 2013: 5:00 AM ET

A new generation of optical communications chips could boost data transmission several times over -- and fundamentally change the way data centers are designed.

By Clay Dillow

FORTUNE -- Last week, a research and development effort reaching back well into the last decade came to a head as Intel pulled back the curtain on a new breed of optical silicon chips that could drastically boost data transmission rates within data centers and hyperscale computing arrays. But in doing so, Intel (INTC) hasn't just applied light-speed physics to the science of data transmission. Its "silicon photonics" technology could fundamentally upend the way data centers and high-powered computing facilities are designed and organized, spelling big things not only for Intel but for the entire computing enterprise.

The idea behind silicon photonics is relatively simple: Copper wiring and other conventional data transmission methods suffer from fundamental limits on how fast they can transfer a given amount of data, but nothing moves faster than light. If the sprawling, distributed hardware inside a modern data center or supercomputer could be linked by speed-of-light communications, it could make an immediate, massive leap in speed and efficiency. The challenge, which Intel now appears to have overcome, has always been one of miniaturization and complexity.

Simply put, Intel has figured out a means to package tiny lasers -- as well as receivers and transmitters that can convert electrical signals to optical ones and vice-versa -- into a silicon chip and develop the technology for mass production. The iteration of silicon photonics unveiled by Intel last week can achieve data rates of 100 gigabits per second, eclipsing the standard eight-gigabits-per-second rate of copper PCI-E data cables that connect servers on a rack, or even the Ethernet networking cables that connect the racks together (those cables can generally handle roughly 40 gigabits per second at the high end).
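To put those link speeds in perspective, a quick back-of-envelope sketch shows how long it would take to move a large dataset at each of the rates cited above. The payload size and the assumption of a perfectly utilized link are illustrative only; real transfers lose bandwidth to protocol overhead and encoding.

```python
# Idealized transfer times at the link speeds cited in the article:
# 8 Gb/s copper PCI-E cabling, ~40 Gb/s high-end Ethernet, and
# 100 Gb/s silicon photonics. Figures are illustrative, not benchmarks.

GBIT = 1e9  # bits per gigabit

def transfer_seconds(payload_bytes, link_gbps):
    """Time to move a payload over a link, ignoring protocol
    overhead, encoding, and congestion."""
    return (payload_bytes * 8) / (link_gbps * GBIT)

payload = 1e12  # a hypothetical 1 TB dataset, in bytes

for name, gbps in [("copper PCI-E", 8),
                   ("40G Ethernet", 40),
                   ("silicon photonics", 100)]:
    print(f"{name:18s} {transfer_seconds(payload, gbps):7.1f} s")
```

Under those idealized assumptions, the same terabyte that ties up an 8 Gb/s copper link for over 16 minutes moves in 80 seconds at 100 Gb/s.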


The story here, then, is one of faster data transmission within and between servers and higher efficiency for data centers and supercomputing arrays, as well as of a potentially significant new revenue stream for Intel (8.1 million servers shipped globally last year, and companies like Amazon (AMZN), Facebook (FB), and Apple (AAPL) are pouring millions into their cloud and data capabilities). But that's not the whole story. The ability to transmit data at super-high speeds within and between server racks will be a paradigm-shifter for data center design, allowing for far more efficient and capable computing and data centers.

"This opens up the ability to redefine the topology of systems, and that's the key thing," says Sergis Mushell, a principal research analyst with Gartner's technology and service provider research group. "We're going to be able to build much more massive systems. Where before we added one server at a time, we're going to be able to build massive servers."

The current architecture of data centers is dictated by a variety of technological limitations, many of them tied to data transmission. Each rack generally requires some mix of storage, processing, and networking infrastructure in order to be effective, because physical separation between these components leads to latency. The system simply spends too much time beaming electronic signals from one physical location to another across copper or network cables, and the whole system slows down as a result.
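The latency penalty of physical separation can be sketched with a simple model: the time for a signal to propagate down the cable, plus the time to serialize a frame onto the link. The distance, frame size, and velocity factor below are assumed round numbers for illustration, not measurements.

```python
# Rough one-way latency model for a signal crossing a data center:
# cable propagation delay plus serialization delay for one frame.
# All input numbers are illustrative assumptions.

C = 3e8                  # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66   # typical fraction of c in copper or fiber

def one_way_latency_us(distance_m, frame_bytes, link_gbps):
    propagation = distance_m / (C * VELOCITY_FACTOR)
    serialization = (frame_bytes * 8) / (link_gbps * 1e9)
    return (propagation + serialization) * 1e6  # microseconds

# A 1500-byte frame across 50 m of cabling:
print(one_way_latency_us(50, 1500, 8))    # ~8 Gb/s copper link
print(one_way_latency_us(50, 1500, 100))  # ~100 Gb/s optical link
```

In this toy model the propagation delay is the same either way; what a faster link buys is a much smaller serialization term, which is why raising bandwidth takes much of the latency sting out of physically separating components.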

Many hardware companies are working on ways to solve this, says Paul Teich, senior analyst and CTO at Moor Insights & Strategy. Generally, these new architectures involve further integrating storage, networking, and computing/processing at an even more granular level within each rack in order to reduce latency and enhance throughput. Intel is moving in the other direction entirely.

"This architecture that Intel is proposing, enabled by silicon photonics, is almost diametrically opposed," Teich says. "They're actually going to separate the major components and make up for it with a low-latency, high bandwidth connection between them."

That's where speed-of-light, high-volume data transmission is a critical enabler and potential paradigm-shifter. Ideally, Teich says, if you're doing big data analytics or operating a real-time transactional database, you would want one large, contiguous pool of storage rather than storage distributed across the entire data center, a little on each rack, networked together with cables. You can't do that with current architectures, but with silicon photonics' high-speed capability taking latency issues out of the equation, data center designers will be liberated to dream big, and to tailor arrays for maximum efficiency at the tasks they will run.


Efficiency is key here, Mushell says. A great deal of power consumption is tied up in networking server racks together, as well as in keeping them cool. Silicon photonics produces less heat as a by-product than current data transmission tech, and liberating designers to think differently about the way data centers and hyperscale facilities are organized should allow them to further reduce energy load by concentrating cooling to the places it's most needed rather than distributing it evenly throughout the facility.

"It's really a shift in paradigm, and if no one else can do that -- if this alleviates that networking cost and can reduce the overall power consumption by reducing the consumption from networking -- that's where the key value is," Mushell says. "And it also allows Intel to protect its main line of business, which is processors."

Intel is known for building the biggest processors in the business, Mushell notes, but those processors are also power-hungry. If Intel can alleviate power consumption elsewhere in the system, it reduces the value proposition for competitors who might enter the space offering smaller but less power-hungry processors. "You can now build whatever you want," he says, "because we're going to reduce the power consumption from networking."

The paradigm shift won't happen overnight, however. Silicon photonics remains prohibitively expensive for many applications (Intel hasn't yet released official pricing information), Teich says, and there are other aspects of large-scale computing that are higher priorities for many data-focused companies right now. Aside from the technology being somewhat immature, network power consumption isn't quite enough of an economic pain point to justify the huge cost of implementing silicon photonics across the board. "All this will play out mid-decade, late-decade," Teich says.

But, as Mushell noted, for Intel silicon photonics isn't just about integrating laser-embedded silicon chips into every data center in the business. It's about creating a new paradigm that positions Intel's core business of producing processors and other large-scale computing infrastructure to thrive.

"Silicon photonics is kind of a trojan horse for building large-scale systems out of X86 [Intel-based] processors," Teich says. "It's non-traditional, and if I'm Intel it's a brilliant play as a way to go up against, say, IBM."

Mushell agrees.

"It's not just about bringing revenue from [silicon photonics]," he says. "It's about Intel changing the game."
