While today AMD and Intel compete over which company's processors will have the most cores, this battle will not last indefinitely, according to Donald Newell, AMD's chief technology officer for servers. In its place will be a heated competition over which chips will have the most useful on-die specialized computing capabilities.
"There will come an end to the core-count wars. I won't put an exact date on it, but I don't myself expect to see 128 cores on a full-sized server die by the end of this decade," said Newell, who joined AMD last summer after spending 16 years at Intel.
"It is not unrealistic from a technology road map, but from a deployment road map, the power constraints that people expect [servers] to live in" wouldn't be feasible for chips with that many cores, he said, in an interview with IDG News Service following a presentation in New York on cloud computing.
This shift in direction, should it come, may also bring a sigh of relief to developers worldwide who have been grappling with how to write their programs in parallel so they can run on multiple cores.
Up until the early part of the last decade, improvement in CPUs was measured largely by clock speed, with each new generation of processors sporting faster clock speeds than the previous models.
"We thought we were going to build a 10GHz chip," Newell said of his time at Intel. "It was only when we discovered that they would get so hot it would melt through the Earth, that we decided not to do that," he joked.
While Moore's Law continues unabated, thanks to development of ever-finer lithography techniques that allow manufacturers to pack even more transistors onto a die, the race turned to the number of cores each chip could house. Dual-core server and desktop chips were soon followed by quad-core editions. Now both AMD and Intel are competing on the six-core and eight-core fronts.
Soon that race will draw to a close, Newell said. "Just as we came to the end of the frequency wars, we'll come to the end of the core-count wars," he said.
The next competitive front, Newell predicted, will be heterogeneous computing. Instead of consisting largely of identical general-purpose processing cores, processors will come to resemble systems-on-a-chip, in which sections of each chip are dedicated to specific tasks, such as encryption, video rendering or networking.
AMD is already moving in this direction with its upcoming Brazos platform, due out in 2011 for the netbook and laptop market. The company calls these processors accelerated processing units, or APUs, because they combine CPU and GPU capabilities on a single die.
"There is nothing to prevent us from putting specific features on die that enable more efficient processing," Newell said. "So you should expect to see heterogeneous architectures emerge where we identify functions that are broadly useful but don't necessarily map into an instruction that you'd want to add directly into the x86 architecture."
In effect, these dedicated pieces of the die would act as "co-processors," he said. "We're developing a set of architectural techniques to make that integration much easier."
Eventually, the technology may even go as far as to have "portions of the die be reconfigurable," harkening back to the FPGA (Field Programmable Gate Array) technology that has long held great promise but has never caught on in general-purpose computing. "It's not on the road map, but from a technology perspective, you can see the trajectory is there," he said.
Another new reality for chip designs is power management. "There was a time when it was literally an afterthought," Newell noted. "Through 2004, value was only about performance." After that, chip makers started working power efficiency into their designs due to customer demand.
For several years, AMD, like Intel, has been incorporating power-saving features such as power caps, which give administrators control over power consumption, and the ability to gate off parts of the chip that aren't in use.