EDITORIAL: Why Maxwell will probably launch on 28nm process

Published: 27th Aug 2013, 07:47 GMT

Quinn asked me if I could post his thoughts about the Maxwell architecture being released in the first quarter of 2014, most likely on the 28nm process. I encourage you to discuss this topic further in the comments – WhyCry.

If Maxwell is to be hard-launched before 2H 2014, it will have to be on 28nm. Nvidia is likely to launch at least a few Maxwell chips before then to compete against the new GCN 2.0 chips coming in October. The reasons are varied: a history of GPU production-process troubles that is making new nodes worth less and less, a new competitor for TSMC's cutting-edge processes, Nvidia's lack of a Kepler refresh, and AMD's release schedule.

Nvidia and ATI (now AMD) have followed a pattern when moving to smaller processes that, to my knowledge, has continued to the writing of this article. ATI would be first onto a node and would benefit from being able to make smaller GPUs that were as strong as Nvidia's larger ones. This also meant ATI dealt with many of the risks and problems that arose on that node, and Nvidia could move onto it once it was safer. This more-or-less held true until the jump to 40nm.

ATI still leapt ahead of Nvidia at 40nm, but this time Nvidia was unable to match ATI's yields, and its chips were barely faster. This was the node where Nvidia had minuscule early yields on its highest-end part, the GF100 (about 2%), while AMD's yields on its highest-end parts were roughly two and a half times that, at about 5%. The node was horrible for yields overall; AMD fixed its problems quickly, thanks to having more experience with troublesome nodes. This was the start of the node troubles for Nvidia and AMD.
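To see why those single-digit yield numbers matter so much, here is a minimal sketch of the arithmetic. Every figure in it (wafer price, candidate die count) is an assumption for illustration, not a sourced number:

```python
# Illustrative sketch: how die yield drives the cost of each working die.
# All numbers are assumptions for demonstration, not sourced figures.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_rate):
    """Cost of one working die = wafer cost / number of good dies."""
    return wafer_cost / (dies_per_wafer * yield_rate)

WAFER_COST = 5000.0    # assumed 40nm wafer price, in dollars
DIES_PER_WAFER = 100   # assumed candidate dies per wafer

for label, y in [("~2% yield (GF100-like)", 0.02),
                 ("~5% yield (competitor-like)", 0.05)]:
    cost = cost_per_good_die(WAFER_COST, DIES_PER_WAFER, y)
    print(f"{label}: ${cost:,.0f} per good die")
```

Going from a 2% to a 5% yield cuts the cost of each working die by 60% in this toy model, which is the scale of advantage AMD enjoyed early on that node.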

Fast forward a little: TSMC's 32nm process ran into trouble because technologies TSMC was trying for the first time did not work; the node was also power hungry and scheduled to arrive only shortly before 28nm would. It was delayed multiple times and eventually canceled. This forced ATI to build its 6000 series on 40nm instead of 32nm. A foundry with the best technology next to Intel (which has the best large-scale fabrication technology in the world) had failed to deliver a working node, the first time, to my knowledge, that TSMC had ever done so.

TSMC's 28nm process was much better than its 40nm process was, or than its 32nm process would have been. Here is where Nvidia raised something that is extremely important to why Maxwell starts on 28nm. In the past, jumping to a new node meant the cost per transistor would fall once the node matured; at 20nm, Nvidia does not predict that. Meanwhile, wafer costs start to rise drastically at 20nm, so the savings from smaller chips shrink. Separately, smaller nodes typically cost more to design chips for; starting with the jump from 40nm to 28nm (I believe), design costs began to rise drastically. To make this worse, foundries like TSMC generally require their partners (AMD, Nvidia, Qualcomm) to help shoulder the cost of getting the new node working. That price has gone through the roof and is continuing to rise.
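The economics behind that claim reduce to simple arithmetic. Here is a minimal sketch, using made-up wafer prices and transistor counts purely to illustrate the shape of the problem Nvidia described:

```python
# Illustrative sketch of the cost-per-transistor argument.
# All numbers are assumptions for demonstration, not actual foundry prices.

def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

# Suppose the shrink from 28nm to 20nm doubles transistor count per wafer...
TRANSISTORS_28NM = 1.0e12
TRANSISTORS_20NM = 2.0e12

# ...but the wafer price roughly doubles at the new node as well.
WAFER_28NM = 5000.0
WAFER_20NM = 10000.0

print(cost_per_transistor(WAFER_28NM, TRANSISTORS_28NM))  # 5e-09 dollars
print(cost_per_transistor(WAFER_20NM, TRANSISTORS_20NM))  # 5e-09 dollars: no saving
```

If the wafer price rises as fast as density improves, the cost per transistor stops falling, and the traditional financial incentive to shrink disappears.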

[Nvidia presentation slides: investment in 20nm]

The factor that probably contributed most to Nvidia's decision to make Maxwell initially on 28nm is Apple becoming a fourth player competing for TSMC's cutting-edge process nodes: the company that knows what it wants, demands it, and has the money to ensure it gets it.

Normally, Nvidia does a refresh: Fermi version one was GF100-GF109, and Fermi version two (faster, lower power, and better yielding) wore the GF110-GF119 badge. Nvidia either did not plan a Kepler refresh or canceled it because it was not a large improvement over the original Kepler. This leads to a major problem: AMD's GCN 2.0 is coming out in October-ish, over half a year before 20nm could even launch.

Nvidia needs to remain competitive with AMD, and while GK110 might do well at the high end, the lower end is more valuable. Nvidia could either give up the lower end by keeping Kepler, lower its margins by selling larger Kepler chips against smaller GCN 2.0 chips, or release Maxwell on 28nm.
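The margin problem is die-area arithmetic. A rough sketch, with assumed die sizes and wafer price, deliberately ignoring yield and edge loss, of why a larger chip is more expensive to sell at the same price:

```python
import math

# Rough sketch: why a larger die squeezes margins against a smaller rival.
# All figures (die areas, wafer price) are assumptions for illustration only.

WAFER_DIAMETER_MM = 300
WAFER_COST = 5000.0

def dies_per_wafer(die_area_mm2):
    """Crude estimate ignoring edge loss, scribe lines, and yield."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2)

for label, area in [("larger Kepler-class die (350 mm^2)", 350.0),
                    ("smaller rival die (250 mm^2)", 250.0)]:
    n = dies_per_wafer(area)
    print(f"{label}: {n} dies/wafer, ~${WAFER_COST / n:.0f} raw cost each")
```

At the same selling price, the chip that costs roughly $25 in raw silicon earns less margin than the one that costs roughly $18, before any yield differences make things worse.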

If Nvidia had kept Maxwell as a 20nm-only design, there is a large chance that not only would AMD's 28nm GCN 2.0 chips have had no real competition, but also, based on history, AMD would have launched a 20nm GCN 2.0 part before Nvidia had even revealed Maxwell.

According to VC sources, Maxwell will not be as drastic a jump as Tesla to Fermi was, or as Fermi to Kepler was. Mostly it would update technology and add new ways for data to be managed, such as hUMA. Nvidia may just improve the CUDA cores in the SMXs, or add more to each SMX. There will be no chip like the GK110 on the 28nm node; the Maxwell refresh on 20nm will bring the monster chip people are waiting for.

To summarize, Nvidia is probably going to make at least some 28nm Maxwell chips because it does not know when it could launch 20nm Maxwell in volume, it does not want to launch its new architecture over half a year after AMD launches theirs, and it would otherwise be caught with no speed bump against a massive 20%+ one. Maxwell also ensures that Nvidia will not fall behind in data management, which is a large part of GCN 2.0. Nvidia is launching on 28nm to remain competitive both in performance and in ways to manage memory.

Source: SemiAccurate, CDRinfo, Bit-tech


by testbug00
