February 19th, 2013
NVIDIA GeForce GTX Titan Released (High-def pictures available)
NVIDIA has just announced their new single-GPU monster – the GeForce GTX Titan. The review date has been pushed back to February 21st, so for now we can only take a look at these high-definition images.
The new GTX Titan is packed with the GK110 GPU, which holds 2688 CUDA cores. The GPU itself has a die size of 551 mm², holding 7.1 billion transistors, twice as many as the GK104 processor. The single-precision computing power of this card is rated at 4.5 TFLOPS, and its double-precision rate at 1.3 TFLOPS. The rumors about a 512-bit interface were not true: Titan carries 6GB of GDDR5 memory across a 384-bit interface. The reference core clock for the GTX Titan is 837 MHz, with a boost clock of 876 MHz. Some people say there will be models clocked at 900 MHz, but that still leaves overclocking headroom (especially if Keith’s predictions on GPU-B 2.0 are even somewhat true).
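For the curious, the single-precision figure follows directly from the core count and clock: each CUDA core can retire one fused multiply-add (two floating-point operations) per cycle. Here is a quick sanity check of that arithmetic (a sketch only; the 1.3 TFLOPS double-precision figure is NVIDIA's quoted number, not derived here):

```python
# Theoretical peak throughput = cores x ops-per-cycle x clock.
# Each CUDA core retires one FMA (2 floating-point ops) per cycle.
cuda_cores = 2688
core_clock_hz = 837e6  # reference core clock

sp_tflops = cuda_cores * 2 * core_clock_hz / 1e12
print(f"Single-precision peak: {sp_tflops:.2f} TFLOPS")  # -> 4.50 TFLOPS
```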
In the first batch, only ASUS and EVGA will launch the GeForce GTX Titan. They will be followed by Colorful, Gainward, Galaxy, Gigabyte, Inno3D, MSI and Palit, with more models to come. NVIDIA did not forbid modifying the cards, meaning manufacturers are free to introduce custom models, maybe even with custom cooling. The MSRP is not yet officially confirmed, but it seems the $999 mark will be the final price.
GPU-B 2.0 Update from Keith:
After spending more time with the slides, a few things have become clear – well, at least as clear as a slide can make them. First, the Temperature Target and the Power/TDP Target are linked, the same way shader and core speed are linked on the Kepler architecture. The other thing to note is that Titan appears to allow overvoltage by default: the slide listing Vmax alongside the disclaimer “Will Impact Reliability” confirms the voltage can be changed by the end user without any outside software. Also notable is that voltage is now controlled by temperature as well as TDP, which results in a higher overall voltage than with GPU Boost 1.0, which was controlled solely by the TDP/Power Target. Thus a cooler card (with a better cooler) will apparently reach higher voltages and thus higher boost speeds. I can’t wait to see what Titan does under LN2 cooling!!
Shortly after this was written, the preview NDA was lifted and we found out more details on GPU-B 2.0. In short, you will be able to tell Titan that temperature should carry more weight in the boost calculation than power consumption/Power Target. However, according to AnandTech this can only be done with external overclocking apps such as Afterburner and PrecisionX. Furthermore, you will be able to control overvoltage from inside the stock NVIDIA Control Panel, something that was previously attainable only through tweaks or 3rd party applications. Here is a short summary from AnandTech’s article:
When it came to GPU Boost 1, its greatest weakness as explained by NVIDIA is that it essentially made conservative assumptions about temperatures and the interplay between high temperatures and high voltages in order to keep from seriously impacting silicon longevity. The end result was that NVIDIA was picking boost bin voltages based on worst-case temperatures, which meant those conservative assumptions about temperatures translated into conservative voltages.
So how does a temperature based system fix this? By better mapping the relationship between voltage, temperature, and reliability, NVIDIA can allow for higher voltages – and hence higher clockspeeds – by being able to finely control which boost bin is hit based on temperature. As temperatures start ramping up, NVIDIA can ramp down the boost bins until an equilibrium is reached.
Of course total power consumption is still a technical concern here, though much less so. Technically NVIDIA is watching both the temperature and the power consumption and clamping down when either is hit. But since GPU Boost 2 does away with the concept of separate power targets – sticking solely with the TDP instead – in the design of Titan there’s quite a bit more room for boosting thanks to the fact that it can keep on boosting right up until the point it hits the 250W TDP limit. Our Titan sample can boost its clockspeed by up to 19% (837MHz to 992MHz), whereas our GTX 680 sample could only boost by 10% (1006MHz to 1110MHz).
Ultimately however, whether GPU Boost 2 is power sensitive is actually a control panel setting, meaning that power sensitivity can be disabled. By default GPU Boost will monitor both temperature and power, but 3rd party overclocking utilities such as EVGA Precision X can prioritize temperature over power, at which point GPU Boost 2 can actually ignore the TDP to a certain extent to focus on temperature. So if nothing else there’s quite a bit more flexibility with GPU Boost 2 than there was with GPU Boost 1.
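The behavior AnandTech describes can be sketched as a simple control loop. To be clear, everything below – the bin step, the limit values, the linear temperature/power models, the function name – is hypothetical illustration, not NVIDIA's actual firmware logic:

```python
# Hypothetical sketch of GPU Boost 2.0 clock selection. The bin step,
# limits, and linear scaling models are all made up for illustration;
# the real algorithm lives in NVIDIA's firmware.

BOOST_BINS_MHZ = list(range(837, 993, 13))  # 837 MHz base up to ~992 MHz

def select_boost_bin(temp_c, power_w,
                     temp_target_c=80, tdp_limit_w=250,
                     prioritize_temp=False):
    """Step down through boost bins until all active limits are met.

    With prioritize_temp=True (the mode third-party tools such as
    EVGA Precision X expose), the TDP check is relaxed and
    temperature alone governs the clock.
    """
    base = BOOST_BINS_MHZ[0]
    for clock in reversed(BOOST_BINS_MHZ):  # try the highest bin first
        # Toy assumption: temperature and power scale linearly with clock.
        est_temp = temp_c * clock / base
        est_power = power_w * clock / base
        temp_ok = est_temp <= temp_target_c
        power_ok = prioritize_temp or est_power <= tdp_limit_w
        if temp_ok and power_ok:
            return clock
    return base  # no headroom left: fall back to the base clock

# A cool but power-hungry card gets capped by TDP; letting temperature
# take priority frees up extra bins, as the article describes.
print(select_boost_bin(60, 240))                        # -> 863
print(select_boost_bin(60, 240, prioritize_temp=True))  # -> 980
```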
By the way, Guru3D has already reported hitting 1176 MHz on Titan using the stock cooler.
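Those clock figures are easy to sanity-check against the quoted boost percentages:

```python
# Boost headroom = max observed clock / base clock - 1.
titan = 992 / 837 - 1      # AnandTech's Titan sample
gtx680 = 1110 / 1006 - 1   # AnandTech's GTX 680 sample
guru3d = 1176 / 837 - 1    # Guru3D's stock-cooler overclock
print(f"Titan {titan:.0%}, GTX 680 {gtx680:.0%}, Guru3D OC {guru3d:.0%}")
# -> Titan 19%, GTX 680 10%, Guru3D OC 41%
```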
One other note from the AnandTech article: NVIDIA hasn’t gone into depth on launch quantities, but they did specifically shoot down the 10,000-card rumor; this won’t be a limited-run product, and we don’t have any reason at this time to believe this will be much different from the GTX 690’s launch (tight at first, but available and increasingly plentiful).
Titan Preview NDA Lifted Today:
Today at 9AM US Eastern time (2PM GMT), NVIDIA lifted the NDA on Titan preview articles for all review sites. Before I list some links to previews, here is the schedule for what is to come:
- Preview Articles – Today (9AM EST)
- Reference Reviews – Thursday (2/21)
- 3 Way SLI and Multi Monitor Reviews – Thursday (2/21)
- Overclocked Reviews – Thursday/Friday (2/21 – 2/22)
So the next 48 hours are going to be very exciting for NVIDIA fans and anyone interested in the fastest single-GPU consumer video card in the world!
Here is a list of some popular Titan Previews: