Nvidia GeForce RTX 2080 review: 4K gaming is here, at a price
Nvidia’s next generation of graphics cards is finally here, and it comes with big promises. The RTX 2080 is supposed to be up to 75 percent faster than the GTX 1080 in certain games, and it’s designed to finally deliver the ultimate goal of 4K gaming at 60 fps. Aside from the usual architectural improvements, this time around Nvidia is also using some new tricks to make its bold claims a reality.
The new RTX 2070 (starting at $499), RTX 2080 (starting at $699), and RTX 2080 Ti (starting at $999) are all powered by the company’s Turing architecture, and designed to offer more power and prettier cinematic effects in games. That extra power comes at a cost. Nvidia’s price premium for its Founders Edition cards, which have a three-year warranty and are overclocked by default, pushes the RTX 2080 Ti up to an eye-watering $1,199. Smartphones made the $1,000 jump last year thanks to new tech, and it’s now your graphics card’s turn to hit your wallet.
So what do you get for your money? Nvidia’s new cards include support for both real-time ray tracing and AI-powered anti-aliasing. Ray tracing is the big new capability with this generation, and it’s used to generate real-time light reflections and cinematic effects in games. But before you even get a game installed on your PC, Nvidia’s Deep Learning Super-Sampling (DLSS) leverages the company’s supercomputer farms to scan games before they are released and work out the most efficient way to render graphics. That’s all according to Nvidia, at least, because we haven’t been able to fully test ray tracing or DLSS with the RTX 2080 and RTX 2080 Ti units we’ve been reviewing over the past week. Until Microsoft delivers its Windows 10 October 2018 Update, none of Nvidia’s fancy new tech will be available for regular games.
Nvidia assures us it won’t be long before 25 games will support DLSS, and at least 11 will have ray tracing in the coming months. But along with discrete new features, these new graphics cards come with a lot of added horsepower, so we’ve been testing the RTX 2080 with 1440p and 4K G-Sync monitors to see if these new cards can deliver on Nvidia’s performance promises.
- Quiet and cool
- Great for 1440p / 1080p gaming
- A step up in power from the GTX 1080
- Can’t handle 4K at 60 fps in demanding games
- Ray tracing / DLSS improvements are still unknowns
- 650W power supply requirement, and extra power draw
Buy for $799.00 from Nvidia
- Capable of 4K gaming at 60 fps
- Impressive power jump over the GTX 1080 and RTX 2080
- Quiet and cool
- Only real option for 4K PC gaming, and high price reflects that
- Ray tracing / DLSS improvements are still unknowns
- 650W power supply requirement, and extra power draw
Buy for $1,199.00 from Nvidia
Before we look at performance, there are a few things about the actual hardware you need to know. Nvidia has redesigned these RTX cards in many ways, but perhaps the biggest is how they’re cooled. Nvidia has ditched the metal shield and blower combination that has served it well in the past in favor of a new dual-fan setup for its Founders Edition cards that’s closer to what most third-party card makers already utilize. There’s also now a full-length vapor chamber that helps make the cards run quieter and cooler.
I’m obsessed with having as quiet a PC as I can possibly get, and I use a case that has sound-damping panels so I don’t hear any fans whirring away. For the past year, I’ve been using an EVGA GeForce GTX 1080 with dual fans, so I wasn’t expecting a massive difference with Nvidia’s new design. But I’ve been genuinely impressed: the gentle hum of the RTX 2080 is so subtle that when I eventually switched back to my regular GTX 1080, I thought someone had hidden a hair dryer in the case.
All of this new power and more efficient cooling isn’t free. Nvidia is recommending that your system has at least a 650W power supply if you want to run either the RTX 2080 or RTX 2080 Ti properly. If you’re purchasing the Founders Edition cards, they’ll draw up to 225 watts and 260 watts, respectively. That’s a significant step up from the recommended 500W power supply (and 180W draw) of the GTX 1080, and it means you’ll need a 6-pin and 8-pin connector for the RTX 2080 or two 8-pins for the RTX 2080 Ti. I was also alarmed to see the RTX 2080 Ti drawing 45 watts of power when idle during my testing, but Nvidia tells me a driver update will address this issue shortly and bring idle consumption down to between 10 watts and 15 watts.
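To put those wattages in perspective, here's a quick back-of-the-envelope power budget check. The GPU figures are Nvidia's rated draws quoted above; the CPU and rest-of-system draws are illustrative assumptions, not measurements.

```python
# Rough PSU headroom check for an RTX 2080-class build.
# GPU draws are Nvidia's rated figures from this review; the CPU and
# "rest of system" numbers below are illustrative assumptions only.

def psu_headroom(psu_watts, gpu_watts, cpu_watts=150, rest_watts=75):
    """Watts left over after summing the major component draws."""
    return psu_watts - (gpu_watts + cpu_watts + rest_watts)

print(psu_headroom(650, 225))  # RTX 2080 on a 650W supply -> 200
print(psu_headroom(650, 260))  # RTX 2080 Ti -> 165
```

Even on the recommended 650W supply there's reasonable headroom left for transient spikes, which is presumably why Nvidia picked that number.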
Other than the fan design, Nvidia has also included three DisplayPort 1.4a outputs that can handle up to 8K resolution on a single cable with DSC 1.2. There’s also an HDMI 2.0b connector and a VirtualLink USB-C connector for the next generation of VR headsets.
To see how these new cards perform with a typical 2018 gaming setup, we’ve been testing both the Nvidia RTX 2080 and RTX 2080 Ti with a 27-inch Asus ROG Swift PG279Q monitor and a slew of demanding AAA titles. This monitor has 1440p resolution and up to 165Hz refresh rates with G-Sync, so it’s a great match for these new cards.
Our performance testing was done with PUBG, Shadow of the Tomb Raider, Destiny 2: Forsaken, Far Cry 5, Nvidia’s Star Wars DLSS demo, and Epic Games’ Infiltrator DLSS demo. Shadow of the Tomb Raider is one of the latest DirectX 12 games, and it showed the biggest performance gains during our tests. While Tomb Raider was only able to average around 39 fps with all the settings maxed out on my GTX 1080, the RTX 2080 was able to hit an average of 54 fps (an increase of 38 percent). That’s still not quite good enough for perfect 1440p gaming at max settings, but the beefier RTX 2080 Ti was able to push an average of 71 fps on the same settings.
Far Cry 5 is less demanding than Tomb Raider, and it runs a lot better at max settings, averaging around 81 fps on my GTX 1080 and 96 fps on the RTX 2080. The RTX 2080 Ti pushes this to 113 fps, which is a nice sweet spot for the high-refresh capabilities of this monitor. PUBG also performs far better on the RTX 2080 Ti, averaging around 125 fps compared to the 80 fps I’m used to from the GTX 1080.
Nvidia RTX 2080 benchmarks (1440p)
|Benchmark||EVGA GTX 1080||RTX 2080 Founders Edition||RTX 2080 Ti Founders Edition|
|3DMark Time Spy||6,933||9,363||11,384|
|3DMark Fire Strike||9,381||11,652||14,347|
|Destiny 2||95fps average||120fps average||140fps average|
|PUBG||80fps average||95fps average||125fps average|
|Shadow of the Tomb Raider||39fps average||54fps average||71fps average|
|Far Cry 5||81fps average||96fps average||113fps average|
|Infiltrator DLSS demo||65fps without DLSS||103fps with DLSS / 91fps without DLSS||108fps with DLSS / 101fps without DLSS|
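The uplift percentages quoted in this section can be reproduced directly from the 1440p averages in the table above:

```python
# Percentage uplift of each RTX card over the GTX 1080,
# using the 1440p averages from the benchmark table above.

averages = {                 # game: (GTX 1080, RTX 2080, RTX 2080 Ti)
    "Destiny 2": (95, 120, 140),
    "PUBG": (80, 95, 125),
    "Shadow of the Tomb Raider": (39, 54, 71),
    "Far Cry 5": (81, 96, 113),
}

for game, (gtx1080, rtx2080, rtx2080ti) in averages.items():
    up2080 = (rtx2080 / gtx1080 - 1) * 100
    up2080ti = (rtx2080ti / gtx1080 - 1) * 100
    print(f"{game}: RTX 2080 +{up2080:.0f}%, RTX 2080 Ti +{up2080ti:.0f}%")
```

Tomb Raider's +38 percent for the RTX 2080 is the figure cited in the text; the Ti's uplift in the same game works out to over 80 percent.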
If you’re planning to game at 1440p and don’t intend to upgrade to 4K anytime soon, the RTX 2080 hits the right balance of price and performance. While the 2080 Ti certainly offers a lot more headroom for upcoming games like Battlefield V, the RTX 2080 is more than capable of running modern titles at this resolution.
While we aren’t yet able to test ray tracing (Shadow of the Tomb Raider is expected to be one of the first games updated with it), Epic Games has created an Infiltrator demo that tests Unreal Engine 4’s rendering engine with Nvidia’s new DLSS capabilities. Although it’s just a demo, the results looked promising. The RTX 2080 averaged 103 fps with DLSS enabled, versus a 65 fps average on the GTX 1080 without DLSS. That’s more than a 50 percent performance improvement, and close to Nvidia’s performance claims for the RTX 2080. If similar improvements can be applied to existing and upcoming games, then the RTX 2080 will be an even more comfortable option for 1440p.
4K, the future
Gaming at 1440p may be the standard for today, but it won’t be long before everyone will be looking to game at 4K resolutions. So to see what these new cards are capable of with the next generation of gaming, we also tested these same games using the Acer Predator X27 monitor, a $2,000 display that has 4K, HDR, G-Sync, and a 144Hz refresh rate. Your eyes aren’t fooling you; the monitor does, in fact, cost more than the GPU and as much as a complete PC gaming rig.
To answer the obvious question right away, neither RTX graphics card can play graphically intensive 4K games at a full 144 frames per second. None of the games we tested were able to hit an average or peak fps value near the native refresh rate of the Acer X27 monitor; the technology just isn’t there yet.
The RTX 2080 had a difficult time reaching 60 fps at 4K resolution while playing Destiny 2 (avg. 50 fps), Far Cry 5 (avg. 56 fps), Shadow of the Tomb Raider (avg. 28 fps), and even good old PUBG (avg. 54 fps). If you’re looking to play games in 4K, which also requires investing heavily in a capable monitor like this one, then you’ll have to budget for the RTX 2080 Ti; the RTX 2080 just won’t cut it.
Nvidia RTX 2080 benchmarks (4K)
|Benchmark||RTX 2080 Founders Edition||RTX 2080 Ti Founders Edition|
|3DMark Time Spy||4,687||5,536|
|3DMark Fire Strike||6,365||8,009|
|Destiny 2||50fps average||84fps average|
|PUBG||55fps average||90fps average|
|Shadow of the Tomb Raider||28fps average||36fps average|
|Far Cry 5||56fps average||71fps average|
|Infiltrator DLSS demo||55fps with DLSS / 47fps without DLSS||81fps with DLSS / 56fps without DLSS|
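One way to frame the 4K value question is frames per dollar, using the table above and the Founders Edition prices quoted in this review ($799 and $1,199):

```python
# Frames per dollar at 4K, from the benchmark table above and the
# Founders Edition prices quoted in this review.

prices = {"RTX 2080": 799, "RTX 2080 Ti": 1199}
fps_4k = {                     # game: (RTX 2080, RTX 2080 Ti)
    "Destiny 2": (50, 84),
    "PUBG": (55, 90),
    "Shadow of the Tomb Raider": (28, 36),
    "Far Cry 5": (56, 71),
}

for game, (fps_2080, fps_ti) in fps_4k.items():
    per_dollar_2080 = fps_2080 / prices["RTX 2080"]
    per_dollar_ti = fps_ti / prices["RTX 2080 Ti"]
    print(f"{game}: {per_dollar_2080:.3f} vs {per_dollar_ti:.3f} fps/$")
```

The Ti's fps-per-dollar is similar to or better than the 2080's in most of these titles, which is unusual for a halo card and reflects that only the Ti actually clears the 60 fps bar at 4K.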
The RTX 2080 Ti is much better suited for 4K PC gaming. Playing the same titles, the 2080 Ti reached an average of 84 fps in Destiny 2’s Crucible mode, 71 fps in Far Cry 5, and 90 fps playing PUBG (or a stable 60 fps if locked). Meanwhile, Shadow of the Tomb Raider still proved to be a challenge, and the 2080 Ti didn’t fare much better than the 2080, averaging only 36 fps at maxed settings. The forthcoming ray-tracing update for Tomb Raider could make a difference here, but until it’s out, we can’t say for sure.
In the Infiltrator demo, which has DLSS turned on, the 2080 Ti peaked at 81 fps before hovering around 75 for most of the demo, whereas the 2080 peaked at 55 fps, dropping to 43 fps during busy scenes.
Big promises, but we’ll have to wait and see
Based on our testing, Nvidia’s big promise of 4K gaming at 60 fps with the RTX 2080 is one that simply doesn’t hold up right now. If you’re willing to compromise on detail settings, then it can work, and some older titles will be able to manage to hit this milestone. But if you’re buying the 2080, you should plan to stick to 1440p or lower resolutions. That’s something you can already do with existing cards, but the 2080 gives you a lot more headroom for better settings today and more challenging games in the future. Our brief tests of the DLSS demo showed potential for some of the performance gains it could bring, but you shouldn’t spend cash on a card and hope it will get better in time.
Ray tracing is the headline-grabbing feature, and it sounds great, but the practical benefits remain unknown outside of snazzy demos. Plenty of games will likely support it in the future, but the real test will be whether the next generation of consoles will offer support. Game developers are increasingly creating titles that are designed to scale across a variety of hardware. Console adoption would certainly spur PC adoption too. If there is any company that can push ray tracing, it’s Nvidia, but it will still be a challenge. The Turing architecture has some genuinely impressive changes, and Nvidia still has very little competition from AMD in this range of high-end graphics cards. If you’re considering a high-end graphics card right now, you’ll probably be looking exclusively at something from Nvidia, so the company is just competing with itself until AMD catches up.
The RTX 2080 general performance is impressive, and the jump from the GTX 1080 is noticeable and worth the investment, especially for demanding games. But the only viable option for 4K gaming is the RTX 2080 Ti, and Nvidia’s Founders Edition will run you $1,199. That’s a serious investment for 4K gaming, and that’s before you even get to the $2,000 you currently need to spend to get a 4K monitor with 144Hz refresh rates.
So yes, 4K / 60 fps gaming is here with the RTX line of graphics cards, but you’re going to have to pay a high premium to obtain it.
“Build it, and they will come” must be NVIDIA’s thinking behind their latest consumer-focused GPU: the RTX 2080 Ti, which has been released alongside the RTX 2080. Following on from the Pascal architecture of the 1080 series, the 2080 series is based on a new Turing GPU architecture which features Tensor cores for AI (thereby potentially reducing GPU usage during machine learning workloads) and RT cores for ray tracing (rendering more realistic images). Unfortunately, there aren’t (m)any games that make use of these capabilities so the $1200 price tag on the RTX 2080 Ti Founders Edition is difficult to justify. The 2080 Ti also features Turing NVENC which is far more efficient than CPU encoding and alleviates the need for casual streamers to use a dedicated stream PC. On paper, the 2080 Ti has 4352 CUDA cores, a base/boost clock of 1350/1545 MHz, 11GB of GDDR6 memory and a memory bandwidth of 616GB/s. The upshot is that it has around a 30% faster effective speed than the 1080 Ti, which at 18 months old continues to offer comparable value for money and currently dominates the high-end gaming market. Professional users such as game developers or 4K gamers may find value in the 2080 Ti but for typical users (@1080p), prices need to drop substantially before the 2080 Ti has much chance of widespread adoption. [Sep '18, GPUPro]
Nvidia GeForce RTX 2080 FE
Brand new NVIDIA GeForce RTX 2080 Graphics Card, based on the all-new Turing architecture.
Estimated performance with optimized power and clock settings. This is a calculated optimal performance figure based on GDDR6 memory performance and bandwidth. These are not official figures.
225W Power Consumption
Graphics Processing GeForce RTX 2080
Core Clock Boost 1800 MHz / Base 1515MHz
CUDA® Cores 2944
Process Technology 12 nm
Memory speed 14 Gbps
Memory Size 8 GB
Memory Type GDDR6
Memory Bus 256 bit
Memory bandwidth 448 GB/s
Card Bus PCI-E 3.0 x16
PCB Form ATX
Power Connectors 8-pin PCIe (12V) + 6-pin PCIe (12V)
Please note that there is no guarantee in overclocked performance, as it does vary card-to-card. Actual performance figures of the RTX 2080 are pending professional validation.
Actual product may vary from the product pictures provided on the site. These pictures are indicative of what the RTX 2080 card may look like, but appearance may vary between batches.
More product information can be found on Nvidia official website: https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080/
The future of graphics cards and gaming is about to experience the next revolution, or at least that's the marketing behind the new Nvidia GeForce RTX 2080 and other Turing-based cards. When Nvidia teamed up with Epic to demonstrate real-time ray tracing at GDC earlier this year, and then we found out it was only running at a 'cinematic' 24fps on a $69,000 DGX Station with four Tesla V100 GPUs, it felt like a lot of hoopla over something more beneficial to Hollywood than to PC gamers. Six months later, we're looking at single graphics cards that can apparently outperform the DGX Station in ray tracing applications, at a comparatively amazing price of just $799. Great news, at least for enthusiast gamers with deep pockets.
The technology crammed into the Nvidia Turing architecture is certainly impressive. The CUDA cores have been enhanced in a variety of ways, promising up to 50 percent higher efficiency per core than on Pascal, and there's more of them to go around. Next, add faster GDDR6 memory, improved caching, and memory subsystem tweaks that also provide up to 75 percent more effective memory bandwidth. Then for good measure toss in some RT cores to accelerate ray tracing and Tensor cores for deep learning applications.
I've already talked about those areas in-depth in separate articles, so the focus today is on the actual real-world performance of the GeForce RTX 2080 Founders Edition. Nvidia has made quite a few design changes with the RTX Founders Editions. Gone are the old blowers, replaced by dual axial fans, dual vapor chambers, and a thick metal backplate. I like the metal backplate, even if it's unnecessary, because it makes the card look cleaner and protects the PCB and components from getting scratched or damaged by careless handling.
The GeForce RTX 2080 Founders Edition runs quieter than the 1080 Founders Edition, though the backplate does get quite hot to the touch. That's by design and temperatures stay in the acceptable range, but fans of small form factor cases might prefer the old blower designs. The GeForce RTX 2080 Founders Edition also comes with a 90MHz factory overclock, with further manual overclocking an option. It's good to see Nvidia's premium branded Founders Edition finally matching the typical factory overclocks on AIB partner cards, since it costs up to $100 more than the baseline cards (which we likely won't see for a couple of months, if history repeats itself).
GeForce RTX 2080 Founders Edition Specifications
Architecture: Turing TU104
Lithography: TSMC 12nm FinFET
Transistor Count: 13.6 billion
Die Size: 545mm2
Streaming Multiprocessors: 46
CUDA Cores: 2,944
Tensor Cores: 368
RT Cores: 46
Render Outputs: 64
Texture Units: 184
GDDR6 Capacity: 8GB
Bus Width: 256-bit
Base Clock: 1515MHz
Boost Clock: 1800MHz
Memory Speed: 14 GT/s
Memory Bandwidth: 448GB/s
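The 448GB/s bandwidth figure falls straight out of the memory spec: transfer rate times bus width, divided by eight bits per byte. A quick check, with the GTX 1080's 10 GT/s GDDR5X included for comparison:

```python
# Memory bandwidth = transfer rate (GT/s) * bus width (bits) / 8 bits per byte.

def gddr_bandwidth_gbs(transfer_rate_gts, bus_width_bits):
    """Peak memory bandwidth in GB/s."""
    return transfer_rate_gts * bus_width_bits / 8

print(gddr_bandwidth_gbs(14, 256))  # RTX 2080, 14 GT/s GDDR6 -> 448.0
print(gddr_bandwidth_gbs(10, 256))  # GTX 1080, 10 GT/s GDDR5X -> 320.0
```

Raw bandwidth is up 40 percent over the GTX 1080; the "up to 75 percent more effective bandwidth" claim mentioned earlier adds Turing's improved compression on top of this.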
Fans of virtual reality headsets also have something to look forward to, and not just because of the Turing architecture updates that can further improve multi-view rendering. The GeForce RTX 2080 (and every other RTX card I've seen so far) includes the new VirtualLink output, which provides up to 35W power, two USB 3.1 Gen1 data connections, and HBR3 video (up to 8k60) over a single cable. The only problem is that we need new headsets that support the standard, but it significantly reduces cable concerns for VR.
When the VirtualLink connector is in use, the GeForce RTX 2080 can draw up to 35W more power, but that's separate from the normal GPU TDP. You won't see performance drop, and the Founders Edition includes 8-pin (150W) plus 6-pin (75W) PCIe power connections, so there's ample power on tap (up to 300W, including the 75W from the x16 PCIe slot).
All the architectural and physical improvements make the RTX 2080 Founders Edition a recipe for success. What could possibly go wrong? There are several concerns, unfortunately. First, games will need to be coded to use ray tracing and deep learning enhancements. Those are coming, with 11 currently announced games featuring some form of ray tracing, and 25 games that use Nvidia's new DLSS. But at launch, none of those are available, and we haven't had sufficient time to analyze the DLSS demonstrations. But the bigger problems are pricing and performance.
Over the past several generations of GPU architectures, Nvidia's x80 cards have been the initial halo product, followed in most cases by a later x80 Ti variant. The GTX 480 and GTX 580 (Fermi GF100 / GF110) launched at $499 in 2010. Then the GTX 680 (Kepler GK104) in 2012 also stuck with a $500 price point, but the GTX 780 (Kepler GK110) moved pricing north to $649. The 700-series saw the return of the Ti branding, with the GTX 780 Ti in late 2013 unveiled at $699, pushing the old 780 back to the $499 target. The Kepler era also saw the creation of the first Titan cards, with a new $999 target.
The 900-series cards followed a similar pattern to the 700-series, with a few minor changes. GTX 980 (Maxwell GM204) came out in 2014, at a price of $549. Nine months later the GTX 980 Ti (GM200) was revealed, at 'just' $649, a downsized version of the $999 GTX Titan X. Prices crept north again on the 10-series parts, with GTX 1080 launching at $699 for the Founders Edition, and custom cards with a starting price of $599 available a couple of months later. Then the GTX 1080 Ti took over the $699 spot and pushed the 1080 down to the current $499/$549 MSRP.
That's a lengthy discussion of past pricing, but it's important for context because the GeForce RTX 2080 Founders Edition ushers in higher pricing than we've ever seen on a consumer-centric part. $799 is $100 more than the 1080 launch price, $250 more than the current $549 1080 FE MSRP, and a whopping $330 more than the current lowest price on a GTX 1080 custom card. It's also $100 more than the current 1080 Ti MSRP and $150 more than the lowest priced GTX 1080 Ti. Ideally, we would want at least a 50 percent boost to performance relative to the GTX 1080, and 15-20 percent better performance than the GTX 1080 Ti. Cue the drumroll….
GeForce RTX 2080 Founders Edition performance
For the launch of the GeForce RTX 2080 and 2080 Ti, I've gone back and retested every card you see in the charts. All testing was done using the testbed equipment shown in the boxout to the right. A few key points are that the Core i7-8700K is overclocked to 5.0GHz, taking the already fastest gaming CPU and pushing it to new levels—all in the attempt to minimize CPU bottlenecks on the graphics cards. All existing GPUs were tested with Nvidia's 399.07 and AMD's 18.8.2 drivers (the latest available in late August and early September), while the GeForce RTX 2080 and RTX 2080 Ti were tested with the newer 411.51 drivers released last Friday.
All the recent graphics cards are 'reference' models where possible. That does present a small disparity as the GeForce RTX 2080 and 2080 Ti Founders Editions are factory overclocked by around six percent. Similar overclocks are possible and available on virtually every GPU, so keep that in mind when looking at the charts.
I've also gone with 'maxed out' settings in twelve popular games with this round of testing. That includes all the extras under GTA5's advanced graphics menu, HBAO+ in The Witcher 3, and so on (but not super-sampling AA). Some games punish older generation cards with less than 6GB (or in some cases, even 8GB) of VRAM at these settings, which can make cards like the GTX 980 look far worse than they are in practice, but if you're looking at extreme / enthusiast cards like the GeForce RTX 2080 Founders Edition, running at maximum quality seems a given. I did test at 1080p medium quality as well, mostly as a point of reference—CPU bottlenecks become a real limitation at that point.
Starting at 4k, there was hope that the GeForce RTX 2080 would be significantly faster than the GTX 1080 Ti. That clearly isn't the case, at least not on existing games. Hitman running DX12, and Wolfenstein 2 using the Vulkan API, end up being the only games to see double digit percentage gains at 4k. Most other games are a wash, and overall with the twelve tested games, GeForce RTX 2080 leads the GTX 1080 Ti by just 3 percent.
Relative to the GTX 1080, things look much better. The GeForce RTX 2080 registers a healthy 38 percent average improvement at 4k, and a few games show a 50 percent jump. If we were talking about cards nominally priced the same, that would be a great generational improvement, but it comes with a 45 percent increase over current pricing.
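Putting those two percentages together shows why the value argument is hard to make. A one-liner, using the averages from this review:

```python
# 38 percent more performance for 45 percent more money means
# performance per dollar actually regresses slightly.

perf_uplift = 0.38   # RTX 2080 vs GTX 1080, average at 4k (this review)
price_uplift = 0.45  # $799 FE vs the GTX 1080's current $549 MSRP

perf_per_dollar_change = (1 + perf_uplift) / (1 + price_uplift) - 1
print(f"{perf_per_dollar_change:+.1%}")  # about -4.8%
```

In other words, at today's prices the RTX 2080 delivers marginally less performance per dollar than the card it replaces.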
Dropping to 1440p doesn't really change things much. The GeForce RTX 2080 FE is 4 percent faster than the GTX 1080 Ti, and 36 percent faster than the GTX 1080. If you have a 144Hz G-Sync display, you're in the sweet spot of around 100fps. However, you honestly wouldn't notice the difference between 1080 Ti and 2080 in the current crop of games. That might change once the ray tracing and DLSS games start arriving, but right now it's a wash.
You really shouldn't be thinking about buying a GeForce RTX 2080 for 1080p gaming—at least not without ray tracing and/or DLSS 2X. It's still technically faster than previous generation 10-series cards, but the high-end cards all hit 90fps averages or more. The 1080p medium charts are provided purely for the sake of completeness.
GeForce 20 series
Series of GPUs by Nvidia
The GeForce 20 series is a family of graphics processing units developed by Nvidia. Serving as the successor to the GeForce 10 series, the line started shipping on September 20, 2018, and after several editions, on July 2, 2019, the GeForce RTX Super line of cards was announced.
The 20 series marked the introduction of Nvidia's Turing microarchitecture, and the first generation of RTX cards, the first in the industry to implement real-time hardware ray tracing in a consumer product. In a departure from Nvidia's usual strategy, the 20 series doesn't have an entry-level range, leaving it to the 16 series to cover that segment of the market.
These cards are succeeded by the GeForce 30 series, powered by the Ampere microarchitecture.
On August 14, 2018, Nvidia teased the announcement of the first card in the 20 series, the GeForce RTX 2080, shortly after introducing the Turing architecture at SIGGRAPH earlier that year. The GeForce 20 series was finally announced at Gamescom on August 20, 2018, becoming the first line of graphics cards "designed to handle real-time ray tracing" thanks to the "inclusion of dedicated tensor and RT cores."
In August 2018, it was reported that Nvidia had trademarked GeForce RTX and Quadro RTX as names.
Released in late 2018, the RTX 2080 was marketed as up to 75% faster than the GTX 1080 in various games; PC Gamer described the chip as "the most significant generational upgrade to its GPUs since the first CUDA cores in 2006."
Factory-overclocked versions followed in the fall of 2018, beginning with the "Ti" edition; the Founders Edition cards were overclocked by default and carried a three-year warranty. When the GeForce RTX 2080 Ti came out, TechRadar called it "the world’s most powerful GPU on the market." The GeForce RTX 2080 Founders Edition was reviewed positively for performance by PC Gamer on September 19, 2018, but was criticized for its high cost to consumers and because its ray tracing feature wasn't yet utilized by many programs or games. In January 2019, Tom's Hardware also stated the GeForce RTX 2080 Ti Xtreme was "the fastest gaming graphics card available," although it criticized the cooling solution's loudness and the card's size and heat output in PC cases. In August 2018, the company claimed that the GeForce RTX graphics cards were the "world’s first graphics cards to feature super-fast GDDR6 memory, a new DisplayPort 1.4 output that can drive up to 8K HDR at 60Hz on future-generation monitors with just a single cable, and a USB Type-C output for next-generation Virtual Reality headsets."
In October 2018, PC Gamer reported the supply of the 2080 Ti card was "extremely tight" after availability had already been delayed. By November 2018, MSI was offering nine different RTX 2080-based graphics cards. Released in December 2018, the line's Titan RTX was initially priced at $2500, significantly more than the $1300 then needed for a GeForce RTX 2080 Ti.
In January 2019, Nvidia announced that GeForce RTX graphics cards would be used in 40 new laptops from various companies. Also that month, in response to negative reactions to the pricing of the GeForce RTX cards, Nvidia CEO Jensen Huang stated "They were right. [We] were anxious to get RTX in the mainstream market... We just weren’t ready. Now we’re ready, and it’s called 2060," in reference to the RTX 2060. In May 2019, a TechSpot review noted that the newly released Radeon VII by AMD was comparable in speeds to the GeForce RTX 2080, if slightly slower in games, with both priced similarly and framed as direct competitors.
On July 2, 2019, the GeForce RTX Super line of cards was announced, comprising higher-spec versions of the 2060, 2070 and 2080. Each Super model was offered for a similar price as the older models but with improved specs. In July 2019, Nvidia stated that the upcoming "SUPER" graphics cards in the GeForce RTX 20 series had a 15% performance advantage over the GeForce RTX 2060. PC World called the Super editions a "modest" upgrade for the price, and the 2080 Super chip the "second most-powerful GPU ever released" in terms of speed. In November 2019, PC Gamer wrote "even without an overclock, the 2080 Ti is the best graphics card for gaming." In June 2020, PC Mag listed the Nvidia GeForce RTX 2070 Super as one of the "best graphics cards for 4k gaming in 2020"; the GeForce RTX 2080 Founders Edition, Super, and Ti were also listed. That same month, cards including the RTX 2060, RTX 2060 Super, RTX 2070 and RTX 2080 Super were discounted by retailers in anticipation of the GeForce RTX 3080 launch. In April 2020, Nvidia announced 100 new laptops licensed to include either GeForce GTX or RTX models.
Reintroduction of older cards
Due to production problems surrounding the RTX 30-series cards, a general shortage of graphics cards caused by the ongoing COVID-19 pandemic's disruption of semiconductor supply, and increased demand for graphics cards from cryptocurrency mining, the RTX 2060 and its Super counterpart, alongside the GTX 1050 Ti, were brought back into production in 2021.
See also: Turing (microarchitecture) and Ray-tracing hardware
The RTX 20 series is based on the Turing microarchitecture and features real-time hardware ray tracing. The cards are manufactured on an optimized 16 nm node from TSMC, named 12 nm FinFET NVIDIA (FFN). New features in Turing include mesh shaders, ray tracing (RT) cores (bounding volume hierarchy acceleration), tensor (AI) cores, and dedicated integer (INT) cores for concurrent execution of integer and floating-point operations. In the GeForce 20 series, real-time ray tracing is accelerated by the new RT cores, which are designed to process quadtrees and spherical hierarchies and to speed up collision tests with individual triangles.
The ray tracing performed by the RT cores can be used to produce effects such as reflections, refractions, shadows, depth of field, light scattering and caustics, replacing traditional raster techniques such as cube maps and depth maps. Instead of replacing rasterization entirely, however, ray tracing is offered in a hybrid model, in which the information gathered from ray tracing augments the rasterized shading for more photo-realistic results.
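For a sense of the work the RT cores offload, here is a minimal pure-Python sketch of a standard ray/triangle intersection test (the Möller-Trumbore algorithm). This only illustrates the kind of per-triangle collision test involved; it bears no relation to Nvidia's actual hardware implementation.

```python
# Minimal Möller-Trumbore ray/triangle intersection test -- the kind of
# per-triangle collision test that RT cores accelerate in hardware.
# Pure-Python illustration only, not Nvidia's implementation.

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]]

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return distance t along the ray if it hits the triangle, else None."""
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:          # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0 or u > 1:          # hit point outside triangle
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det
    if v < 0 or u + v > 1:
        return None
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None

# A ray shot down the z-axis at a triangle lying in the z=5 plane:
print(ray_hits_triangle([0.2, 0.2, 0], [0, 0, 1],
                        [0, 0, 5], [1, 0, 5], [0, 1, 5]))  # 5.0
```

A real scene runs millions of these tests per frame, which is why a BVH (to cull most triangles early) plus dedicated hardware matters.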
The second-generation Tensor cores (succeeding Volta's) work in cooperation with the RT cores, and their AI features are used mainly to two ends: first, de-noising a partially ray-traced image by filling in the blanks between rays cast; second, DLSS (deep learning super-sampling), a new method that replaces anti-aliasing by artificially generating detail to upscale the rendered image to a higher resolution. The Tensor cores apply deep learning models (for example, an image-resolution-enhancement model) which are constructed on supercomputers. The supercomputer analyzes the problem, is taught by example what results are desired, and outputs a model that is then executed on the consumer's Tensor cores. These models are delivered to consumers as part of the cards' drivers.
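As a point of contrast for what DLSS tries to improve on, here is a toy nearest-neighbour upscaler in pure Python. Unlike DLSS, a naive upscaler can only duplicate existing pixels; it cannot synthesize plausible new detail. Illustration only, nothing here resembles Nvidia's model.

```python
# Naive 2x nearest-neighbour upscale: every pixel is simply duplicated.
# DLSS instead uses a trained model to synthesize detail when upscaling.

def upscale_nearest(image, factor=2):
    """2D grid of pixel values -> enlarged grid with duplicated pixels."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]
        out.extend([wide] * factor)
    return out

lowres = [[1, 2],
          [3, 4]]
for row in upscale_nearest(lowres):
    print(row)
# [1, 1, 2, 2]
# [1, 1, 2, 2]
# [3, 3, 4, 4]
# [3, 3, 4, 4]
```

The blocky result is exactly the artifact DLSS's learned model is meant to avoid while still rendering fewer native pixels.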
Nvidia segregates the Turing GPU dies into A and non-A variants, indicated by whether an "A" is appended to the hundreds part of the GPU code name. Non-A variants are not allowed to be factory overclocked, whilst A variants are.
The GeForce 20 series was launched with GDDR6 memory chips from Micron Technology. However, due to reported faults with launch models, Nvidia switched to using GDDR6 memory chips from Samsung Electronics by November 2018.
Main article: Nvidia RTX
With the GeForce 20 series, Nvidia introduced the RTX development platform. RTX uses Microsoft's DXR, Nvidia's OptiX, and Vulkan for access to ray tracing. The ray tracing technology used in the RTX Turing GPUs was in development at Nvidia for 10 years. Nvidia's Nsight Visual Studio Edition application is used to inspect the state of the GPUs.
All of the cards in the series are PCIe 3.0 x16 cards, manufactured using a 12 nm FinFET process from TSMC, and use GDDR6 memory (initially Micron chips at launch, later Samsung chips from November 2018).
|Model|Launch|Code name(s)|Transistors (billion)|Die size (mm²)|Shaders|TMUs|ROPs|RT cores|Tensor cores[a]|SMs|Base clock (MHz)|Boost clock (MHz)|Pixel fillrate (GP/s)|Texture fillrate (GT/s)|Memory|Memory bandwidth (GB/s)|TDP (W)|Launch MSRP (USD)|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|GeForce RTX 2060|January 15, 2019|TU106-200A-KA-A1|10.8|445|1920|120|48|30|240|30|1365|1680|65.52|163.8|6 GB GDDR6, 192-bit|336|160|$349|
|GeForce RTX 2060 (TU104)|January 10, 2020|TU104-150-KC-A1|13.6|545|1920|120|48|30|240|30|1365|1680|65.52|163.8|6 GB GDDR6, 192-bit|336|–|$300|
|GeForce RTX 2060 Super|July 9, 2019|TU106-410-A1|10.8|445|2176|136|64|34|272|34|1470|1650|94.05|199.9|8 GB GDDR6, 256-bit|448|175|$399|
|GeForce RTX 2070|October 17, 2018|TU106-400-A1|10.8|445|2304|144|64|36|288|36|1410|1620|90.24|203.04|8 GB GDDR6, 256-bit|448|175|$499|
|GeForce RTX 2070 (A die)|October 17, 2018|TU106-400A-A1|10.8|445|2304|144|64|36|288|36|1410|1620+|90.24|203.04|8 GB GDDR6, 256-bit|448|175|$499+ ($599 FE)|
|GeForce RTX 2070 Super|July 9, 2019|TU104-410-A1|13.6|545|2560|160|64|40|320|40|1605|1770|102.72|256.8|8 GB GDDR6, 256-bit|448|215|$499|
|GeForce RTX 2080|September 20, 2018|TU104-400-A1|13.6|545|2944|184|64|46|368|46|1515|1710|96.96|278.76|8 GB GDDR6, 256-bit|448|215|$699|
|GeForce RTX 2080 (A die)|September 20, 2018|TU104-400A-A1|13.6|545|2944|184|64|46|368|46|1515|1710+|96.96|278.76|8 GB GDDR6, 256-bit|448|215|$699+ ($799 FE)|
|GeForce RTX 2080 Super|July 23, 2019|TU104-450-A1|13.6|545|3072|192|64|48|384|48|1650|1815|105.6|316.8|8 GB GDDR6, 256-bit|496|250|$699|
|GeForce RTX 2080 Ti|September 27, 2018|TU102-300-K1-A1|18.6|754|4352|272|88|68|544|68|1350|1545|118.8|367.2|11 GB GDDR6, 352-bit|616|250|$999|
|GeForce RTX 2080 Ti (A die)|September 27, 2018|TU102-300A-K1-A1|18.6|754|4352|272|88|68|544|68|1350|1545+|118.8|367.2|11 GB GDDR6, 352-bit|616|250|$999+ ($1,199 FE)|
|Nvidia Titan RTX|December 18, 2018|TU102-400-A1|18.6|754|4608|288|96|72|576|72|1350|1770|129.6|388.8|24 GB GDDR6, 384-bit|672|280|$2,499|
- ^A Tensor core is a mixed-precision FPU specifically designed for matrix arithmetic.
- ^The number of streaming multiprocessors on the GPU.
- ^Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.
- ^Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.
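Plugging the GeForce RTX 2080's figures into the two fillrate formulas above reproduces the values in the table. For the pixel fillrate, only the ROP term of the three-way minimum is computed here, since it is the binding term for this card.

```python
# GeForce RTX 2080 figures from the table: 64 ROPs, 184 TMUs, 1515 MHz base clock.
base_clock_ghz = 1.515
rops, tmus = 64, 184

pixel_fillrate = rops * base_clock_ghz    # gigapixels/s (ROP-limited term)
texture_fillrate = tmus * base_clock_ghz  # gigatexels/s

print(round(pixel_fillrate, 2))    # 96.96
print(round(texture_fillrate, 2))  # 278.76
```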
GeForce RTX 2080 Founders Edition
GeForce RTX 2080 doesn’t get a day in the sun. It’s thrust upon us, born alongside a handsomer, more athletic GeForce RTX 2080 Ti sibling. Enthusiasts fawn over that card’s ability to dribble through 4K resolutions at maximum quality without breaking a sweat. Though the 2080 Ti is obscenely expensive, it knows no equal and therefore sets a new bar for the competition to ogle. We all love a winner.
And then there’s GeForce RTX 2080. No slouch itself, the TU104-based board was bound to be fast by virtue of genetics. Indeed, Nvidia’s Founders Edition implementation generally outperforms the GeForce GTX 1080 Ti—a once-king of gaming performance. But it’s burdened by an $800 price tag. At a time when you can still find GTX 1080 Tis for $700, slightly higher frame rates from a more expensive RTX 2080 fail to impress. And so we wait…either for the supply of previous-gen Pascal GPUs to dry up, or third-party 2080s to appear at the $700 price point Nvidia promised back when Turing was announced.
Fortunately, GeForce RTX 2080’s prospects for the future are promising. Not only does the card serve up GTX 1080 Ti-class performance, but it also supports the Turing-exclusive features that we know Nvidia is working hard to make available: real-time ray tracing via fixed-function RT cores, DLSS and AI denoising through its Tensor cores, mesh shaders, variable rate shading—all of the capabilities covered in Nvidia’s Turing Architecture Explored: Inside the GeForce RTX 2080.
TU104: Turing With Middle Child Syndrome
Like the TU102 GPU found in GeForce RTX 2080 Ti, TSMC manufactures TU104 on its 12nm FinFET node. But a transistor count of 13.6 billion results in a smaller 545 mm² die. “Smaller,” of course, requires a bit of context. Turing Jr out-measures the last generation’s 471 mm² flagship (GP102).
TU104 is constructed with the same building blocks as TU102; it just features fewer of them. Streaming Multiprocessors still sport 64 CUDA cores, eight Tensor cores, one RT core, four texture units, 16 load/store units, 256KB of register space, and 96KB of L1 cache/shared memory. TPCs are still composed of two SMs and a PolyMorph geometry engine. Only here, there are four TPCs per GPC, and six GPCs spread across the processor. Therefore, a fully enabled TU104 wields 48 SMs, 3072 CUDA cores, 384 Tensor cores, 48 RT cores, 192 texture units, and 24 PolyMorph engines. A correspondingly narrower back end feeds the compute resources through eight 32-bit GDDR6 memory controllers (256-bit aggregate) attached to 64 ROPs and 4MB of L2 cache.
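The arithmetic in that hierarchy is easy to sanity-check. A small Python sketch walking GPC → TPC → SM reproduces the fully enabled TU104 totals quoted above:

```python
# Fully enabled TU104: 6 GPCs, each with 4 TPCs, each TPC holding 2 SMs.
gpcs, tpcs_per_gpc, sms_per_tpc = 6, 4, 2
per_sm = {"CUDA cores": 64, "Tensor cores": 8, "RT cores": 1, "texture units": 4}

sms = gpcs * tpcs_per_gpc * sms_per_tpc
totals = {name: count * sms for name, count in per_sm.items()}

print(sms)     # 48
print(totals)  # {'CUDA cores': 3072, 'Tensor cores': 384, 'RT cores': 48, 'texture units': 192}
```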
TU104 also loses an eight-lane NVLink connection, limiting it to one x8 link and 50 GB/s of bi-directional throughput.
GeForce RTX 2080: TU104 Gets A (Tiny) Haircut
After seeing the GeForce RTX 2080 Ti serve up respectable performance in Battlefield V at 1920x1080 with ray tracing enabled, we can’t help but wonder if GeForce RTX 2080 is fast enough to maintain playable frame rates. Even a complete TU104 GPU is limited to 48 RT cores compared to TU102’s 68. But because Nvidia goes in and turns off one of TU104’s TPCs to create GeForce RTX 2080, another pair of RT cores is lost (along with 128 CUDA cores, eight texture units, 16 Tensor cores, and so on).
Unfortunately, we’ll have to wait for another day to measure RTX 2080’s alacrity in ray-traced games. There simply aren’t any available yet. UL did send us its 3DMark Ray Tracing Tech Demo to check out. And we were able to record some video from the Star Wars Reflections demo running on a GeForce RTX 2080 Ti. But the real excitement happens in a couple of months when game developers implement the first hybrid rendering paths. Until then, GeForce RTX 2080’s ability to keep up in those workloads remains a mystery.
So, in the end, GeForce RTX 2080 struts onto the scene with 46 SMs hosting 2944 CUDA cores, 368 Tensor cores, 46 RT cores, 184 texture units, 64 ROPS, and 4MB of L2 cache. Eight gigabytes of 14 Gb/s GDDR6 on a 256-bit bus move up to 448 GB/s of data, adding more than 100 GB/s of memory bandwidth beyond what GeForce GTX 1080 could do.
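That 448 GB/s figure falls straight out of the memory configuration, as a quick check confirms: per-pin data rate times bus width, divided by eight bits per byte.

```python
# GeForce RTX 2080 memory: 14 Gb/s GDDR6 on a 256-bit bus.
data_rate_gbps = 14      # gigabits per second, per pin
bus_width_bits = 256

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gb_s)  # 448.0
```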
| |GeForce RTX 2080 Ti FE|GeForce RTX 2080 FE|GeForce GTX 1080 Ti FE|GeForce GTX 1080 FE|
|---|---|---|---|---|
|Architecture (GPU)|Turing (TU102)|Turing (TU104)|Pascal (GP102)|Pascal (GP104)|
|Peak FP32 Compute|14.2 TFLOPS|10.6 TFLOPS|11.3 TFLOPS|8.9 TFLOPS|
|Base Clock Rate|1350 MHz|1515 MHz|1480 MHz|1607 MHz|
|GPU Boost Rate|1635 MHz|1800 MHz|1582 MHz|1733 MHz|
|Memory Capacity|11GB GDDR6|8GB GDDR6|11GB GDDR5X|8GB GDDR5X|
|Memory Bandwidth|616 GB/s|448 GB/s|484 GB/s|320 GB/s|
|Transistor Count|18.6 billion|13.6 billion|12 billion|7.2 billion|
|Die Size|754 mm²|545 mm²|471 mm²|314 mm²|
|SLI Support|Yes (x8 NVLink, x2)|Yes (x8 NVLink)|Yes (MIO)|Yes (MIO)|
Nvidia’s Founders Edition card sports a 1515 MHz base frequency and 1800 MHz GPU Boost rating. Peak FP32 compute performance of 10.6 TFLOPS puts GeForce RTX 2080 behind GeForce GTX 1080 Ti (11.3 TFLOPS), but well ahead of GeForce GTX 1080 (8.9 TFLOPS). Of course, the faster Founders Edition model also uses more power. Its 225W TDP is 10W higher than the reference GeForce RTX 2080, and a full 45W above last generation’s GeForce GTX 1080. Still, 225W is low enough that Nvidia gets away with one six- and one eight-pin supplementary power connector (versus RTX 2080 Ti’s pair of eight-pin connectors).
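The 10.6 TFLOPS figure comes from the usual peak-FP32 formula: CUDA cores times two operations per clock (a fused multiply-add counts as two) times the boost clock.

```python
# GeForce RTX 2080 Founders Edition: 2944 CUDA cores, 1800 MHz GPU Boost rating.
cuda_cores = 2944
boost_clock_ghz = 1.8

tflops = cuda_cores * 2 * boost_clock_ghz / 1000  # 2 FP32 ops per clock (FMA)
print(round(tflops, 1))  # 10.6
```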
With its thermal solution removed, the GeForce RTX 2080’s PCB looks a little tidier than what we found on GeForce RTX 2080 Ti. After all, it hosts far fewer components. The power supply, for example, is a conventional 8 (GPU) + 2 (memory)-phase design. Nvidia didn’t need any of the trickery we discovered on its flagship. Six of the GPU’s phases are fed by the aforementioned power connectors (along with the memory’s phases), while the other two originate at the PCIe slot.
The PWM controller responsible for the GPU’s power phases is surface-mounted around back, while the one corresponding to Micron’s GDDR6 modules is up toward the top, under a PCIe connector.
It’s easy to tell where the memory phases are located; they’re up top as well, next to the higher-inductance coils.
GPU Power Supply
Front and center in this design is uPI's uP9512 eight-phase buck controller specifically designed to support next-gen GPUs. Per uPI, "the uP9512 provides programmable output voltage and active voltage positioning functions to adjust the output voltage as a function of the load current, so it is optimally positioned for a load current transient."
The uP9512 supports Nvidia's Open Voltage Regulator Type 4i+ technology with PWMVID. This input is buffered and filtered to produce a very accurate reference voltage. The output voltage is then precisely controlled to the reference input. An integrated SMBus interface offers enough flexibility to optimize performance and efficiency, while also facilitating communication with the appropriate software.
All ten voltage regulation circuits are equipped with an ON Semiconductor FDMF3160 Smart Power Stage module with integrated PowerTrench MOSFETs and driver ICs.
As usual, the coils rely on encapsulated ferrite cores, but this time they are rectangular to make room for the voltage regulator circuits.
Memory Power Supply
Micron's MT61K256M32JE-14:A memory ICs are powered by two phases coming from a second uP9512. The same FDMF3160 Smart Power Stage modules crop up yet again. The 470 nH coils offer greater inductance than the ones found on the GPU power phases, but they're completely identical in terms of physical dimensions.
The input filtering takes place via three 1 μH coils, with each of the three connection lines carrying a matching shunt: a very low-value resistance across which the voltage drop is measured and passed on to the telemetry. Through these circuits, Nvidia can limit board power precisely.
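The telemetry described here is plain Ohm's-law current sensing. The sketch below uses made-up illustrative values; the shunt resistance and measured voltage drop are assumptions for the example, not the card's actual figures.

```python
# Shunt-based power telemetry: measure the tiny voltage drop across a
# known resistance, derive current, then multiply by the rail voltage.
shunt_ohms = 0.005   # assumed 5 milliohm shunt (illustrative value)
v_drop = 0.045       # assumed measured drop across the shunt, in volts
rail_voltage = 12.0  # 12 V input rail

current_a = v_drop / shunt_ohms
power_w = rail_voltage * current_a
print(round(current_a, 2))  # 9.0 amps through this rail
print(round(power_w, 2))    # 108.0 watts drawn on this rail
```

Summing the per-rail results across the PCIe slot and the supplementary connectors gives total board power, which the card can then hold under its configured limit.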
Unfortunately for the folks who like a bit of redundancy, this card only comes equipped with one BIOS.
How We Tested GeForce RTX 2080
Nvidia’s latest and greatest will no doubt be found in one of the many high-end platforms now available from AMD and Intel. Our graphics station still employs an MSI Z170 Gaming M7 motherboard with an Intel Core i7-7700K CPU at 4.2 GHz, though. The processor is complemented by G.Skill’s F4-3000C15Q-16GRR memory kit. Crucial’s MX200 SSD remains, joined by a 1.4TB Intel DC P3700 loaded down with games.
As far as competition goes, we can assume that GeForce RTX 2080 is bested by GeForce RTX 2080 Ti and Titan V, both of which we have in our test pool. We also compare GeForce GTX 1080 Ti, Titan X, GeForce GTX 1080, GeForce GTX 1070 Ti, and GeForce GTX 1070 from Nvidia. AMD is represented by the Radeon RX Vega 64 and 56. All cards are either Founders Edition or reference models. We do have some partner boards in-house from both Nvidia and AMD, and plan to use those for third-party reviews.
Our benchmark selection now includes Ashes of the Singularity: Escalation, Battlefield 1, Civilization VI, Destiny 2, Doom, Far Cry 5, Forza Motorsport 7, Grand Theft Auto V, Metro: Last Light Redux, Rise of the Tomb Raider, Tom Clancy’s The Division, Tom Clancy’s Ghost Recon Wildlands, The Witcher 3, and World of Warcraft: Battle for Azeroth. We’re working on adding Monster Hunter: World, Shadow of the Tomb Raider, Wolfenstein II, and a couple of others, but had to scrap those plans due to very limited time with Nvidia’s final driver for its Turing-based cards.
The testing methodology we're using comes from PresentMon: Performance In DirectX, OpenGL, And Vulkan. In short, all of these games are evaluated using a combination of OCAT and our own in-house GUI for PresentMon, with logging via AIDA64.
All of the numbers you see in today’s piece are fresh, using updated drivers. For Nvidia, we’re using build 411.51 for GeForce RTX 2080 Ti and 2080. The other cards were tested with build 398.82. Titan V’s results were spot-checked with 411.51 to ensure performance didn’t change. AMD’s cards utilize Crimson Adrenalin Edition 18.8.1, which was the latest at test time.
MORE: Best Graphics Cards
MORE: Desktop GPU Performance Hierarchy Table
MORE: All Graphics Content