NVIDIA GeForce RTX 3070 8GB Founders Edition Graphics Card Review

Keeping alive its tradition of launching a new graphics architecture every two years, NVIDIA this year introduces its Ampere GPU. Ampere is built upon the foundation set by Turing and is termed NVIDIA's biggest generational leap, outpacing previous generations across the board. This time, we look at the GeForce RTX 3070 Founders Edition.

The Ampere architecture gives us a lot to talk about in this review, and so does the new RTX lineup. Ampere offers faster shader performance, faster ray tracing performance, and faster AI performance. Built on a brand new process node and featuring an architecture designed from the ground up, Ampere brings plenty of numbers worth discussing.

The fundamental goal of Ampere was to take everything NVIDIA learned with its Turing architecture and not only refine it, but use its DNA to form a product in a completely new performance category. NVIDIA made tall claims when it introduced the Ampere lineup earlier this month, and this review will be your guide to finding out whether those claims hold up: what makes Ampere tick, and how it performs against its predecessors.

Today, we will be taking a look at the NVIDIA GeForce RTX 3070 Founders Edition graphics card. The card was provided by NVIDIA for the sole purpose of this review, and we will be examining its technology, design, and performance metrics in detail.

NVIDIA GeForce RTX 30 Series Gaming Graphics Cards - The Biggest GPU Performance Leap in Recent History

Turing wasn't just any graphics core; it was the graphics core that would become the foundation of future GPUs. That future is being realized now, with next-generation consoles talking up ray tracing and AI-assisted super-sampling techniques. NVIDIA had a head start with Turing, and its Ampere generation aims to push those capabilities much further.

The Ampere GPU handles the traditional work we expect from a GPU, but at the same time breaks new ground in less traditional GPU operations. To sum up some of its features:

  • New Streaming Multiprocessor (SM)
  • New 3rd Generation Tensor Cores
  • New Real-Time Ray Tracing Acceleration
  • New Shading Enhancements
  • New Deep Learning Features For Graphics & Inference
  • New GDDR6X High-Performance Memory Subsystem
  • New 3rd Generation NVLink Interconnect
  • New HDMI 2.1 Display Engine & Next-Gen NVENC/NVDEC

The technologies mentioned above are some of the main building blocks of the Ampere GPU, but there's more within the graphics core itself which we will talk about in detail so let's get started.

Let's take a trip down the road to Ampere. In 2016, NVIDIA announced its Pascal GPUs, which would soon feature in a top-to-bottom GeForce 10 series lineup. The Maxwell launch before it had given NVIDIA a lot of experience in the efficiency department, a focus since its Kepler GPUs. Two years ago, rather than offering another standard leap in rasterization performance, NVIDIA took a different approach and introduced two key technologies in its Turing line of consumer GPUs: AI-assisted acceleration with Tensor Cores, and hardware-level ray tracing acceleration with its brand new RT cores.

With Ampere and its brand new Samsung 8nm fabrication process, NVIDIA is adding even more to its gaming graphics lineup. Starting with the most significant part of the Ampere GPU architecture, the Ampere SM, we are seeing an entirely new graphics core. The Ampere SM features the next-gen FP32, INT32, Tensor Cores, and RT cores.

Coming to the new execution units, Ampere has both INT32 and FP32 units which can execute concurrently. This new architectural design allows Ampere to execute floating-point and integer operations in parallel, allowing for higher throughput in standard floating-point operations. According to NVIDIA, the updated Ampere graphics core delivers up to 1.7x faster traditional rasterization performance and up to 2x faster ray-tracing performance compared to the Turing GPUs.

The Ampere SM is partitioned into four processing blocks, each with 32 FP32 cores, 16 INT32 cores, one Tensor core, one warp scheduler, and one dispatch unit. This is made possible by an updated data path design: one data path offers 16 FP32 execution units while the other offers either 16 FP32 or 16 INT32 execution units. That adds up to 128 FP32 cores, 64 INT32 cores, 4 Tensor cores, 4 warp schedulers, and 4 dispatch units in a single Ampere SM. Each block also includes a new L0 instruction cache and a 64 KB register file, for a total 256 KB register file per SM.

 One of the key design goals for the Ampere 30-series SM was to achieve twice the throughput for FP32 operations compared to the Turing SM. To accomplish this goal, the Ampere SM includes new datapath designs for FP32 and INT32 operations. One datapath in each partition consists of 16 FP32 CUDA Cores capable of executing 16 FP32 operations per clock. Another datapath consists of both 16 FP32 CUDA Cores and 16 INT32 Cores. As a result of this new design, each Ampere SM partition is capable of executing either 32 FP32 operations per clock, or 16 FP32 and 16 INT32 operations per clock. All four SM partitions combined can execute 128 FP32 operations per clock, which is double the FP32 rate of the Turing SM, or 64 FP32 and 64 INT32 operations per clock.
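The datapath arithmetic above can be sanity-checked in a few lines. A back-of-envelope sketch using the partition counts quoted in this section:

```python
# Per-clock issue rates for one Ampere SM, from the datapath description above.
PARTITIONS = 4
DEDICATED_FP32 = 16   # first datapath: FP32 CUDA cores only
SHARED_PATH = 16      # second datapath: FP32 *or* INT32 each clock

fp32_only = PARTITIONS * (DEDICATED_FP32 + SHARED_PATH)  # all-FP32 mode
fp32_mixed = PARTITIONS * DEDICATED_FP32                 # mixed mode, FP32 side
int32_mixed = PARTITIONS * SHARED_PATH                   # mixed mode, INT32 side

print(fp32_only)                # 128 - double Turing's 64 FP32 ops/clock
print(fp32_mixed, int32_mixed)  # 64 64
```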

Doubling the processing speed for FP32 improves performance for a number of common graphics and compute operations and algorithms. Modern shader workloads typically have a mixture of FP32 arithmetic instructions such as FFMA, floating point additions (FADD), or floating point multiplications (FMUL), combined with simpler instructions such as integer adds for addressing and fetching data, floating point compare, or min/max for processing results, etc. Performance gains will vary at the shader and application level depending on the mix of instructions. Ray tracing denoising shaders are good examples that might benefit greatly from doubling FP32 throughput.

Doubling math throughput required doubling the data paths supporting it, which is why the Ampere SM also doubled the shared memory and L1 cache performance for the SM. (128 bytes/clock per Ampere SM versus 64 bytes/clock in Turing). Total L1 bandwidth for GeForce RTX 3080 is 219 GB/sec versus 116 GB/sec for GeForce RTX 2080 Super.

Like prior NVIDIA GPUs, Ampere is composed of Graphics Processing Clusters (GPCs), Texture Processing Clusters (TPCs), Streaming Multiprocessors (SMs), Raster Operators (ROPS), and memory controllers.

The GPC is the dominant high-level hardware block with all of the key graphics processing units residing inside the GPC. Each GPC includes a dedicated Raster Engine, and now also includes two ROP partitions (each partition containing eight ROP units), which is a new feature for NVIDIA Ampere Architecture GA10x GPUs. More details on the NVIDIA Ampere architecture can be found in NVIDIA’s Ampere Architecture White Paper, which will be published in the coming days.

The four processing blocks share a combined 128 KB L1 data cache/shared memory. Traditional graphics workloads partition the 128 KB L1/shared memory as 64 KB of dedicated graphics shader RAM and 64 KB for texture cache and register file spill area. In compute mode, the GA10x SM will support the following configurations:

  • 128 KB L1 + 0 KB Shared Memory
  • 120 KB L1 + 8 KB Shared Memory
  • 112 KB L1 + 16 KB Shared Memory
  • 96 KB L1 + 32 KB Shared Memory
  • 64 KB L1 + 64 KB Shared Memory
  • 28 KB L1 + 100 KB Shared Memory
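As a quick consistency check, every compute-mode carve-out option listed above splits the same unified 128 KB pool per SM:

```python
# Each GA10x compute-mode configuration partitions the same 128 KB of unified
# L1/shared memory per SM (sizes in KB, taken from the list above).
configs = [(128, 0), (120, 8), (112, 16), (96, 32), (64, 64), (28, 100)]
for l1, smem in configs:
    assert l1 + smem == 128
print("all", len(configs), "configurations sum to 128 KB")
```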

Ampere also ties its ROPs to the GPC and houses a total of 16 ROP units per GPC. The full GA102 GPU features 112 ROPs while the GeForce RTX 3080 comes with a total of 96 ROPs.

The block diagram of the NVIDIA Ampere SM Gaming GPUs.

The entire SM works in harmony by using different blocks to deliver high performance and better texture caching, enabling up to twice as much CUDA core performance when compared to the previous generation.

A block diagram of the GA102 GPU featured on the NVIDIA GeForce RTX 3080 graphics card.

Many of these Ampere SMs combine to form the Ampere GPU. Each TPC inside the Ampere GPU houses 2 Ampere SMs linked to the raster engine, and 6 TPCs, or 12 Ampere SMs, are arranged inside each GPC or Graphics Processing Cluster. The top GA102 configuration comes with 7 GPCs for a total of 42 TPCs and 84 SMs, connected to 10.5 MB of L1 and 6 MB of L2 cache, the ROPs, TMUs, memory controllers, and the NVLink high-speed I/O hub. All of this combines to form the massive Ampere GA102 GPU. The following are some peak performance figures for the top Ampere graphics cards.

NVIDIA GeForce RTX 3090

  • 35.58 TFLOPS of peak single-precision (FP32) performance
  • 71.16 TFLOPS of peak half-precision (FP16) performance
  • 17.79 TIPS concurrent with FP, through independent integer execution units
  • 285 Tensor TFLOPS
  • 69 RT-TFLOPs

NVIDIA GeForce RTX 3080

  • 30 TFLOPS of peak single-precision (FP32) performance
  • 60 TFLOPS of peak half-precision (FP16) performance
  • 15 TIPS concurrent with FP, through independent integer execution units
  • 238 Tensor TFLOPS
  • 58 RT-TFLOPs

NVIDIA GeForce RTX 3070

  • 20.3 TFLOPS of peak single-precision (FP32) performance
  • 40.6 TFLOPS of peak half-precision (FP16) performance
  • 10.1 TIPS concurrent with FP, through independent integer execution units
  • 162.6 Tensor TFLOPS
  • 39.7 RT-TFLOPs

In terms of shading performance, a direct result of the enhanced core design and architecture revamp, the Ampere GPU offers up to 70% better performance per core compared to Turing GPUs.

It should be pointed out that these are per-core performance gains at the same clock speeds, without the benefits of the other technologies Ampere brings, which further increase performance across a wide variety of gaming applications.
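The peak FP32 figures quoted above fall straight out of core count and boost clock (one fused multiply-add counts as 2 FLOPs per core per clock). A quick sketch:

```python
# Peak FP32 = CUDA cores x 2 FLOPs per clock (one FMA) x boost clock.
def fp32_tflops(cuda_cores, boost_mhz):
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

print(round(fp32_tflops(10496, 1695), 2))  # 35.58 - RTX 3090
print(round(fp32_tflops(8704, 1710), 2))   # 29.77 - RTX 3080
print(round(fp32_tflops(5888, 1725), 2))   # 20.31 - RTX 3070
```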

NVIDIA Ampere "GeForce RTX 30" GPUs Full Breakdown:

| Graphics Card | GeForce RTX 2070 SUPER | GeForce RTX 3070 | GeForce RTX 2080 | GeForce RTX 3080 | Titan RTX | GeForce RTX 3090 |
| --- | --- | --- | --- | --- | --- | --- |
| GPU Codename | TU104 | GA104 | TU104 | GA102 | TU102 | GA102 |
| GPU Architecture | NVIDIA Turing | NVIDIA Ampere | NVIDIA Turing | NVIDIA Ampere | NVIDIA Turing | NVIDIA Ampere |
| GPCs | 5 or 6 | 6 | 6 | 6 | 6 | 7 |
| TPCs | 20 | 23 | 23 | 34 | 36 | 41 |
| SMs | 40 | 46 | 46 | 68 | 72 | 82 |
| CUDA Cores / SM | 64 | 128 | 64 | 128 | 64 | 128 |
| CUDA Cores / GPU | 2560 | 5888 | 2944 | 8704 | 4608 | 10496 |
| Tensor Cores / SM | 8 (2nd Gen) | 4 (3rd Gen) | 8 (2nd Gen) | 4 (3rd Gen) | 8 (2nd Gen) | 4 (3rd Gen) |
| Tensor Cores / GPU | 320 (2nd Gen) | 184 (3rd Gen) | 368 (2nd Gen) | 272 (3rd Gen) | 576 (2nd Gen) | 328 (3rd Gen) |
| RT Cores | 40 (1st Gen) | 46 (2nd Gen) | 46 (1st Gen) | 68 (2nd Gen) | 72 (1st Gen) | 82 (2nd Gen) |
| GPU Boost Clock (MHz) | 1770 | 1725 | 1800 | 1710 | 1770 | 1695 |
| Peak FP32 TFLOPS (non-Tensor) | 9.1 | 20.3 | 10.6 | 29.8 | 16.3 | 35.6 |
| Peak FP16 TFLOPS (non-Tensor) | 18.1 | 20.3 | 21.2 | 29.8 | 32.6 | 35.6 |
| Peak BF16 TFLOPS (non-Tensor) | NA | 20.3 | NA | 29.8 | NA | 35.6 |
| Peak INT32 TOPS (non-Tensor) | 9.1 | 10.2 | 10.6 | 14.9 | 16.3 | 17.8 |
| Peak FP16 Tensor TFLOPS (FP16 Accumulate) | 72.5 | 81.3/162.6 | 84.8 | 119/238 | 130.5 | 142/284 |
| Peak FP16 Tensor TFLOPS (FP32 Accumulate) | 36.3 | 40.6/81.3 | 42.4 | 59.5/119 | 65.2 | 71/142 |
| Peak BF16 Tensor TFLOPS (FP32 Accumulate) | NA | 40.6/81.3 | NA | 59.5/119 | NA | 71/142 |
| Peak TF32 Tensor TFLOPS | NA | 20.3/40.6 | NA | 29.8/59.5 | NA | 35.6/71 |
| Peak INT8 Tensor TOPS | 145 | 162.6/325.2 | 169.6 | 238/476 | 261 | 284/568 |
| Peak INT4 Tensor TOPS | 290 | 325.2/650.4 | 339.1 | 476/952 | 522 | 568/1136 |
| Frame Buffer Memory Size and Type | 8 GB GDDR6 | 8 GB GDDR6 | 8 GB GDDR6 | 10 GB GDDR6X | 24 GB GDDR6 | 24 GB GDDR6X |
| Memory Interface | 256-bit | 256-bit | 256-bit | 320-bit | 384-bit | 384-bit |
| Memory Clock (Data Rate) | 14 Gbps | 14 Gbps | 14 Gbps | 19 Gbps | 14 Gbps | 19.5 Gbps |
| Memory Bandwidth | 448 GB/sec | 448 GB/sec | 448 GB/sec | 760 GB/sec | 672 GB/sec | 936 GB/sec |
| ROPs | 64 | 96 | 64 | 96 | 96 | 112 |
| Pixel Fill-rate (Gigapixels/sec) | 113.3 | 165.6 | 115.2 | 164.2 | 169.9 | 193 |
| Texture Units | 160 | 184 | 184 | 272 | 288 | 328 |
| Texel Fill-rate (Gigatexels/sec) | 283.2 | 317.4 | 331.2 | 465 | 509.8 | 566 |
| L1 Data Cache/Shared Memory | 3840 KB | 5888 KB | 4416 KB | 8704 KB | 6912 KB | 10496 KB |
| L2 Cache Size | 4096 KB | 4096 KB | 4096 KB | 5120 KB | 6144 KB | 6144 KB |
| Register File Size | 10240 KB | 11776 KB | 11776 KB | 17408 KB | 18432 KB | 20992 KB |
| TGP (Total Graphics Power) | 215W | 220W | 225W | 320W | 280W | 350W |
| Transistor Count | 13.6 Billion | 17.4 Billion | 13.6 Billion | 28.3 Billion | 18.6 Billion | 28.3 Billion |
| Die Size | 545 mm2 | 392.5 mm2 | 545 mm2 | 628.4 mm2 | 754 mm2 | 628.4 mm2 |
| Manufacturing Process | TSMC 12 nm FFN (FinFET NVIDIA) | Samsung 8 nm 8N NVIDIA Custom | TSMC 12 nm FFN (FinFET NVIDIA) | Samsung 8 nm 8N NVIDIA Custom | TSMC 12 nm FFN (FinFET NVIDIA) | Samsung 8 nm 8N NVIDIA Custom |

NVIDIA Ampere GPUs - GA102 & GA104 For The First Wave of Gaming Cards

NVIDIA is first introducing two brand new Ampere GPUs which include the GA102 and the GA104. The GA102 GPU is going to be featured on the GeForce RTX 3090 and GeForce RTX 3080 graphics cards while the GA104 GPU is going to be featured on the GeForce RTX 3070 graphics cards. The Ampere GPUs are based on the Samsung 8nm custom process node for NVIDIA and as such, the resultant GPU dies are slightly smaller than their Turing based predecessors but do come with a denser transistor layout. There will be several variations of each GPU featured across the RTX 30 series lineup. Following is what the complete GA102 and GA104 GPUs have to offer.

NVIDIA Ampere GA102 GPU

The full GA102 GPU is made up of 7 graphics processing clusters with 12 SM units on each cluster. That makes up 84 SM units for a total of 10752 cores in a 28.3 billion transistor package measuring 628.4mm2.

NVIDIA Ampere GA104 GPU

The full GA104 GPU is made up of 6 graphics processing clusters with 8 SM units on each cluster. That makes up 48 SM units for a total of 6144 cores in a 17.4 billion transistor package measuring 392.5mm2.
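The full-die configurations quoted above multiply out as expected; a quick sketch:

```python
# Full-die totals from the GPC/SM counts quoted above,
# with 128 FP32 CUDA cores per Ampere SM.
CORES_PER_SM = 128
ga102_sms = 7 * 12   # GA102: 7 GPCs x 12 SMs per GPC
ga104_sms = 6 * 8    # GA104: 6 GPCs x 8 SMs per GPC
print(ga102_sms, ga102_sms * CORES_PER_SM)  # 84 10752
print(ga104_sms, ga104_sms * CORES_PER_SM)  # 48 6144
```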

NVIDIA has also introduced its 3rd Generation Tensor core architecture and 2nd Generation RT cores on Ampere GPUs. Tensor cores have been around since Volta, and consumers first got a taste of them with the Turing GPUs. One of the key areas where Tensor cores are put to use in AAA games is DLSS. There's a whole software stack that leverages the Tensor cores, known as NVIDIA NGX. These software-based technologies help enhance graphics fidelity with features such as Deep Learning Super Sampling (DLSS), AI InPainting, AI Super Rez, RTX Voice, and AI Slow-Mo.

While its initial debut was a bit flawed, DLSS in its 2nd iteration (DLSS 2.0) has done wonders to not only improve gaming performance but also image quality. In titles such as Death Stranding and Control, games are shown to offer higher visual fidelity than at native resolution while running at much higher framerates. With Ampere, we can expect an even higher boost in terms of DLSS 2.0 (and DLSS Next-Gen) performance as the deep-learning model continues working its magic in DLSS supported titles. NVIDIA will also be adding 8K DLSS support to its Ampere GPU lineup which would be great to test out with the 24 GB RTX 3090 graphics card.

With Ampere, Tensor cores add INT8 and INT4 precision in addition to FP16 which is still fully supported. NVIDIA has been at the helm of the deep learning revolution by supporting it since its Kepler generation of graphics cards. Today, NVIDIA has some of the most powerful AI graphics accelerators and a software stack that is widely adopted by this fast-growing industry.

For its 3rd Gen Tensor cores, NVIDIA is using the same sparsity architecture found in its Ampere HPC line of GPUs. While Ampere features 4 Tensor cores per SM compared to Turing's 8, they are not only based on the new 3rd Generation design but also grow in total count thanks to the larger SM array. Each Ampere Tensor core can execute 128 FP16 FMA operations per clock, or up to 256 with sparsity. The total FP16 FMA operations per SM thus rise to 512, and 1024 with sparsity: a 2x increase over the Turing GPU in inference performance with the updated Tensor design.
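To illustrate what fine-grained structured sparsity means in practice, here is a minimal sketch of 2:4 pruning: in each group of four weights, two are zeroed so the Tensor core can skip them. Pruning the two smallest-magnitude values is an illustrative heuristic for this sketch, not NVIDIA's exact training method:

```python
# 2:4 structured sparsity sketch: keep the 2 largest-magnitude weights in each
# group of 4 and zero the rest (illustrative pruning rule, hypothetical).
def prune_2_of_4(weights):
    out = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]))[-2:]
        out.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return out

print(prune_2_of_4([0.9, -0.1, 0.05, -0.7, 0.2, 0.8, -0.3, 0.01]))
# [0.9, 0.0, 0.0, -0.7, 0.0, 0.8, -0.3, 0.0]
```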

2nd Gen RT Cores, RTX, and Real-Time Ray Tracing Dissected

Next up, we have the RT Cores, which power real-time ray tracing. NVIDIA isn't distancing itself from traditional rasterization-based rendering, but is instead following a hybrid rendering model. The new 2nd Generation RT cores offer increased performance, with double the ray/triangle intersection testing rate of Turing's RT cores.

There's one RT core per SM, and all of them combined accelerate Bounding Volume Hierarchy (BVH) traversal and ray/triangle intersection testing (ray casting) functions. RT Cores work together with advanced denoising filtering, a highly efficient BVH acceleration structure developed by NVIDIA Research, and RTX-compatible APIs to achieve real-time ray tracing on a single GPU.

RT Cores traverse the BVH autonomously, and by accelerating traversal and ray/triangle intersection tests, they offload the SM, allowing it to handle another vertex, pixel, and compute shading work. Functions such as BVH building and refitting are handled by the driver, and ray generation and shading are managed by the application through new types of shaders.

To better understand the function of RT Cores, and what exactly they accelerate, we should first explain how ray tracing is performed on GPUs or CPUs without a dedicated hardware ray tracing engine. Essentially, the process of BVH traversal would need to be performed by shader operations and take thousands of instruction slots per ray cast to test against bounding box intersections in the BVH until finally hitting a triangle, and the color at the point of intersection contributes to the final pixel color (or if no triangle is hit, the background color may be used to shade a pixel).

Ray tracing without hardware acceleration requires thousands of software instruction slots per ray to test successively smaller bounding boxes in the BVH structure until possibly hitting a triangle. It’s a computationally-intensive process making it impossible to do on GPUs in real-time without hardware-based ray tracing acceleration.
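To make that cost concrete, this is roughly the per-bounding-box test that software ray tracing performs over and over during BVH traversal: the classic "slab" ray/AABB intersection. It's a simplified sketch (real traversal kernels handle degenerate cases more carefully); the point is that RT cores run tests like this, plus the ray/triangle tests, in fixed-function hardware:

```python
# Slab-method ray vs axis-aligned bounding box test. inv_dir is the
# precomputed reciprocal of the ray direction (infinity for zero components).
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin < tmax  # strict: degenerate grazing cases count as misses

inv = (1.0, float("inf"), float("inf"))        # ray direction (1, 0, 0)
# A ray along +X hits a unit box centred at (5, 0, 0)...
print(ray_hits_aabb((0, 0, 0), inv, (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))  # True
# ...but misses the same box shifted up the Y axis.
print(ray_hits_aabb((0, 0, 0), inv, (4.5, 2.5, -0.5), (5.5, 3.5, 0.5)))   # False
```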

The RT Cores in Ampere can process all the BVH traversal and ray-triangle intersection testing, saving the SM from spending the thousands of instruction slots per ray, which could be an enormous amount of instructions for an entire scene. The RT Core includes two specialized units. The first unit does bounding box tests, and the second unit does ray-triangle intersection tests.

The SM only has to launch a ray probe, and the RT core does the BVH traversal and ray-triangle tests, and return a hit or no hit to the SM. Also unlike the last generation, Ampere SM can process two compute workloads simultaneously, allowing ray-tracing & graphics/compute workloads to be done concurrently.

In a visual demonstration, NVIDIA has shown how RT and Tensor cores speed up ray tracing and shader workloads significantly. A fully ray-traced frame from Wolfenstein: Youngblood was taken as an example. The last-gen RTX 2080 SUPER takes 51ms to render the frame if it does everything with its shaders (CUDA cores). With RT cores and shaders working in tandem, the processing time drops to just 20ms, less than half. Adding Tensor cores to the mix reduces the rendering time even further, to just 12ms (~83 FPS).

However, with Ampere, each standard processing block receives a huge performance uplift. With an RTX 3080, the same frame can be rendered within 37ms on the Shader cores alone, 11ms with the RT+Shader cores, and 6.7ms (150 FPS) with all three core technologies working together. That's half the time of what Turing took to render the same scene.
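Restating those frame-time figures as frame rates (1000 ms divided by the frame time):

```python
# The frame-time figures quoted above, converted to FPS.
def fps(frame_time_ms):
    return 1000.0 / frame_time_ms

for label, ms in [("RTX 2080 SUPER, shaders only", 51),
                  ("RTX 2080 SUPER, shaders + RT cores", 20),
                  ("RTX 2080 SUPER, shaders + RT + Tensor", 12),
                  ("RTX 3080, shaders + RT + Tensor", 6.7)]:
    print(f"{label}: {fps(ms):.0f} FPS")
```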

With each new generation of graphics cards, NVIDIA delivers a new range of display technologies. This generation is no different, and we see significant updates to not only the display engine but also the graphics interconnect. With the adoption of faster GDDR6X memory, which provides higher bandwidth, along with faster compression and more cache, gaming applications can now run at higher resolutions with more detail on the display.

The Ampere Display Engine supports two new display technologies, HDMI 2.1 and DisplayPort 1.4a with DSC 1.2a. HDMI 2.1 allows for up to 48 Gbps of total bandwidth and allows for up to 4K 240Hz HDR and 8K 60Hz HDR.

DisplayPort 1.4a allows for up to 8K resolutions with 60Hz refresh rates and includes VESA's display stream compression 1.2 technology with visually lossless compression. You can run up to two 8K displays at 60 Hz using two cables, one for each display. In addition to that, Ampere also supports HDR processing natively with tone mapping added to the HDR pipeline.

Ampere GPUs also ship with the Fifth Generation NVDEC decoder unit, which adds AV1 hardware decode support. The updated decoder also handles MPEG-2, VC-1, H.264 (AVCHD), H.265 (HEVC), VP8, and VP9.

Ampere also carries the 7th Generation NVENC encoder, offering seamless hardware-accelerated encoding of up to 4K on H.264 and 8K on HEVC.

NVIDIA RTX IO - Blazing Fast Read Speeds With GPU Utilization

As storage sizes have grown, so has storage performance. Gamers are increasingly turning to SSDs to reduce game load times: while hard drives are limited to 50-100 MB/sec throughput, the latest M.2 PCIe Gen4 SSDs deliver up to 7 GB/sec. In the traditional storage model, game data is read from the drive and then passed through system memory and the CPU before being handed to the GPU.

Historically games have read files from the hard disk, using the CPU to decompress the game image. Developers have used lossless compression to reduce install sizes and to improve I/O performance. However, as storage performance has increased, traditional file systems and storage APIs have become a bottleneck. For example, decompressing game data from a 100 MB/sec hard drive takes only a few CPU cores, but decompressing data from a 7 GB/sec PCIe Gen4 SSD can consume more than twenty AMD Ryzen Threadripper 3960X CPU cores!
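A rough model of that CPU-side cost makes the point. The per-core decompression rate below is an illustrative assumption for the sketch, not an NVIDIA figure:

```python
# Crude model of CPU decompression cost vs storage throughput.
PER_CORE_DECOMPRESS_MBPS = 300  # hypothetical lossless decompress rate per core

def cpu_cores_needed(storage_mbps):
    return storage_mbps / PER_CORE_DECOMPRESS_MBPS

print(round(cpu_cores_needed(100), 1))   # 0.3 - a hard drive barely registers
print(round(cpu_cores_needed(7000), 1))  # 23.3 - a 7 GB/s Gen4 SSD swamps the CPU
```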

Using the traditional storage model, game decompression can consume all 24 cores on a Threadripper CPU. Modern game engines have exceeded the capability of traditional storage APIs; a new generation of I/O architecture is needed. (Chart: data transfer rates as gray bars, CPU cores required as black/blue blocks.)

NVIDIA RTX IO is a suite of technologies that enable rapid GPU-based loading and decompression of game assets, accelerating I/O performance by up to 100x compared to hard drives and traditional storage APIs. When used with Microsoft’s new DirectStorage for Windows API, RTX IO offloads dozens of CPU cores’ worth of work to your RTX GPU, improving frame rates, enabling near-instantaneous game loading, and opening the door to a new era of large, incredibly detailed open-world games.

Object pop-in and stutter can be reduced, and high-quality textures can be streamed at incredible rates, so even if you’re speeding through a world, everything runs and looks great. In addition, with lossless compression, game download and install sizes can be reduced, allowing gamers to store more games on their SSD while also improving their performance.

NVIDIA RTX IO plugs into Microsoft’s upcoming DirectStorage API which is a next-generation storage architecture designed specifically for state-of-the-art NVMe SSD-equipped gaming PCs and the complex workloads that modern games require. Together, streamlined and parallelized APIs specifically tailored for games allow dramatically reduced IO overhead and maximize performance/bandwidth from NVMe SSDs to your RTX IO-enabled GPU.

Specifically, NVIDIA RTX IO brings GPU-based lossless decompression, allowing reads through DirectStorage to remain compressed and delivered to the GPU for decompression. This removes the load from the CPU, moving the data from storage to the GPU in a more efficient, compressed form, and improving I/O performance by a factor of two.

GeForce RTX GPUs will deliver decompression performance beyond the limits of even Gen4 SSDs, offloading potentially dozens of CPU cores' worth of work to ensure maximum overall system performance for next-generation games. Lossless decompression is implemented with high-performance, asynchronously scheduled compute kernels. This functionality leverages the DMA and copy engines of Turing and Ampere, as well as the advanced instruction set and architecture of these GPUs' SMs.

The advantage is that the enormous compute power of the GPU can be leveraged for burst or bulk loading (at level load, for example), acting as a high-performance I/O processor and delivering decompression performance well beyond the limits of Gen4 NVMe. During streaming scenarios, the required bandwidth is a tiny fraction of the GPU's capability, further leveraging the advanced asynchronous compute capabilities of Turing and Ampere. Microsoft is targeting a developer preview of DirectStorage for Windows next year, and NVIDIA Turing and Ampere gamers will be able to take advantage of RTX IO enhanced games as soon as they become available.

NVLINK For GeForce RTX 3090 And Titan Class Products Only!

NVIDIA has said farewell to its SLI (Scalable Link Interface) interconnect for consumer graphics cards. It will now be using the NVLink interconnect, which has already featured on its Turing GPUs. The reason is that SLI simply couldn't provide enough bandwidth for Ampere GPUs.

A single x8 NVLink channel provides 25 GB/s of peak bandwidth, and the GA102 GPU carries four x4 links, giving it 50 GB/s in each direction and 100 GB/s bi-directionally. Using NVLink on high-end cards can be beneficial for high-resolution gaming, but there's a reason NVIDIA still restricts users from 3-way and 4-way SLI.
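Those NVLink figures are self-consistent, as a quick sketch shows:

```python
# NVLink math from the figures above: an x8 channel is 25 GB/s, and GA102
# carries four x4 links, so each link moves half the x8 rate per direction.
X8_CHANNEL_GBPS = 25
LINKS, LINK_WIDTH = 4, 4
per_link = X8_CHANNEL_GBPS * LINK_WIDTH / 8    # GB/s per x4 link
per_direction = LINKS * per_link               # GB/s each way
bidirectional = 2 * per_direction              # GB/s total
print(per_link, per_direction, bidirectional)  # 12.5 50.0 100.0
```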

Multi-GPU still isn't well optimized, so you won't see many benefits unless you are running the highest-end graphics cards. That's another reason why the RTX 3080 and RTX 3070 are deprived of NVLink connectors. NVLink bridges cost $79 US each and are sold separately.


The NVIDIA GeForce RTX 3070 is a force to be reckoned with. While it doesn't take the flagship throne, it is surprisingly faster than the GeForce RTX 2080 Ti, Turing's former flagship, at a fraction of the price.

NVIDIA designed the GeForce RTX 3070 to be the gaming champ, powering the next generation of AAA gaming titles with superb visuals and insane fluidity. It's not just the FPS that matters these days; visuals and smoother frame delivery matter too, and this is exactly what the GeForce RTX 30 series is made to excel at. There's a lot to talk about regarding NVIDIA's Ampere gaming graphics cards, so let's start with the specifications.

Marvels of NVIDIA Ampere Architecture - 2nd Generation RTX
Enabling the blistering performance of the new RTX 30 Series GPUs and the NVIDIA Ampere architecture are cutting-edge technologies and over two decades of graphics R&D, including:

  • New streaming multiprocessors: The building block for the world’s fastest, most efficient GPU, delivering 2x the FP32 throughput of the previous generation, and 30 Shader-TFLOPS of processing power.
  • Second-gen RT Cores: New dedicated RT Cores deliver 2x the throughput of the previous generation, plus concurrent ray tracing and shading and compute, with 58 RT-TFLOPS of processing power.
  • Third-gen Tensor Cores: New dedicated Tensor Cores, with up to 2x the throughput of the previous generation, making it faster and more efficient to run AI-powered technologies, like NVIDIA DLSS, and 238 Tensor-TFLOPS of processing power.
  • NVIDIA RTX IO: Enables rapid GPU-based loading and game asset decompression, accelerating input/output performance by up to 100x compared with hard drives and traditional storage APIs. In conjunction with Microsoft’s new DirectStorage for Windows API, RTX IO offloads dozens of CPU cores’ worth of work to the RTX GPU, improving frame rates and enabling near-instantaneous game loading.
  • World’s fastest graphics memory: NVIDIA has worked with Micron to create the world’s fastest discrete graphics memory for the RTX 30 Series, GDDR6X. It provides data speeds of close to 1TB/s system memory bandwidth for graphics card applications, maximizing game and app performance.
  • Next-gen process technology: New 8N NVIDIA custom process from Samsung, which allows for higher transistor density and more efficiency.

NVIDIA GeForce RTX 3070 Graphics Card Specifications - GA104 GPU & 8 GB GDDR6 Memory

At the heart of the NVIDIA GeForce RTX 3070 graphics card lies the GA104 GPU, one of several Ampere GPUs we will be getting in the gaming segment. The GA104 is the second-fastest Ampere chip in the stack and is based on Samsung's 8nm (8N) process node. The GPU measures 392.5mm2 and packs 17.4 billion transistors, almost 93% of the transistor count of the TU102 GPU in nearly half the die area, which is an insane gain in density.

For the GeForce RTX 3070, NVIDIA has enabled a total of 46 SM units, which results in 5888 CUDA cores. In addition to the CUDA cores, NVIDIA's GeForce RTX 3070 also comes packed with next-generation RT (ray tracing) cores, Tensor cores, and the brand new SM or streaming multiprocessor units.

In terms of memory, the GeForce RTX 3070 features 8 GB of GDDR6 memory clocked at 14 Gbps. That, along with a full, uncut 256-bit bus interface, delivers a cumulative bandwidth of 448 GB/s. The NVIDIA GeForce RTX 3070 has a TGP of 220W.
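The bandwidth figure follows directly from the per-pin data rate and the bus width; a quick sketch:

```python
# Memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits/byte.
def mem_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(mem_bandwidth_gb_s(14, 256))    # 448.0 GB/s - RTX 3070 (GDDR6)
print(mem_bandwidth_gb_s(19, 320))    # 760.0 GB/s - RTX 3080 (GDDR6X)
print(mem_bandwidth_gb_s(19.5, 384))  # 936.0 GB/s - RTX 3090 (GDDR6X)
```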

NVIDIA GeForce RTX 30 Series 'Ampere' Graphics Card Specifications:

| Graphics Card Name | GeForce RTX 3060 Ti | GeForce RTX 3070 | GeForce RTX 3070 Ti? | GeForce RTX 3080 | GeForce RTX 3090 |
| --- | --- | --- | --- | --- | --- |
| GPU Name | Ampere GA104-200 | Ampere GA104-300 | Ampere GA102-150 | Ampere GA102-200 | Ampere GA102-300 |
| Process Node | Samsung 8nm | Samsung 8nm | Samsung 8nm | Samsung 8nm | Samsung 8nm |
| Die Size | 392.5mm2 | 392.5mm2 | 628.4mm2 | 628.4mm2 | 628.4mm2 |
| Transistors | 17.4 Billion | 17.4 Billion | 28.3 Billion | 28.3 Billion | 28.3 Billion |
| CUDA Cores | 4864 | 5888 | 7424 | 8704 | 10496 |
| TMUs / ROPs | 152 / 80 | 184 / 96 | 232 / 80 | 272 / 96 | 328 / 112 |
| Tensor / RT Cores | 152 / 38 | 184 / 46 | 232 / 58 | 272 / 68 | 328 / 82 |
| Base Clock | 1410 MHz | 1500 MHz | TBA | 1440 MHz | 1400 MHz |
| Boost Clock | 1665 MHz | 1730 MHz | TBA | 1710 MHz | 1700 MHz |
| FP32 Compute | 16.2 TFLOPs | 20 TFLOPs | TBA | 30 TFLOPs | 36 TFLOPs |
| RT TFLOPs | 32.4 TFLOPs | 40 TFLOPs | TBA | 58 TFLOPs | 69 TFLOPs |
| Tensor-TOPs | TBA | 163 TOPs | TBA | 238 TOPs | 285 TOPs |
| Memory Capacity | 8 GB GDDR6 | 8 GB GDDR6 | 10 GB GDDR6X? | 10 GB GDDR6X | 24 GB GDDR6X |
| Memory Bus | 256-bit | 256-bit | 320-bit | 320-bit | 384-bit |
| Memory Speed | 14 Gbps | 14 Gbps | TBA | 19 Gbps | 19.5 Gbps |
| Bandwidth | 448 GB/s | 448 GB/s | TBA | 760 GB/s | 936 GB/s |
| TGP | 180W? | 220W | 250-280W | 320W | 350W |
| Price (MSRP / FE) | $399 US? | $499 US | $599 US? | $699 US | $1499 US |
| Launch (Availability) | November 2020? | 29th October | TBA | 17th September | 24th September |

NVIDIA GeForce RTX 3070 Graphics Card Cooling & Design- Next-Gen NVTTM Founders Edition Design

Unlike the flow-through front-and-back cooling system that the GeForce RTX 3090 and GeForce RTX 3080 incorporate, the NVIDIA GeForce RTX 3070 makes use of a dual-fan cooling system with both fans blowing air through the central heatsink.

The Founders Edition cooler uses a full aluminum alloy heatsink coated with a nano-carbon finish, which should do a really good job of keeping temperatures in check, backed by a dense fin stack and heat pipe design.

NVIDIA GeForce RTX 3070 Graphics Card PCB & Power - Designed To Be Overclocked!

The GeForce RTX 30 series Founders Edition cards, including the GeForce RTX 3070, feature the 12-pin Micro-Fit 3.0 power connector. It doesn't require a power supply upgrade: each card ships with a bundled 8-pin to 12-pin adapter (the RTX 3070 needs only a single 8-pin, while the larger cards use two), so you can run your latest graphics card without any compatibility issues.

The placement of the 12-pin connector on the PCB is also noteworthy. It sits in a standard horizontal position but right in the middle of the shroud, which helps shorten the power delivery path to the GPU. Judging by the PCB design, we can tell why NVIDIA moved to a single 12-pin plug instead of the usual dual 8-pin layout: there's limited room on the PCB, so a smaller, more compact power input was necessary.

NVIDIA GeForce RTX 3070 Graphics Card Price & Availability - Both Custom & Reference Designs at Launch

The NVIDIA GeForce RTX 3070 is being announced today and will be launching to consumers on 29th October. The first wave of graphics cards to hit the market would be the reference Founders Edition variant which will cost $499 US. The NVIDIA GeForce RTX 3070 will feature a price of $499 (MSRP) however custom models will vary depending on their design and the extra horse-power that they have to offer.

NVIDIA isn't sharing detailed performance numbers right now, but from what has been showcased, the GeForce RTX 3070 is faster than an RTX 2080 Ti, the RTX 3080 is a good bit ahead of the RTX 2080 Ti, and the RTX 3090 is as much as 50% faster than the RTX 2080 Ti, which is very impressive for the full lineup stack.

The GeForce RTX 3070 is an insane little beast, featuring a 60% performance uplift over its RTX 2070 predecessor and powering next-generation AAA titles at smooth frame rates, even at the most demanding resolutions such as 4K.

Presentation has been key for NVIDIA's Founders Edition cards since the 700 series, and the GeForce RTX 3070 is no different. Carrying over the packaging design cues of the previous 30 Series launches, we are greeted by the GeForce RTX 3070 Founders Edition lying down, ready to be moved into your system.

The card itself is very similar in size and design to the last-generation RTX 2070, but with the flow-through cooler heatsink and fans of this generation. Also, the card doesn't light up; I'm not happy about that, it really should light up.

The outward-facing side of the RTX 3070 carries the design signatures of the bigger GA102-based Ampere cards, even holding on to the 12-pin connector despite its adapter needing only a single 8-pin. Again, the GeForce RTX logo does not light up on the GeForce RTX 3070.

It's easy to see that the open section of the heatsink is reinforced with small pipes throughout, in addition to the heat pipes coming from the main GPU section of the card. The heatsink is dense enough to move heat effectively, yet open enough to keep noise as low as possible.

I did attempt to dismantle the card and take a look at the board but after breaking a piece of the retention clip for the rear fan ribbon cable (look closely and you see the notch I broke) I gave up and figured I'd leave this endeavor to the guys over at Gamers Nexus to tear apart. Do your thing Steve, do your thing. But I did at least want to share a look at the back of the PCB and how the card looks with the backplate removed.

We used the following test system for comparison between the different graphics cards. The latest drivers that were available at the time of testing were used from AMD and NVIDIA on an updated version of Windows 10. All games that were tested were patched to the latest version for better performance optimization for NVIDIA and AMD GPUs.

Test System

Components X570
CPU Ryzen 9 3900X 4.3GHz All Core Lock (disable one CCD for 3600X Results)
Memory 32GB HyperX Predator DDR4 3600
Motherboard ASUS TUF Gaming X570 Plus-WiFi
Storage TeamGroup Cardea 1TB NVMe PCIe 4.0
PSU Cooler Master V1200 Platinum
Windows Version Latest version of Windows 10 at the time of testing
Hardware-Accelerated GPU Scheduling On if supported by GPU and driver.

Graphics Cards Tested

GPU Architecture Core Count Clock Speed Memory Capacity Memory Speed
NVIDIA RTX 3070 FE Ampere 5888 1500/1730 8GB GDDR6 14Gbps
NVIDIA RTX 3080 FE Ampere 8704 1440/1710 10GB GDDR6X 19Gbps
NVIDIA RTX 2080 Ti FE Turing 4352 1350/1635 11GB GDDR6 14Gbps
NVIDIA RTX 2080 SUPER FE Turing 3072 1650/1815 8GB GDDR6 15.5Gbps
NVIDIA RTX 2070 SUPER FE Turing 2560 1605/1770 8GB GDDR6 14Gbps
NVIDIA GTX 1080 FE Pascal 2560 1607/1733 8GB GDDR5X 10Gbps
NVIDIA GTX 1070 FE Pascal 1920 1506/1683 8GB GDDR5 8Gbps
AMD Radeon RX 5700 XT Navi 10 2560 1605/1755/1905 8GB GDDR6 14Gbps
AMD RX Vega 64 Vega 10 4096 1247/1546 8GB HBM2 945MHz
Sapphire RX 5500 XT 4GB Navi 14 1408 1737/1845 4GB GDDR6 14Gbps

Drivers Used

Drivers  
Radeon Settings 20.10.1
GeForce 456.96
  • All games were tested at 2560×1440 (1440p), 3440×1440 (ultrawide 1440p), and 3840×2160 (4K) resolutions.
  • Image Quality and graphics configurations are provided with each game description.
  • The "reference" cards are the stock configs.

Firestrike

Firestrike runs on the DX11 API and is still a good measure of GPU scaling performance. In this test we ran the Extreme and Ultra versions of Firestrike, which run at 1440p and 4K respectively, and recorded the Graphics Score only, since the Physics and Combined scores are not pertinent to this review.

Time Spy

Time Spy runs on the DX12 API and we used it in the same manner as Firestrike, recording only the Graphics Score, as the Physics score measures CPU performance and isn't important to the testing we are doing here.

Port Royal

Port Royal is another great tool in the 3DMark suite, but this one is 100% targeting Ray Tracing performance. It loads up ray traced shadows, reflections, and global illumination to really tax the performance of the graphics cards that either have hardware-based or software-based ray tracing support.

Thermals

Thermals were measured on our open test bench after running the Time Spy graphics test 2 on loop for 30 minutes, recording the highest temperature reported. The room was climate controlled and kept at a constant 22°C throughout the testing.

Forza Horizon 4

Forza Horizon 4 carries on the open-world racing tradition of the Horizon series. The latest DX12 powered entry is beautifully crafted and amazingly well executed and is a great showcase of DX12 games. We use the benchmark run with all of the settings set to non-dynamic and an uncapped framerate to gather these results.

Shadow of the Tomb Raider

Shadow of the Tomb Raider, unlike its predecessor, does a good job putting DX12 to use and results in higher performance than the DX11 counterpart in this title and because of that, we test this title in DX12.  I do use the second segment of the benchmark run to gather these numbers as it is more indicative of in-game scenarios where the foliage is heavy.

Rainbow 6 Siege

Rainbow 6 Siege has maintained a massive following since its launch and is consistently in Steam's top ten by player count. In a tactical yet fast-paced competitive title where higher framerates are essential, we include it despite its ludicrously high framerates. We use the Vulkan Ultra preset with the High Definition Texture Pack and gather our results from the built-in benchmarking tool.

DOOM Eternal

DOOM Eternal brings hell to earth with the Vulkan powered idTech 7. We test this game using the Ultra Nightmare preset and follow our in-game benchmarking run to stay as consistent as possible.

Gears Tactics

Gears Tactics is the latest in the Gears franchise and takes things in a completely different direction with the gameplay design. It is built on a DX12 based Unreal Engine 4 build. We used the maximum settings allowed but refrained from enabling Variable Rate Shading, as not all cards are capable of supporting this feature.

Ghost Recon Breakpoint

Ghost Recon Breakpoint is powered by the latest iteration of the Anvil Next 2.0 game engine. This is the same engine that was used in Assassin's Creed Odyssey but in Breakpoint has been updated to support the Vulkan API. We performed our tests using the High Preset with the Vulkan API.

Call of Duty Modern Warfare (2019)

Call of Duty Modern Warfare is back, and this time on a new engine running DX12 to allow for some sick DXR ray-traced shadows, but we're not testing that here; those results are in the RT section. We tested in the 'Fog of War' mission, the same run we use for our RT testing. At 1440p we set the settings all to High.

Resident Evil 3

The Resident Evil 3 Remake has surpassed the RE2 Remake in visuals and is the latest use of the RE Engine. While it does have DX12 support, the DX11 implementation is far superior, so we will be sticking to DX11 for this title. We use the cutscene where Jill and Carlos enter the subway car for the first time, taking a two-minute capture at that point.

Borderlands 3

Borderlands 3 has made its way into the test lineup thanks to strong demand by gamers and simply delivering MORE Borderlands. This game is rather intensive beyond the Medium preset, but since we're testing the 'Ultimate 1440p' card, High it is. We tested using the built-in benchmark utility.

Total War Saga: Troy

Total War Saga: Troy is powered by their TW Engine 3 (Total War Engine 3) and in this iteration, they have stuck to a strictly DX11 release. We tested the game using the built-in benchmark using the Dynasty model that represents a battle with many soldiers interacting at once and is more representative of normal gameplay.

Forza Horizon 4

Forza Horizon 4 carries on the open-world racing tradition of the Horizon series. The latest DX12 powered entry is beautifully crafted and amazingly well executed and is a great showcase of DX12 games. We use the benchmark run with all of the settings set to non-dynamic and an uncapped framerate to gather these results.

Shadow of the Tomb Raider

Shadow of the Tomb Raider, unlike its predecessor, does a good job putting DX12 to use and results in higher performance than the DX11 counterpart in this title and because of that, we test this title in DX12.  I do use the second segment of the benchmark run to gather these numbers as it is more indicative of in-game scenarios where the foliage is heavy.

Rainbow 6 Siege

Rainbow 6 Siege has maintained a massive following since its launch and is consistently in Steam's top ten by player count. In a tactical yet fast-paced competitive title where higher framerates are essential, we include it despite its ludicrously high framerates. We use the Vulkan Ultra preset with the High Definition Texture Pack and gather our results from the built-in benchmarking tool.

DOOM Eternal

DOOM Eternal brings hell to earth with the Vulkan powered idTech 7.  We test this game using the Ultra Nightmare Preset and follow our in-game benchmarking to stay as consistent as possible.

Gears Tactics

Gears Tactics is the latest in the Gears franchise and takes things in a completely different direction with the gameplay design. It is built on a DX12 based Unreal Engine 4 build. We used the maximum settings allowed but refrained from enabling Variable Rate Shading, as not all cards are capable of supporting this feature.

Ghost Recon Breakpoint

Ghost Recon Breakpoint is powered by the latest iteration of the Anvil Next 2.0 game engine. This is the same engine that was used in Assassin's Creed Odyssey but in Breakpoint has been updated to support the Vulkan API. We performed our tests using the High Preset with the Vulkan API.

Call of Duty Modern Warfare (2019)

Call of Duty Modern Warfare is back, and this time on a new engine running DX12 to allow for some sick DXR ray-traced shadows, but we're not testing that here; those results are in the RT section. We tested in the 'Fog of War' mission, the same run we use for our RT testing. At ultrawide 1440p we set the settings all to High.

Resident Evil 3

The Resident Evil 3 Remake has surpassed the RE2 Remake in visuals and is the latest use of the RE Engine. While it does have DX12 support, the DX11 implementation is far superior, so we will be sticking to DX11 for this title. We use the cutscene where Jill and Carlos enter the subway car for the first time, taking a two-minute capture at that point.

Borderlands 3

Borderlands 3 has made its way into the test lineup thanks to strong demand by gamers and simply delivering MORE Borderlands. This game is rather intensive beyond the Medium preset, but since we're testing the 'Ultimate UW 1440p' card, High it is. We tested using the built-in benchmark utility.

Total War Saga: Troy

Total War Saga: Troy is powered by their TW Engine 3 (Total War Engine 3) and in this iteration, they have stuck to a strictly DX11 release. We tested the game using the built-in benchmark using the Dynasty model that represents a battle with many soldiers interacting at once and is more representative of normal gameplay.

Forza Horizon 4

Forza Horizon 4 carries on the open-world racing tradition of the Horizon series. The latest DX12 powered entry is beautifully crafted and amazingly well executed and is a great showcase of DX12 games. We use the benchmark run with all of the settings set to non-dynamic and an uncapped framerate to gather these results.

Shadow of the Tomb Raider

Shadow of the Tomb Raider, unlike its predecessor, does a good job putting DX12 to use and results in higher performance than the DX11 counterpart in this title and because of that, we test this title in DX12.  I do use the second segment of the benchmark run to gather these numbers as it is more indicative of in-game scenarios where the foliage is heavy.

Rainbow 6 Siege

Rainbow 6 Siege has maintained a massive following since its launch and is consistently in Steam's top ten by player count. In a tactical yet fast-paced competitive title where higher framerates are essential, we include it despite its ludicrously high framerates. We use the Vulkan Ultra preset with the High Definition Texture Pack and gather our results from the built-in benchmarking tool.

DOOM Eternal

DOOM Eternal brings hell to earth with the Vulkan powered idTech 7.  We test this game using the Ultra Nightmare Preset and follow our in-game benchmarking to stay as consistent as possible.

Gears Tactics

Gears Tactics is the latest in the Gears franchise and takes things in a completely different direction with the gameplay design. It is built on a DX12 based Unreal Engine 4 build. We used the maximum settings allowed but refrained from enabling Variable Rate Shading, as not all cards are capable of supporting this feature.

Ghost Recon Breakpoint

Ghost Recon Breakpoint is powered by the latest iteration of the Anvil Next 2.0 game engine. This is the same engine that was used in Assassin's Creed Odyssey but in Breakpoint has been updated to support the Vulkan API. We performed our tests using the High Preset with the Vulkan API.

Call of Duty Modern Warfare (2019)

Call of Duty Modern Warfare is back, and this time on a new engine running DX12 to allow for some sick DXR ray-traced shadows; those results are in the RT section. We tested in the 'Fog of War' mission. At 4K we set the settings all to High.

Resident Evil 3

The Resident Evil 3 Remake has surpassed the RE2 Remake in visuals and is the latest use of the RE Engine. While it does have DX12 support, the DX11 implementation is far superior, so we will be sticking to DX11 for this title. We use the cutscene where Jill and Carlos enter the subway car for the first time, taking a two-minute capture at that point.

Borderlands 3

Borderlands 3 has made its way into the test lineup thanks to strong demand by gamers and simply delivering MORE Borderlands. This game is rather intensive beyond the Medium preset, but since we're testing the 'Ultimate UW 1440p' card, High it is. We tested using the built-in benchmark utility.

Total War Saga: Troy

Total War Saga: Troy is powered by their TW Engine 3 (Total War Engine 3) and in this iteration, they have stuck to a strictly DX11 release. We tested the game using the built-in benchmark using the Dynasty model that represents a battle with many soldiers interacting at once and is more representative of normal gameplay.


Shadow of the Tomb Raider

Shadow of the Tomb Raider, unlike its predecessor, does a good job putting DX12 to use and results in higher performance than the DX11 counterpart in this title, and because of that, we test this title in DX12.  I do use the second segment of the benchmark run to gather these numbers as it is more indicative of in-game scenarios where the foliage is heavy. SotTR features Ray Traced Shadows as well as DLSS and we used both in the benchmarks with the game set to the 'Highest' preset and RT Shadows at Ultra with DLSS enabled.

Modern Warfare

Call of Duty Modern Warfare is back and this time on a new engine running DX12 to allow for some sick DXR Ray Traced Shadows. We tested in the 'Fog of War' mission where we tested our RT performance run. At 1440p we set the settings all to High with ray-traced shadows enabled.

Control

Control is powered by Remedy's Northlight Storytelling Engine but severely pumped up to support multiple functions of ray-traced effects. We ran this through our test run in the cafeteria with all ray tracing functions on high and the game set to high. DLSS was enabled for this title in the quality setting.

Battlefield V

Battlefield V was one of the earlier games in the RTX 20 Series lifecycle to receive a DXR update. Battlefield V was tested on the opening sequence of the Tirailleur war story, as it has consistently been one of the more demanding scenes for the ray-traced reflections featured in this game. DLSS was enabled for this game.

Metro Exodus

Metro Exodus was the third entry in the Metro series, and as Artyom ventures away from the Metro, he, and you, are able to explore the world with impressive RT Global Illumination. RTGI has proven to be quite the intense feature to run. Metro Exodus also supports DLSS, so it was used in our testing. Advanced PhysX was left disabled, but Hairworks was left on.

Quake 2 RTX

Quake II RTX is much like Minecraft with RTX in the sense that it is fully path-traced, so no rasterization here. This one however doesn't support DLSS so you're going to have to brute force it to acceptable framerates. Thankfully if these numbers don't do it for you then you can always adjust the resolution slider and enjoy a healthy performance boost.

Boundary

Boundary is a multiplayer tactical shooter...in space. It's not out yet, so treat this one as more of a synthetic benchmark, as there are likely to be quite a few improvements; for now we had access to the benchmark, and it's a doozy to run. The benchmark features full ray-tracing effects as well as DLSS, which we ran in Quality mode.

Bright Memory

Bright Memory is an action shooter currently in early access on Steam; it will be called Bright Memory: Infinite when it fully releases. A one-man team has turned this game into a showstopper, and it now features RT reflections as well as DLSS. We ran it at the High preset with DLSS set to Balanced for our testing.

Amid Evil

Amid Evil is a high-energy old-school shooter that seems like an unlikely recipient of RT features, but here we are with insane DXR support in a modern retro shooter. Featuring RT reflections, RT shadows, and NVIDIA's DLSS support, we had to put this one through the rounds and see how things went. The RTX version of this game is still in beta but publicly available for those who want to try it. We tested with all RT features on and DLSS enabled.

Death Stranding

Sam Porter Bridges has delivered one of the PS4's most anticipated games to the PC community and opened a whole new world of possibilities. This was the first game to feature the Decima Engine on PC and arguably did it best. Death Stranding may not feature ray-tracing effects, but it does showcase that DLSS can be used effectively even when RT isn't around. We tested this one just like we did in our launch coverage, with DLSS enabled.

Graphics cards and power draw have always gone hand in hand: how much performance do they put out for the power they take in? Measuring this accurately has not always been straightforward for reviewers and end-users. NVIDIA has developed its PCAT system, or Power Capture Analysis Tool, to capture direct power consumption from ALL graphics cards that plug into the PCIe slot, so you get a very clear barometer of actual power usage without relying on hacked-together methods.

The Old Way

The old method, for most anyway, was to simply use something along the lines of a Kill-A-Watt wall meter for power capture. This isn't the worst way, but as stated in our reviews, it doesn't quite capture the amount of power the graphics card alone is using. That results in some mental gymnastics: figure the system idle draw, the CPU load, and the GPU load, and estimate roughly where the graphics card lands. Not very accurate, to say the least.
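To make that guesswork concrete, here is a rough sketch of the estimation chain; all the wattage figures are invented illustrations, not measurements, and the PSU efficiency is itself a guess, which is exactly the problem:

```python
# The wall-meter "mental gymnastics" made explicit. Every input here is an
# estimate, and the errors compound; that is why PCAT exists.
def estimate_gpu_watts(wall_load_w, wall_idle_w, cpu_delta_w, psu_efficiency=0.90):
    """Rough GPU-only draw inferred from wall readings."""
    dc_load = wall_load_w * psu_efficiency   # strip estimated PSU conversion loss
    dc_idle = wall_idle_w * psu_efficiency
    return dc_load - dc_idle - cpu_delta_w   # whatever is left is "probably" the GPU

# e.g. 420 W at the wall under game load, 75 W idle, ~80 W extra CPU load
print(round(estimate_gpu_watts(420, 75, 80), 1))  # 230.5
```

A few percent of error on the efficiency guess or the CPU delta easily swings the result by tens of watts, which is why wall-meter numbers should only ever be read as ballpark figures.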

Another way is to use GPU-Z. This is the least reliable method, as you have to rely entirely on the software reading from the graphics card. Graphics cards vary in how they report power usage to software: some only report what the GPU core itself is using, without considering what the memory or other components are drawing.

The last way I'll mention is the use of a multi-meter amperage clamp across the PCIe slot by way of a riser cable with separate cables then more power clamps on all the PCIe power cables going into the graphics card. This method is very accurate for graphics card power but is also very cumbersome and typically results in you having to watch the numbers and document them as you see them rather than plotting them across a spreadsheet.

The PCAT Way

This is where PCAT (Power Capture Analysis Tool) comes into play. NVIDIA has developed quite a robust tool for measuring graphics card power at the hardware level, taking the guesswork out of the equation. The tool is quite simple to set up and get going. As far as components go, there are: a riser board for the GPU with a 4-pin Dupont cable; the PCAT module itself, which everything plugs into, with an OLED screen attached; three PCIe power cables for when a card calls for more than 2x 8-pin connectors; and a Micro-USB cable that allows you to capture the data on the system you're hooked up to or on a secondary monitoring system.

Well, that's what it looks like all hooked up on a test bench; you're not going to want to run this one in a case, for sure. Before anyone gets worried, performance is not affected at all, and the riser board is fully compliant with PCIe Gen 4.0. I'm not so certain about those exposed power points, however; I will be getting the hot glue gun out soon for that. Now, what does this do at this point? Two options: plug it into the computer it's all running on and let FrameView include the metrics, but that's for NVIDIA cards only, so that's a pass; or (what we do) plug it into a separate monitoring computer and observe and capture during testing scenarios.

The PCAT Power Profile Analyzer is the software tool provided to use to capture and monitor power readings across the PCI Express Power profile. The breadth of this tool is exceptionally useful for us here on the site to really explore what we can monitor. The most useful metric on here to me is the ability to monitor power across all sources, PCIe power cables (individually), and the PCIe slot itself.

Those who would rather pull long-form spreadsheets to make their own charts are fully able to do so, and can even quickly form performance-per-watt metrics. A very fun metric to monitor is watts per frame: how many watts does it take for the graphics card to produce one frame at a locked 60FPS in various games? We'll get into that next.
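As a sketch of that metric, watts per frame is just average board power divided by the locked frame rate; the numbers below are illustrative, not readings from our PCAT captures:

```python
# Watts per frame at a capped frame rate: average board power / FPS.
# The 150 W / 60 FPS figures are made-up examples for illustration only.
def watts_per_frame(avg_board_power_w: float, fps: float) -> float:
    """Energy cost of a single frame, expressed in watts at a locked cap."""
    return avg_board_power_w / fps

print(watts_per_frame(150, 60))  # 2.5 W per frame at a locked 60 FPS
```

Because the frame rate is locked, any difference between cards shows up entirely in the power column, which makes this a clean way to compare architectural efficiency.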

Control Power

Control was the first game we wanted to look at, running at 1440p with RT and DLSS on, and then again with DLSS off. This is the game NVIDIA used when showcasing the performance-per-watt improvements of Ampere, and well, they were right in that claim.

The Control results show that NVIDIA's measurements and claims of improvement were accurate, but that's not always the case. We tested Forza Horizon 4 the same way, but this time at 4K, targeting a locked 4K60 scene in this game.

Overclocking the GeForce RTX 3070 went much as it did for the RTX 3080. But instead of being given a power budget that allowed for a 115% power limit, we were treated to only a 109% power limit, since we only had a single 8-pin to work with. That allowed an additional 50MHz over the stock clock, resulting in a stable 1950-1995MHz core clock. Memory fared much better: we were able to see upwards of +1250MHz, but settled on a much more comfortable +1000MHz, making the effective speed 16Gbps with a memory bandwidth of 512GB/s, up from the stock 14Gbps and 448GB/s.
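For those checking the math, here is how the memory numbers work out, assuming (as the figures in this review imply) that a +1000MHz offset in the overclocking tool corresponds to +2Gbps effective on GDDR6:

```python
# GDDR6 overclock math on the RTX 3070's 256-bit bus. The x2 on the offset
# is an assumption about how the tool's MHz offset maps to effective Gbps.
def effective_gbps(stock_gbps: float, offset_mhz: float) -> float:
    """Effective memory rate after applying an overclock offset."""
    return stock_gbps + offset_mhz * 2 / 1000

def bandwidth_gbs(mem_gbps: float, bus_bits: int = 256) -> float:
    """Peak bandwidth in GB/s: effective rate x bus width / 8 bits."""
    return mem_gbps * bus_bits / 8

print(bandwidth_gbs(effective_gbps(14, 1000)))  # 512.0 GB/s at +1000 MHz (16 Gbps)
print(bandwidth_gbs(effective_gbps(14, 1250)))  # 528.0 GB/s at +1250 MHz (16.5 Gbps)
```

The same formula reproduces the stock 448GB/s at 14Gbps, so the overclocked figures quoted here are internally consistent.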

We went a step further this time and included undervolting here rather than in a separate article like we did for the RTX 3080; check that article if you want a guide on how to undervolt Ampere for hidden efficiency potential. We settled on 925mV on the core (not as much downward movement as the RTX 3080) with a core clock of 1900MHz that ran at around 1925MHz in-game. We set the memory to the same +1000MHz for an effective rate of 16Gbps and 512GB/s of bandwidth. Looking at the results overall, I think it's safe to say the undervolted core plus overclocked memory is the way to go. Time to tune it up, folks!

Firestrike

Firestrike runs on the DX11 API and is still a good measure of GPU scaling performance. In this test we ran the Ultra version of Firestrike, which runs at 4K, and recorded the Graphics Score only, since the Physics and Combined scores are not pertinent to this review.

Time Spy

Time Spy runs on the DX12 API and we used it in the same manner as Firestrike, recording only the Graphics Score, as the Physics score measures CPU performance and isn't important to the testing we are doing here.

Forza Horizon 4

Forza Horizon 4 carries on the open-world racing tradition of the Horizon series. The latest DX12 powered entry is beautifully crafted and amazingly well executed and is a great showcase of DX12 games. We use the benchmark run with all of the settings set to non-dynamic and an uncapped framerate to gather these results.

Rainbow 6 Siege

Rainbow 6 Siege has maintained a massive following since its launch and is consistently in Steam's top ten by player count. In a tactical yet fast-paced competitive title where higher framerates are essential, we include it despite its ludicrously high framerates. We use the Vulkan Ultra preset with the High Definition Texture Pack and gather our results from the built-in benchmarking tool.

Resident Evil 3

The Resident Evil 3 Remake has surpassed the RE2 Remake in visuals and is the latest use of the RE Engine. While it does have DX12 support, the DX11 implementation is far superior, so we will be sticking to DX11 for this title. We use the cutscene where Jill and Carlos enter the subway car for the first time, taking a two-minute capture at that point.

Thermals

Thermals were measured on our open test bench after running the Time Spy graphics test 2 on loop for 30 minutes, recording the highest temperature reported. The room was climate controlled and kept at a constant 22°C throughout the testing.

Power Consumption While Overclocked

That's what NVIDIA said regarding what to expect from the GeForce RTX 3070, and, well, they were right. It's been a somewhat expected trend over past launches: the GTX 970 was similar to the GTX 780 Ti, the GTX 1070 was similar to the GTX 980 Ti, and so forth, but they never quite matched the previous-gen top-dog Ti part, until now. The GeForce RTX 3070 delivers on the promise NVIDIA made, and manages to do it on the smaller-tier GA104 die with lower memory bandwidth to boot.

The card itself comes in a fair bit more compact, fitting that RTX 2080 Ti performance into an RTX 2070-sized package, with a much smaller PCB design that allows the flow-through heatsink design to be carried over from the bigger GA102-based cards, the RTX 3080 and RTX 3090. Can I point out one major complaint: the GeForce logo doesn't light up. Seriously, come on guys.

The new cooler design allows the card to run very cool, much cooler than expected for the unusually high 220W TDP of a 70-class card. The GeForce RTX 3070 retains the now-expected 12-pin connector but only needs a solid 8-pin plugged in. I did ask NVIDIA if there was any harm in using a dual 8-pin to 12-pin adapter, because most modular PSU cables plug into dual 8-pin on the PSU itself, and they assured me there was no issue. On a side note, I did borrow a dual 8-pin to 12-pin adapter and found myself able to push the core to the 2100MHz mark with ease, so I imagine there's quite some headroom in those overbuilt aftermarket cards you'll see reviews of soon enough.

The performance was absolutely solid. Want to pick the GeForce RTX 3070 for 1440p gaming? It'll kill it. Ultrawide 1440p? It'll kill that too. 4K? I would recommend stepping up a class, but the 3070 will get you in the game if you're okay with dialing back some settings. You're going to find RTX-enabled games running very well at 1440p when paired with DLSS, especially in the ever-growing list of titles that support the quite killer DLSS 2.0, and upcoming games like Cyberpunk 2077 and Watch Dogs: Legion are titles you'll want to take advantage of the RTX suite with.

Features like AV1 decode might not be fully mainstream yet, but the GeForce RTX 3070 carries support for it. Other features like the NVIDIA Broadcast suite are absolutely undeniable in the value department; that one is available across a large swath of GeForce cards, but it's still worth mentioning.

The elephant in the room here is the 8GB of VRAM. It's an understandable concern, as this marks the third generation where 8GB was the available VRAM for the 70-class card. I'm sure many were hoping to see more here, and I don't blame them. But so far we haven't seen it be an issue at 1440p or even 4K, as the results show. In many cases where it appeared there might be a VRAM-limited performance scenario, we found it more likely to be a bandwidth-limited situation, and the gap closed once the memory was overclocked.

Speaking of overclocking: while the core might not have moved much, the memory on these cards is absolutely insane for overclocking. I had no issues pushing this card to +1250MHz on the memory, resulting in a memory bandwidth of 528GB/s; while still less than the RTX 2080 Ti's 616GB/s, it's still impressive for a 256-bit bus card, especially considering its stock bandwidth is the usual 448GB/s. Looking at the results, it's clear the best performance comes from a combination of GPU core undervolting and memory overclocking.

The NVIDIA GeForce RTX 3070 has raised the bar on what to expect in terms of performance at $499, for now. It delivers high-framerate gaming at 1440p and ultrawide 1440p while still being able to punch up into the 4K class. Its solid RTX feature support and all-around performance keep it firmly planted in the highly coveted 1440p high-refresh market.

The post NVIDIA GeForce RTX 3070 8GB Founders Edition Graphics Card Review by Keith May appeared first on Wccftech.
