Nvidia’s GeForce GTX 970 drastically changed the landscape of high-end PC gaming, delivering a massive amount of performance with a fantastic TDP. It essentially rendered Nvidia’s own Kepler lineup obsolete and competed against AMD’s R9 290X while drawing considerably less power. But while the card is tremendous value, at over $350 (about £250) it’s still quite the investment for many gamers. Enter the GeForce GTX 960, which Nvidia released to claim the £150 market sweet spot.
From a power draw standpoint, it’s hard to argue with Nvidia’s offering – MSI recommend a modest 400 Watt PSU (with a minimum of 42 amps on the 12V rail), and with a TDP of just 120W, the card is a great fit for budget-conscious gamers or for a small form factor system (say, for the living room).
The GTX 960 is of course based on the Maxwell architecture, with a core dubbed GM206, whose specifications amount to half of the GM204 – the core powering the GeForce GTX 980. The card packs 1024 CUDA cores across eight Streaming Multiprocessors, 64 Texture Units, 32 ROPs, 1MB of Level 2 cache shared across the GPU, and finally a 128-bit memory interface, as opposed to the 256-bit interface found in, say, the GTX 980.
| Specs | GeForce GTX 960 | GeForce GTX 760 | AMD R9 285 |
|---|---|---|---|
| Core Clock | 1126 / 1178MHz (1241 / 1304MHz this sample) | 980MHz | Up to 918MHz |
| Memory Clock | 7GHz GDDR5 | 6GHz GDDR5 | 5.5GHz GDDR5 |
| Memory Bus Width | 128-bit | 256-bit | 256-bit |
| Bandwidth | 112.2 GB/s | 192 GB/s | 176 GB/s |
| Power Connectors | 1 x 8-Pin | 2 x 6-Pin | 2 x 6-Pin |
While the card has just 112.2GB/s of bandwidth, Nvidia argue that its blistering GDDR5 memory clocks (7010MHz effective), combined with third-generation color compression technology (see below for further details), make up for the lack of a wider bus. While we’re on the subject of clock speeds, MSI’s GeForce GTX 960 Gaming 2G sports a 100MHz overclock over the reference Nvidia design, providing a welcome increase in performance. Naturally, it’s still possible to tweak clock speeds yourself – we’ll be using MSI’s own Afterburner tool for just that purpose.
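Where that bandwidth figure comes from is simple arithmetic: peak bandwidth is the bus width (in bytes) multiplied by the effective memory data rate. A quick sketch using the specs quoted in this review:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective memory data rate)
# Figures below are the GTX 960 specs quoted in this review.
bus_width_bits = 128
effective_rate_hz = 7_010_000_000  # 7010MHz effective GDDR5

bandwidth_bytes_per_s = (bus_width_bits / 8) * effective_rate_hz
print(f"{bandwidth_bytes_per_s / 1e9:.1f} GB/s")  # -> 112.2 GB/s
```

The same formula shows why the GTX 980’s 256-bit bus at the same memory clock lands at double the figure, 224GB/s.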
There is a subtle advantage the Maxwell GM206 has over the GM204: the ability to decode H.265 (HEVC), making it even more attractive for those looking to build a low-power home-theater or small form factor PC for the living room. Aside from this nuance, the other features – including DirectX 12 compatibility, MFAA (Multi-Frame Sampled Anti-Aliasing), Dynamic Super Resolution (DSR) and Voxel Global Illumination – remain the same as its bigger brother.
Maxwell’s Color Compression Technology
Color compression and other techniques to reduce the data being shunted around the graphics card’s bus aren’t particularly new, but with each successive generation of cards Nvidia refine and further improve the technology. The lineage of color compression dates back to the GeForce FX series (released in late 2002), which compressed data at up to a 4:1 ratio; by today’s standards the technology is fairly crude – at a basic level it looks at the scene on a frame-by-frame basis, breaking the image into ‘sectors’ and then looking for portions of data it can compress (by saying, hey, these bits of data are all the same color, no need for all of that info!).
Next came Fermi – AKA the GeForce GTX 400 series – which took things a step further with the introduction of Delta Color Compression. This switches things up rather considerably by moving from regional compression to pattern compression: the data is compressed based on what’s different rather than what’s the same. Maxwell builds on this established color compression technology, with its third-generation implementation recognizing a greater number of patterns and thus compressing more actual color data.
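To make the “what’s different rather than what’s the same” idea concrete, here’s a deliberately simplified toy sketch of delta encoding – not Nvidia’s actual hardware scheme, just the underlying principle that in smooth image regions the differences between neighboring pixels are tiny and need far fewer bits than the raw values:

```python
def delta_encode(pixels):
    """Store the first pixel, then only the per-pixel differences.
    In smooth regions the deltas are small (often -1..1) and can be
    packed into far fewer bits than the raw 8-bit color values."""
    base = pixels[0]
    deltas = [b - a for a, b in zip(pixels, pixels[1:])]
    return base, deltas

def delta_decode(base, deltas):
    """Rebuild the original pixel row by accumulating the deltas."""
    pixels = [base]
    for d in deltas:
        pixels.append(pixels[-1] + d)
    return pixels

# A smooth gradient: the raw values span 8 bits, the deltas barely 2
row = [100, 101, 101, 102, 103, 103, 104]
base, deltas = delta_encode(row)
print(base, deltas)  # -> 100 [1, 0, 1, 1, 0, 1]
assert delta_decode(base, deltas) == row
```

Real hardware works on 2D pixel blocks and falls back to uncompressed storage when no known pattern fits, but the bandwidth saving comes from the same observation.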
In a nutshell, Nvidia claim this makes the 7010MHz (7 Gbps) GDDR5 on the GPU behave as though it were running at an effective 9.3 Gbps – an improvement of around a third. To put this into perspective, it means the “effective” bandwidth of the GTX 960 is closer to 150 GB/s.
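That figure follows directly from scaling the raw bandwidth by the claimed effective data rate:

```python
raw_bandwidth_gbs = 112.2        # 128-bit bus at 7010MHz effective
compression_gain = 9.3 / 7.0     # Nvidia's claimed effective vs raw data rate

effective_gbs = raw_bandwidth_gbs * compression_gain
print(f"{effective_gbs:.0f} GB/s")  # -> 149 GB/s, i.e. "closer to 150 GB/s"
```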
Introducing the MSI GeForce GTX 960 Gaming
So what’s the difference between Nvidia’s standard reference design and MSI’s Gaming 2G design? Primarily, an increased core clock and an improved cooler. As we mentioned above, the Gaming 2G features roughly a ten percent core clock increase over the reference design. Additionally, the MSI card features the Twin Frozr V cooler, which keeps the card even cooler than the reference unit. Another plus – particularly for the home theater crowd – is that the fans stop spinning entirely when the GPU isn’t under load, reducing the noise of your PC. Of course, in a high-end desktop with plenty of other noisy fans this isn’t a big deal, but in a Small Form Factor (SFF) or home theater build it’s a welcome addition to be sure. One slight note: MSI’s logo on the card actually glows – hardly the biggest of deals, but a nice touch.
The 2GB model we’re reviewing doesn’t have a backplate – and speaking of 2GB, is that enough memory for the games of 2015, let alone 2016? The answer depends on the resolution you’re targeting and whether you’re planning on running SLI (the GTX 960 supports two-way SLI) in the future. If you’re planning on picking up a second card for, say, 1440P gaming, you’d likely be better off going for the 4GB model – though at that resolution you might be better served by a GTX 970 or similar. For now, at 1080P at least, 2GB should be sufficient in virtually all titles. One does have to take into account that AMD are offering a few tempting alternatives, including the R9 285 (which also has 2GB of RAM) and the R9 280, which has a 3GB frame buffer.
As for connections, the GPU features one Dual-Link DVI-I port, a single HDMI 2.0 port (supporting 2160P at 60Hz) and three DisplayPorts, powering up to four displays at once should you have the desire for that much desktop space.
MSI GTX 960 Gaming 2G Benchmark Results
| 1920×1080 (average) | AMD R9 280 | AMD R9 285 | GTX 760 | MSI GTX 960 |
|---|---|---|---|---|
| Crysis 3 + FXAA | 45 FPS | 46 FPS | 40 FPS | 45.9 FPS |
| Metro: Last Light | 47 FPS | 51.06 FPS | 41 FPS | 52.15 FPS |
| Tomb Raider Ultra TressFX + FXAA | 51.4 FPS | 49.9 FPS | 42.3 FPS | 58.3 FPS |
| Sleeping Dogs Max + Extreme AA | 43.7 FPS | 43 FPS | 39.6 FPS | 43 FPS |
| BioShock Infinite | 64.8 FPS | 70.44 FPS | 63.8 FPS | 69.25 FPS |
| Batman: Arkham Origins Max + MSAA | 83 FPS | 78 FPS | 78 FPS | 79 FPS |
| THIEF Max + FXAA | 55.8 D3D / 70.4 Mantle | 56.0 D3D / 63.1 Mantle | 52.2 FPS | 71 FPS |
| 3DMark Fire Strike | 7219 | 7685 | 6021 | 7338 |
| Shadow of Mordor Ultra + Max Textures | 33 FPS | 45 FPS | 34 FPS | 41.06 FPS |
Looking at the table above, you can see that the GTX 960 takes the lead in most gaming applications, though it doesn’t quite have the grunt to run everything locked at 60FPS. Certain titles – in particular Shadow of Mordor – are extremely taxing (we’re running the highest quality textures, for which 6GB is recommended), but even so you’ll get an experience which looks considerably better than the PlayStation 4 or Xbox One versions of the game, and runs better too – it’s hard to argue with that, particularly at this price point.
| 2560×1440 (average) | AMD R9 280 | AMD R9 285 | GTX 760 | MSI GTX 960 |
|---|---|---|---|---|
| Crysis 3 + FXAA | 45 FPS | 46 FPS | 40 FPS | 29.2 FPS |
| Metro: Last Light | 47 FPS | 51.06 FPS | 41 FPS | 32.8 FPS |
| Tomb Raider Ultra TressFX + FXAA | 51.4 FPS | 49.9 FPS | 42.3 FPS | 42.3 FPS |
| Shadow of Mordor | 41 FPS | 42.2 FPS | 35 FPS | 37 FPS |
Switching to the 1440P resolution, we’ve focused on only the most taxing of games – and the GTX 960 once again manages to scrape past the 30FPS mark. Crysis 3 is particularly punishing, but considering the huge increase in pixel count over 1080P, the dropping frame rates are to be forgiven. We decided to reduce the texture quality settings in Shadow of Mordor at this resolution; the additional pixel count helps offset the lower quality textures.
For my own gaming at 1440P, I would prefer to play with V-Sync enabled (unless you’re running a fancy G-Sync monitor) to keep games at a constant 30FPS. If you’re really setting your sights on 1440P at around the 60FPS mark, you might be better off saving up the extra cash for either an R9 390 or a GTX 970.
MSI GeForce GTX 960 Gaming 2GB Verdict
So, if you’ve read over the review, you’ll probably have an inkling that we’re quite big fans of the card. In raw performance terms it’s not a league above, say, an R9 285, but thanks to considerably lower power draw and heat output, the Maxwell architecture certainly has a lot of benefits. The card is small, sleek and makes a compelling case for itself either as a budget gaming card, or as the GPU powering a streaming and multimedia machine, particularly as MSI’s cooler doesn’t even need to run while you’re on the desktop.
Nvidia’s GeForce GTX 960 probably isn’t going to appeal to those who’ve already got a decent GPU in their machine (say, a GTX 770 or an R9 280) unless heat is a concern. But if you’re upgrading from something a little lower end, or building a new machine on a budget or with streaming in mind, I would certainly have no difficulty recommending this card.
Given a slightly wider bus (say, 192-bit), we feel the performance and value of this card would have been second to none – and it probably would have caused as much of a shakeup in the GPU market as Nvidia’s own GTX 970. But as things stand, it’s still an extremely capable card, and you’ll be very happy with it sitting inside your system.
Buy The GeForce GTX 960 Gaming 2G
If you’re thinking of buying the card, please consider using the following Amazon affiliate links! It costs you nothing, but we get a few pennies to help us run the website!
AMD Vs Nvidia Technology
Saved for the very end, let’s talk about AMD and Nvidia’s technology. For the 300 series, AMD implemented Frame Rate Target Control, which reduces the GPU’s power consumption by letting you set a frame rate target. Assuming you’re only running a 60Hz monitor, having your GPU render at 300 FPS doesn’t help any – so setting a 60FPS cap ‘throttles’ the GPU, meaning your system produces less heat and sucks less juice from the wall socket. It’s not something we’ve had much time to test during this review, but it’s an intriguing idea.
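The idea behind a frame rate cap is simple enough to sketch. This hypothetical Python loop (not AMD’s driver implementation, just the concept) refuses to render faster than a target rate, so the hardware sits idle – drawing less power – for the remainder of each frame:

```python
import time

def run_capped(render_frame, target_fps=60, frames=60):
    """Call render_frame no faster than target_fps.
    The sleep at the end of each frame is where a real GPU
    would idle and draw less power, as with a driver-level cap."""
    frame_time = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()  # stand-in for the real rendering work
        elapsed = time.perf_counter() - start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)  # wait out the rest of the frame

# Cap a trivial "renderer" at 60FPS: 60 frames take roughly one second
run_capped(lambda: None, target_fps=60, frames=60)
```

The driver does this below the API rather than in the game loop, but the power saving comes from exactly this enforced idle time.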
The GPU market has changed significantly in the past 12 months, and not just because of the imminent release of DirectX 12. You’ll likely know Nvidia and AMD are both pushing their own technologies: Nvidia have G-Sync and of course hardware PhysX, while AMD are pushing their Mantle API, TrueAudio technology and FreeSync, their counter to Nvidia’s G-Sync.
If you’re interested in Virtual Reality, AMD are also releasing their LiquidVR SDK, a technology which aims to reduce the latency of VR hardware. Then there’s Nvidia’s GameWorks technology, which will (supposedly) also be gaining virtual reality support.
It’s tricky to ‘predict and then buy’ for the future, and despite Mantle’s very impressive performance numbers in BF4, Hardline, Dragon Age: Inquisition and Thief, it’s still a work in progress. There’s also the question of how much support it’ll have once DX12 is fully launched. There’s currently a reasonable smattering of Mantle API titles, so it’s still a very nice bonus if nothing else. TrueAudio’s future is somewhat less certain: Thief, for example, uses it and it does reduce CPU usage, but not enough games currently have the feature to sway a decision over a graphics card.
AMD’s TressFX technology (at least currently) works on Nvidia’s cards as well as AMD’s own. Unfortunately TressFX hasn’t been widely adopted yet, likely because it’s so GPU intensive – even TressFX 2 takes a huge toll on the consoles in Tomb Raider: Definitive Edition.
Meanwhile, Nvidia’s G-Sync has been released and does a great job of both eliminating screen tearing and reducing the input latency associated with V-Sync. Those who have used the technology are reluctant to go back to their old screens, but the selection of monitors it’s currently available on is limited, and it adds to the price of a new monitor. With that said, FreeSync is gaining a lot of traction in the market and does a pretty similar job – cheaper, too. It’s hard to argue with “cheaper”, and it’s not as though the monitor range is poor either. As much as G-Sync is awesome, FreeSync is possibly better for the average customer, simply because the monitors cost less.
I actually really do like Nvidia’s hardware PhysX technology; in certain games it adds a lot to the atmosphere, such as the wisps of smoke in Assassin’s Creed 4: Black Flag, or the debris and dust effects in Metro: Last Light. For someone like myself who regularly makes graphics comparisons it’s a nice extra, but most gamers are willing to do without it.
While AMD have their Raptr application and Nvidia their GeForce Experience, tweaking your settings using these applications is a roll of the dice. Gamers who have a reasonable understanding of the basic settings are better served handling things themselves. If you do want to record your gameplay, GeForce Experience does have ShadowPlay, which I personally find quite useful. AMD’s alternative built into Raptr isn’t quite so far along as ShadowPlay, so I have to give the subtle nod to Nvidia on this point.
All in all, there’s certainly an argument over whose cards offer the best exclusive technology… but if the DX12 rumors are true (and they’re unconfirmed at the time of writing), you’ll be able to pair an Nvidia and an AMD card in the same machine and gain the benefits of both GPUs.