Published on 27th June, 2023
JayzTwoCents released an early RTX 4060 performance review video, showcasing some strong FPS figures at an impressively low power usage – could we be seeing the first signs of a shift back toward prioritising lower power usage, and what does this mean for the Small Form Factor space?
Before I get into the nitty-gritty details, please be aware that the figures used in this article are based on a single review video with limited testing (Cyberpunk 2077 at 1080p). They DO NOT represent average performance across a variety of games at different resolutions.
Aside from these figures likely being a best-case, cherry-picked scenario to showcase the generational leap in FPS performance, I want instead to focus this article on the performance per watt figures that we've seen a brief glimpse of.
Here are the graphics card models we'll be focusing on and their TDPs:
As you can see already, the TDP of the 4060 is sitting at an impressively low 110 watts. We've not seen power figures this low for this class of card since the GTX 960 and GTX 1060 era over 8 years ago, both of which had TDPs 10W higher than the 4060, at 120W.
Some would consider the NVidia 1000 series the golden age for small form factor graphics cards, and with the GTX 1070 being rated at only 150W TDP, we saw multiple manufacturers releasing short-length ITX GPU models (170mm long) in the mid-to-high-tier market sectors for the first (and last) time.
Gigabyte even released a dual slot ITX single fan version of the GTX 1080, which had a TDP of only 180w, almost half the power draw of an RTX 4080. This iconic card was featured in one of TekEverything’s build videos, running smoothly inside Lazer3D’s tiny 7 litre LZ7 chassis:
Compare this to the current equivalent 80-series card, the RTX 4080, which has a TDP of 320W and doesn't come in anything smaller than a triple-slot, 300mm+ length monstrosity, and you can see where I'm going with this article!
Sadly the NVidia 1000 series was the last time we saw a xx70 or xx80 class card in anything smaller than reference size; since then, our small form factor options have been limited to xx60 Ti models or below.
The reason for this limitation is pretty simple: the laws of physics dictate how much heat can be dissipated by a traditional heatsink and fan combination. A dual-slot, 170mm-length heatsink card starts to reach its thermal limit at around 150W – 170W TDP; anything above that and it will struggle to keep the GPU at a sensible temperature without sounding like a jet engine.
And so with increased power consumption between generations comes the need for larger heatsinks to get rid of all the extra heat being produced, and 320 watts is a lot of heat! In any other tech sector this would be considered a step back, but instead the PC market has embraced it with open arms, believing that bigger is better!
Graphics cards have reached a point where they now dwarf motherboards; some of the newer models are so big they take up more space than 4 ITX graphics cards.
Image Credits: Notebookcheck and Videocardz
In a world that is growing ever more conscious of the impact we are having on our environment, and the important role our consumption of energy plays in this, I have personally found the trend toward higher power usage in the PC market over the past few years quite concerning. In our quest for ever-increasing frame rates, our gaming PCs are now drawing more than double the power from the wall than they were around 8 years ago.
Should we not be moving toward lower power, smaller, quieter and cooler systems?
Add into the equation rising energy costs, and this mentality of "throw more cores at it to make it go faster" feels even more ridiculous. Where do we draw the line, when a PS5 delivers an amazing gaming experience yet draws less than 200W of total power during gaming? Some modern CPUs alone draw more power than this, requiring huge 360mm AIOs to keep them running at sensible temperatures.
I’m not against progress, and of course there is a benefit and real-world need for ever-increasing computational power, but should increasing FPS from generation to generation come at the cost of increased power draw? Or should we instead be looking for ways to reduce power consumption for gaming whilst maintaining the increasing graphical quality we’ve all come to love?
I believe the 4000 series is a good step in the right direction, particularly the 4060 from what we’ve seen so far. For the first time in nearly a decade we’ve seen a generational reduction in power draw for the same class of card, whilst also delivering improved performance.
Moving back to the JayzTwoCents video, we have some FPS numbers we can look at and compare to previous generations. The testing was carried out with: Cyberpunk 2077 – Track Benchmark – 1080p Ultra Preset (RT off/FG off/Upscaler off).
Based on the FPS and the known TDPs of each card, we can work out the “Frames Per Watt” (FPW) during this benchmark. FPW is simply how many video frames the GPU can generate per watt of power used:
As you can see, the RTX 4060 is leading the pack by quite a significant margin, thanks mainly to its impressively low 110W TDP. In this limited benchmark test the RTX 4060 delivers 27% more FPS than its predecessor, the RTX 3060.
The compounding effect of higher FPS and lower power usage gives the RTX 4060 a Frames Per Watt score of 0.74 FPW, a whopping increase of 95% over the previous generation, nearly double the performance of the 3060.
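If you want to run the numbers yourself, the FPW calculation is straightforward to sketch in a few lines of Python. Note the FPS figures below are illustrative estimates chosen to be consistent with the FPW values discussed here, not exact numbers lifted from the video:

```python
# Frames Per Watt (FPW) = average FPS in the benchmark / rated TDP in watts.
# FPS values are illustrative estimates, NOT exact figures from the review.
cards = {
    "RTX 4060": {"fps": 81, "tdp_w": 110},  # ~81 FPS at 110W TDP
    "RTX 3060": {"fps": 64, "tdp_w": 170},  # ~64 FPS at 170W TDP
}

def frames_per_watt(fps: float, tdp_w: float) -> float:
    """How many frames the GPU generates per watt of rated power."""
    return fps / tdp_w

fpw_4060 = frames_per_watt(**cards["RTX 4060"])
fpw_3060 = frames_per_watt(**cards["RTX 3060"])

print(f"RTX 4060: {fpw_4060:.2f} FPW")  # ~0.74 FPW
print(f"RTX 3060: {fpw_3060:.2f} FPW")  # ~0.38 FPW

gain_pct = (fpw_4060 / fpw_3060 - 1) * 100
print(f"Generational FPW gain: {gain_pct:.0f}%")  # roughly 95%, i.e. nearly double
```

With these estimates the 4060 lands at 0.74 FPW versus roughly 0.38 FPW for the 3060, which is where the "nearly double" figure comes from.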
NVidia have announced an MSRP of $299 for the RTX 4060, which is around the same as what the RTX 3060 is currently selling for. We all know that once they hit retailers, prices will likely be inflated above MSRP to around $350. At this price point the 4060 will not provide much of a price-performance advantage over the 3060, but once you factor in power efficiency it becomes a lot more compelling, particularly for small form factor systems with limited power supplies, for example those using DC-DC supplies such as HDPLEX models.
It may be wishful thinking, but I think these frames per watt efficiency numbers show that it’s possible to have a high quality PC gaming experience at lower power consumption levels.
Just because we can throw twice as much power at a graphics card to get 20% more frames, it doesn’t necessarily mean that we should. We should be thinking about balancing environmental impact with gaming performance, and considering that bigger isn’t always better.
Hopefully the 4000 series cards mark the start of a shift back toward lower-powered cards, and this continues with further reductions in the 5000 series. If the trend continues then we may see NVidia and AMD partner companies focus their resources back onto smaller products, which would certainly be great for the Small Form Factor community.
If you found this article interesting I would love to hear your thoughts on this subject down in the comments below, thanks for reading!