This week AMD launched the first "budget-friendly" RX 6000-series GPU built on its RDNA2 technology, the RX 6500 XT. The launch was not without controversy, however: in order to hit its desired $199 price point, AMD neutered the Navi 24 GPU to the point where it's missing features considered necessary on a modern GPU, even an entry-level model. For example, the card ships with just 4GB of VRAM, a decision the company said it made to discourage crypto mining. Potentially more impactful is AMD's decision to ship the card with a PCI Express 4.0 (PCIe 4.0) interface that offers only four PCIe lanes to the CPU.
Now, a four-lane interface (x4) isn't necessarily bad by itself, but since the card's price point and overall performance position it as an upgrade for an older system, it's highly likely that anyone buying a GPU like the 6500 XT has an older PC. That means this potential upgrader also has an older motherboard, which likely uses the previous-generation PCI Express 3.0 interface and thus offers half the per-lane bandwidth of PCIe 4.0. There have also been many GPUs in this price range over the years with 16 PCIe lanes, including the $149-at-launch (roughly $400 currently) Nvidia GTX 1650. Sure, AMD has reduced the number of PCIe lanes on previous entry-level cards, but never by this much. For example, the similarly outfitted Radeon RX 5500 XT from 2019 also had 4GB of VRAM, but came with eight PCIe lanes.
This all sets the stage for one big question: how will the RX 6500 XT perform on a system with an older PCIe 3.0 interface? Thankfully, the intrepid benchmarkers at Tom's Hardware have answered it by testing the new GPU on an older test bench, and the results are a bit surprising. Though results vary from game to game, as is normal, they measured performance drops ranging from four percent to 36 percent, which is quite the downgrade. As a control, they tested the new Radeon card alongside Nvidia's GTX 1650, which is a PCIe 3.0 card, but one with 16 PCIe lanes.
At higher lane widths, moving from PCIe 4.0 to PCIe 3.0 doesn't have this kind of impact. An x16 PCIe 4.0 card performs virtually identically when tested on the PCIe 3.0 standard, and we would expect even an x8 card to show only a limited decline on the older interface. The problem here is that an x4 PCIe 3.0 connection is only equivalent to an x16 PCIe 1.0 connection, and PCIe 1.0 was new 19 years ago. Single-GPU bandwidth requirements have grown slowly over the years, but they have grown. An x4 PCIe 3.0 connection is clearly not enough to keep the card fed, partly due to the small VRAM buffer.
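To put rough numbers on that comparison, here's a minimal sketch using approximate usable per-lane throughput for each PCIe generation (about 0.25GB/s for 1.0, 0.5GB/s for 2.0, 1GB/s for 3.0, and 2GB/s for 4.0, after encoding overhead). These are ballpark figures for illustration, not exact spec quotes, but they show why an x4 PCIe 3.0 link lands in the same neighborhood as an x16 PCIe 1.0 link:

```python
# Approximate usable throughput per lane in GB/s (ballpark values, after encoding overhead).
PER_LANE_GBPS = {"PCIe 1.0": 0.25, "PCIe 2.0": 0.5, "PCIe 3.0": 1.0, "PCIe 4.0": 2.0}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Rough total link bandwidth in GB/s for a given PCIe generation and lane count."""
    return PER_LANE_GBPS[gen] * lanes

configs = [
    ("PCIe 4.0", 4,  "RX 6500 XT on a new board"),
    ("PCIe 3.0", 4,  "RX 6500 XT on an older board"),
    ("PCIe 1.0", 16, "full-width link from 2003"),
    ("PCIe 3.0", 16, "GTX 1650 on the same older board"),
]
for gen, lanes, note in configs:
    print(f"{gen} x{lanes:<2} ~{link_bandwidth(gen, lanes):>4.0f} GB/s  ({note})")
```

The x4 PCIe 3.0 and x16 PCIe 1.0 rows work out to roughly the same total, which is the whole point of the comparison above.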
Test results show that if you run the RX 6500 XT on an older PCIe 3.0 system, it's totally fine at Medium quality settings. However, when you crank the detail settings up to Ultra, even at 1080p, performance can crater in certain titles thanks to the one-two punch of insufficient VRAM and insufficient bus bandwidth. These two factors work together to hamper the GPU because, once the card's 4GB buffer is full, texture data has to be pulled over the PCIe bus. Any card would take a performance hit in this scenario, since even a full x16 PCIe 4.0 connection has higher latency and far lower bandwidth than local VRAM, but the limited capability of an x4 PCIe 3.0 connection hits this card especially hard.
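For a sense of scale, here's a back-of-the-envelope sketch comparing local VRAM bandwidth to the bus used for spill-over, assuming the RX 6500 XT's 64-bit memory bus and roughly 18Gbps GDDR6 (specs not mentioned in the article, so treat them as our assumptions):

```python
# Back-of-the-envelope: local VRAM bandwidth vs. the PCIe link used once the 4GB buffer overflows.
# Assumed specs: 64-bit memory bus, ~18 Gbps GDDR6 (our assumption, not quoted in the article).
BUS_WIDTH_BITS = 64
GDDR6_GBPS_PER_PIN = 18

vram_gbs = BUS_WIDTH_BITS * GDDR6_GBPS_PER_PIN / 8   # ~144 GB/s of local VRAM bandwidth
pcie4_x4_gbs = 4 * 2.0                                # ~8 GB/s
pcie3_x4_gbs = 4 * 1.0                                # ~4 GB/s

print(f"Local VRAM:  ~{vram_gbs:.0f} GB/s")
print(f"PCIe 4.0 x4: ~{pcie4_x4_gbs:.0f} GB/s ({vram_gbs / pcie4_x4_gbs:.0f}x slower than VRAM)")
print(f"PCIe 3.0 x4: ~{pcie3_x4_gbs:.0f} GB/s ({vram_gbs / pcie3_x4_gbs:.0f}x slower than VRAM)")
```

Under those assumptions, spilling textures over even the best-case x16 PCIe 4.0 link is already an order of magnitude slower than VRAM; the x4 PCIe 3.0 link is slower still.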
Overall, across their seven-game test suite, the card averaged 22.7 percent slower performance on the PCIe 3.0 system than on the PCIe 4.0 system at 1080p Ultra, compared to just an eight percent deficit at 1080p Medium. The GTX 1650, on the other hand, performed essentially the same on both PCIe interfaces, thanks to its 16 PCIe lanes.
The worst-case scenarios for the Radeon RX 6500 XT were Forza Horizon 5 and Borderlands 3. In Forza at 1080p Ultra, the card managed just 17.7fps on PCIe 3.0 versus 28fps on PCIe 4.0, a difference of roughly 36 percent. Borderlands told much the same story, at 23.9fps on PCIe 3.0 and 37.1fps on PCIe 4.0, a 35.6 percent delta. Not every game suffered this badly, however; Horizon Zero Dawn showed only a four percent difference between the two interfaces. So in the end, it really does matter which games you prefer to play, and you'd be wise to avoid Ultra settings at all costs. That's a weird thing to say about a $200 GPU, since that price used to be considered midrange, whereas now it's basically entry-level.
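For reference, the deltas quoted above are simply the relative gap computed against the PCIe 4.0 result; here's a quick sketch using the frame rates reported in the article:

```python
# Relative performance loss when dropping from PCIe 4.0 to PCIe 3.0,
# using the frame rates from the Tom's Hardware results quoted above.
results = {
    "Forza Horizon 5 (1080p Ultra)": (28.0, 17.7),  # (PCIe 4.0 fps, PCIe 3.0 fps)
    "Borderlands 3 (1080p Ultra)": (37.1, 23.9),
}

for game, (fps_gen4, fps_gen3) in results.items():
    loss = (fps_gen4 - fps_gen3) / fps_gen4 * 100
    print(f"{game}: {loss:.1f}% slower on PCIe 3.0")
# Forza Horizon 5 (1080p Ultra): 36.8% slower on PCIe 3.0
# Borderlands 3 (1080p Ultra): 35.6% slower on PCIe 3.0
```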
All this leads to the inevitable question: why did AMD launch this GPU? It makes very little sense to design a card for a specific upgrade market, then gimp it to the point where it will run slower on that very platform. I mean, who has a system new enough to offer PCIe 4.0, but would consider the RX 6500 XT an upgrade? It doesn't add up, unless you take the cynical view that AMD figures, given the sky-high demand for GPUs right now, it can launch pretty much any GPU and it'll sell out instantly. As we reported yesterday on the card's launch, it is indeed already sold out, despite some models going for as much as $359, almost double the MSRP. That's not scalper action either, as the card in question is sold and shipped by Newegg itself. One day after launch, the card is basically impossible to find, despite AMD executives saying, "we intend to have a lot of product" in an interview with PCWorld.com ahead of the card's launch. Sure, nobody knows what "a lot of product" really means, but suffice it to say this situation isn't really a surprise to anyone at this point.
Next week Nvidia will take its turn in the barrel by launching its entry-level RTX 3000-series GPU, the RTX 3050. That GPU has at least double everything compared to the Radeon card: 8GB of VRAM, a 128-bit memory bus, and a PCIe 4.0 x16 interface. Hopefully, it will be avail….and it's gone.