DLSS is just a way to boost performance for HDR, and especially if you are running out of VRAM, which makes it a band-aid for the RTX 3080's smaller VRAM pool compared to the RX 6900 XT, which tends to be faster in all scenarios except ray tracing, where, depending on the game, it can be almost as fast, like in...
Looks like a CPU bottleneck; the IPC of that CPU, along with the latency inherent to the platform due to ECC among other things, makes it unsuitable for gaming.
A friend of mine had a similar issue with his RX 5700 XT and found out that he was using a PCI-E riser that didn't get along with PCI-E 4.0, so he switched to PCI-E 3.0 and the problem was gone. No more black screens or crashes. I also had a similar black screen after resuming from sleep or...
It's a matter of warranty more than anything; since all Radeon VII cards are reference designs purchased directly from AMD by the OEMs, they just slap their sticker on and move along.
Hawaii's biggest bottlenecks are bandwidth (due to the lack of any compression technology), tessellation, and the command queue processors. So the fact that the RX 570, with much smaller resources, is able to match it is a testament to the performance gains from minor tweaks. Even the Fury X, which...
I got mine and love it. I didn't expect a big boost coming from my Vega 64 Water Cooled Edition, but in Metro Exodus it was around 22-34% faster while using considerably less power. In Superposition and Heaven Bench, the difference was around 600-700 points.
No amount of coding could actually help the GeForce FX, as it had severe internal issues with register pressure and, because of that, a lack of INT performance. The HD 2900 XT had a different issue: it was too dependent on the compiler to extract instruction-level parallelism, it was just too wide to be able...
Nah, it is well known that GCN is very underutilized. The Vega 64 LC edition barely competed with the GTX 1080 at launch, and the air-cooled version consistently underperformed; now the air-cooled version gives the GTX 1080 a hard time and the Liquid Cooled edition either matches or outperforms the...
The RX 580 often matches the Fury and even the Fury X in tessellation-bound scenarios, and the Fury X is still trading blows with the GTX 980 Ti. So I suspect the Fatboy would be very close to the Fury X/GTX 980 Ti more often than not, compared to what the RX 580 achieved. Even now on...
Yeah, AMD doubled the number of geometry processors, but the performance gains were much smaller, like 15% over Tahiti; it improved greatly with Polaris and then Vega. Both are so close that unless you don't mind the heat and power consumption, Polaris seems like the better choice...
I would think the same thing, but the biggest issues with Hawaii are its tessellation performance and its bandwidth bottleneck. I had an overclocked 290X and would quite often see high memory controller utilization in VSR/MSAA scenarios, causing slowdowns when getting close to 90% of the RAM...
I had the R9 M390X, which is a lower-clocked desktop R9 380X, and for such a small chip its performance was more consistent than the R9 M290X (aka the desktop HD 7870) that it replaced. But now I have an RX 580 in my laptop and I am so impressed.
It's like 8 pseudo-cores; the biggest issues are the latency of the cache system and the narrow front end that can't feed all the cores at the same time, so it resorts to interleaving.
Kepler definitely aged badly; the fact that the 290X fares much better in current games only shows the FineWine effect. Maxwell is aging as well, though not as badly, but the fact that the GTX 980 trails the 390X more often than not in current games, compared to older games, shows two things...
I think AMD's FineWine thing is also biting them back, because the 290X is way ahead of its competitor, the GTX 780 Ti, while the GTX 980 Ti aged better than Kepler did. AMD is a victim of its own success when it comes to performance scaling between their own GPUs.
AFAIK Microsoft does not allow extensions in DX11 or DX caps like they did in DX9. So GameWorks is just middleware developed by nVidia that uses DX11 features and packs them into libraries for easy access and implementation. Microsoft does do feature extensions like DX11.1, DX11.2, etc., but it's done...
I believe the Tensor cores are just glorified FP16 FMA units that can support 4 compute queues per Tensor depending on the precision, a 16+16+16+16 or 32+32 calculation grid for example. I read the Tesla whitepaper, which talks about those Tensor cores, and in reality the resemblance is...
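For context, what the Volta/Tesla V100 whitepaper actually describes is each Tensor core performing a 4x4 matrix fused multiply-add, D = A*B + C, per clock, with FP16 inputs and FP32 accumulation. Here is a minimal plain-C++ sketch of just that arithmetic, not of how the units are actually programmed; the function name tensor_fma is mine, and plain floats stand in for the FP16 inputs:

```cpp
#include <array>
#include <cstdio>

// One Tensor-core-style operation: D = A * B + C on 4x4 tiles.
// In hardware A and B are FP16 and the accumulation is FP32;
// plain floats stand in for both here, purely for illustration.
using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 tensor_fma(const Mat4& A, const Mat4& B, const Mat4& C) {
    Mat4 D{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];              // start from the accumulator tile
            for (int k = 0; k < 4; ++k)
                acc += A[i][k] * B[k][j];     // 4 multiply-adds per output element
            D[i][j] = acc;                    // 64 FMAs total per 4x4 tile
        }
    return D;
}

int main() {
    Mat4 A{}, B{}, C{};
    for (int i = 0; i < 4; ++i) { A[i][i] = 1.0f; B[i][i] = 2.0f; C[i][i] = 0.5f; }
    Mat4 D = tensor_fma(A, B, C);
    std::printf("D[0][0] = %.1f\n", D[0][0]); // 1*2 + 0.5 = 2.5
    return 0;
}
```

That works out to 64 fused multiply-adds per tile per clock, which is where the headline Tensor throughput numbers come from.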
Not necessarily; the thing is that nVidia GPUs are more dependent on driver tweaks, as they don't have a proper hardware scheduler like AMD GPUs do. nVidia uses a code-morphing approach similar to the Tegra Denver cores, relying on software scheduling and dispatching to feed...
When I first played that game, it was on a laptop, as my desktop was too old. It had a Pentium III Tualatin core ticking at 1.13 GHz with 512 KB of cache, 512 MB of RAM, Windows XP, and a Radeon M6, aka Radeon 7000. I was able to max it out at its (then high-resolution) 1024x768 display.
Nope, Windows 10 has several improvements not found in Windows 8. DX12 removes the biggest bottleneck in DX11: draw call submission through a single main thread. The new driver model offers higher efficiency, better multi-core performance, smoother animations, and so on. Don't try to mix your...
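To make the draw-call point concrete: in D3D12, each worker thread can record its own command list, and only the final submission goes through the queue on one thread. Below is a rough, hypothetical C++ sketch (error handling, fencing, and any actual draw recording are omitted, and the worker count and structure are my own illustrative choices); it just shows the recording work that DX11 funneled through a single thread being spread across cores:

```cpp
// Build with MSVC and link d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qdesc{};                  // direct queue, defaults
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));

    const int workers = 4;                             // e.g. one recording thread per core
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread> threads;

    for (int i = 0; i < workers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));
        // Each thread records its own command list in parallel -- this is the part
        // DX11 could not spread across cores.
        threads.emplace_back([&, i] {
            // ... record draw calls into lists[i] here ...
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    // Submission itself is a single cheap call on one thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}
```

The per-thread allocators matter: command allocators and command lists are not thread-safe, so each recording thread gets its own pair.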
Typical fangirl logic instead of addressing the issue with real facts. Just because you don't like Windows 10, it does not mean that it is useless. It is far more efficient than Windows 8, uses less power, it's more optimized for multi-core CPUs, and supports a newer driver model that can't be...
So much apologist stuff that it's funny. Power was never a concern back in the Fermi era, as nVidia still sold more cards than AMD, and now suddenly power is an issue, lol. Inconsistency is the biggest characteristic of nVidia fangirls. The fact that nVidia hasn't been able to develop a...
Superior? How, exactly? Besides power efficiency? Their reliance on driver compilers to perform? Or dragging down PC gaming because their GPUs are engineered for yesterday's DX11? Or the fact that they don't get any performance gains, or as a matter of fact lose performance, on modern APIs like Vulkan...
What a shocker! nVidia is like a mix of Intel's greedy behavior and Apple's elitist attitude, feeding the empty braincases of their loyal fanbase. They are one big, mean marketing machine.
Quite interesting that it can't sustain the maximum boost clock. All I did for my RX Vega 64 to hold its max boost was increase the power target to 150% and set the fan curve to reach 100% whenever necessary. It runs at 1700 MHz most of the time...
As fast as a GTX 980 Ti in DX11 while only as fast as a Fury in DX12 apps; not bad for the price, but nVidia's current DX12 performance leaves a lot to be desired.
I also jumped from the 2600K to the 3770K and did not notice a performance difference, but I did notice that it's faaaaaaaar more power friendly and a bit smoother when handling lots of multithreaded applications. I wish I could use its IGP for Quick Sync, but my stupid motherboard...
The person wasn't even replying to you. Like I said before, back in 2009 I got two infractions for cursing, so times change, it seems. Still quite unprofessional, but I'm just speaking my mind.
I believe that an admin should keep their composure and lead by example, not cuss like this, because that is the kind of stuff that gets us infraction points, so how can an admin get away with it?
There is no Z67 chipset, and the PCI-E limitation is in the CPU, not the chipset. I have a Z68 board, and when I sidegraded from the 2600K to the 3770K, I got PCI-E 3.0 support. PCI-E performance scaling is overrated anyway; even PCI-E 2.0 at 4x barely puts a dent in the performance of a Fury X or GTX 980 Ti.
Great performance gains, but disappointed with the DX12 performance. It seems it still relies on software for the scheduling process when doing async, so they are trying to use brute force to compensate for their inefficient approach.
Not really, it's known that nVidia pulled the plug on Kepler optimizations like 6 months after the Maxwell launch. Current nVidia architectures are very reliant on per-app optimizations to perform due to their simplistic architecture, so don't be surprised if suddenly after Pascal...