Gigabyte's response to poorly made GPUs is informative and unfortunate

This is one of the reasons AMD partners seem to be better. I have heard really decent things about XFX and Sapphire and their warranty support.

I am not sure I can comment on Gigabyte overall, but one issue I had with them was their warranty support, and it's not something I would want to deal with again. It almost seems like if you get a dead card, they'll just say they repaired it and send it back to you. I had a regular 4090 that I sent back; they said they fixed something on the 12V side, but it didn't look like they even tested the card after whatever they fixed. I got it back with the same issue: no power on, no boot, no fans spinning. It was as dead as when I sent it to them.

Luckily, being a Best Buy Total Tech member saved me, since I had a 60-day return window; this was when 4090s were hard to get, so I just returned it. I know DOA cards happen, but it's crazy how their warranty process is just "repair," check the box, and ship it back with no accountability, until you're so sick and tired of it that you go buy a new card. If the fans go bad, sure, a repair is not a huge deal, but I don't expect them to attempt repairs when the card is completely dead. They should be replacing those cards and keeping the dead ones to fix later. They literally only had my card for a day.
 
I have never heard anyone talk about ASRock as being a bastion of reliability and quality or warranty support.
I'm bumping this post because there is a discussion about ASRock in one of the Canadian-based Reddit subs: ASRock only honors RMAs for purchases from authorized distributors, so buyer beware. The concern extends to buying ASRock graphics cards, as Microcenter and Amazon are not "authorized distributors" according to their webpage, and some buyers have been denied RMAs on that basis.
 
I can see Amazon not being an authorized retailer, but Microcenter?
 
I had a positive experience with Sapphire. Many years ago I had one die (I think it was a 4870 or 5870), and the RMA was smooth. From what I remember, Sapphire didn't handle the RMA directly; they used a third party called Althon Micro. I specifically remember the name because it reminded me of Athlon from AMD.

The one thing I don't like about some AMD AIBs (looking at you, Sapphire and Powercolor) is that they only offer a 2-year warranty. A 3-year warranty should be the standard. XFX seems to offer a 3-year warranty on most high-end models, but the more basic/reference designs only get you 2 years.
 
In some countries they also use a third party for the RMA process. Supposedly, Sapphire RMAs are a bit of a headache in Canada; I don't know about the USA or other countries.
 
I had to replace the thermal pads and swap the thermal paste for a top-tier one, because both of my 3060 Ti Gaming OC cards were throttling and the fans were making a crunchy sound at high temps. Very specific video cards with a bad fan design, it seems.
 
I've used all the major brands for mobos. Asus is the only one that's never failed on me. Of course, by announcing this I probably guaranteed a catastrophic disaster looms in the near future with my current build.

I've had GPU failures from all the big brands. My best customer service experiences were with EVGA and MSI.
 
Hmm, well this has me a bit concerned. My 4070 Ti is the Gigabyte Gaming OC, essentially their lowest-priced model. Luckily it came with an anti-sag bracket, but still, any mishandling, or even just pulling the card out to install new hardware and plugging it back in multiple times, might potentially cause this kind of damage. :(
 
Moral of the story: use a support bar or kickstand on your giant GPUs?

Honestly, PCIe slots should not need steel reinforcements on them and video cards should not need kick stands, because video cards should not weigh 5 fucking pounds... :)
 
There are no “should”s.
Imposing limitations just limits innovation.
A huge part of the computing hobby is doing things just because you can. Otherwise we “should” all just buy pre-builts (that is, if there were any out there without build issues).

People said the same thing about adding power to video cards. Or water cooling. Or needing mounts that attach to the board instead of the socket. Or any number of things.

Basically, do what's necessary to get the performance you want instead. If you don't want that level of video card power (because of size/weight), then simply buy a much smaller mid-range card. There isn't another way to get 4090 levels of performance.
 

That is definitely not true. Limitations inspire. Limitations challenge what is considered possible on any given hardware.

It is amazing the things that early programmers and enthusiasts were able to squeeze out of the old 8-bit computers, for example, or out of pre-PS4/Xbox consoles. Limitless resources lead to stagnation and waste. Look at many of the games we get presented with now. There are exceptions, of course; there are still some amazing games that come out of nowhere (like BG3 or, I've been told, Elden Ring), but almost everything we get is lazy crap with loot mechanics, lacking innovation of any kind.
 
Necessity is the mother of invention, yes?

But Koenigsegg doesn’t exist in minimalist and austere times. Neither does a card like the 4090. There is literally no other way to get that performance level today; it’s at the edge of what silicon foundries can produce.

If you honestly believe better could be done, you should become a silicon engineer and do better. The most brilliant minds producing this stuff haven’t found another way.

AMD is a distant second. Intel isn’t even on the map. If it was possible it would’ve been done.
 

Sure, a Koenigsegg vehicle is an exception. Specifically, they are vehicles designed and built by a man driven to try radically different approaches to the same old problems inherent in automotive design, and he has enough funding to fuel his interests. He is much more akin to an artist who just happens to build unique cars rather than paint. They aren't the fastest cars in the world, but there is a lot of unique technological and manufacturing innovation in each and every new Koenigsegg model (and they are still very fast).

The nVidia 4090, on the other hand, is essentially a tweaked and die-shrunk 2080. It was TSMC's innovation (the process node) that allowed nVidia to stuff more cores into the same relative die area and have those cores use less electricity. nVidia then opted for an even larger die to fit more cores on top of that, and also increased the power draw of the card to accommodate higher clock speeds. That higher power draw and increased core count produce more heat, which needs a bigger heatsink and more fans to keep under control.

That's not innovation, it's merely brute force. It may be the fastest consumer card you can buy right now, but it doesn't really bring anything new (or even all that interesting) to the table. It doesn't really do anything the 2080 doesn't already do, albeit faster ("DLSS 3 is only on 4000 series cards" doesn't count, for the same reasons "Only x570 boards can run Ryzen 5000 series processors" didn't). The only things the 4090 actually innovates are price gouging and testing the tensile strength of PCBs and PCIe slots.
 
Again, I don’t think you’re addressing the points I actually made anymore.

But to turn your point on its head: by what other method could someone produce a 4090 today? (In particular, any method you described, or otherwise.)

Why is “brute force” an unacceptable way to gain performance? How is that any different from adding cylinders, displacement, turbos, or superchargers to a car?
Or, perhaps: why is brute force acceptable for Koenigsegg but not for anything else?

Are not all of these examples just “slamming” in maximum performance regardless of “backdrop”? A Saturn V rocket (to go back to the space/Apollo discussions) is a very brute-force method for getting something into space. Why is that acceptable?

I think you have a bias for what you perceive to be elegant solutions but don’t have a real rationale for them.

Said another way, you should probably move to Apple/ARM if you truly feel that way about it, because they’re the ones pushing specialized code/instructions that use specific decoders in order to give performance advantages in the areas where people use Macs, while not necessarily being as performant at general compute as their PC counterparts. Apple, then, is the one building “clever,” accelerated performance through software/hardware integration.

They do so to gain perf per watt, battery life, etc. PCs, by contrast, all operate exactly opposite to that kind of thinking. The 9th/10th/11th/12th-gen Intel processors were all highly inefficient in terms of perf per watt.

And again, I don’t think anyone cares. Most would be giddy to have a 4090 and a 13900K and wouldn’t bother to think that their electric bill went up by ~$100 or whatever a year. Most are okay with a Koenigsegg or a Saturn V.
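(For a rough, purely hypothetical sense of that figure: an extra 250 W under load for 4 hours a day at $0.15/kWh works out to about 0.25 kW × 4 h × 365 ≈ 365 kWh, or roughly $55 a year; heavier use or higher rates gets you into the ~$100 range.)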

EDIT: just grammar/spelling driving me crazy.
 
It is likely these are mostly related to shipping damage, or to the system being moved around roughly, combined with a slightly less robust design that puts the mounting hole near the edge of the lock area. It's not just Gigabyte having the issue; even some FE cards are showing it with some frequency, according to repair specialists. I think if they are putting 2.5+ slot coolers on a card, they should add another PCIe-edge PCB to the cooler itself that slots into the mobo to support it. It could be removable or adaptable to fit different slots, just a blank PCB for support.
 
My ASUS Maximus IV Extreme Z motherboard still runs strong to this day, as does the 2700K in its LGA1155 socket, which has run at 4.9GHz with SpeedStep and C-states disabled since day one. Not all ASUS products were bad.

I really need to find a case that suits a Sandy Bridge build and get the beast running again.
 
The BIOS on an Asus Z77 board I have got corrupted sometime last month. It has a built-in flasher and I was able to fix it though.
 
Moral of the story: use a support bar or kickstand on your giant GPUs?
Even the FEs, or are they rigid enough?

 
It's not about the "rigidity" of your card. It's simple physics: torque. You're leaving a lot of weight completely unsupported on the far end of the card. Any small amount of jiggling and wiggling, and you have movement. Get enough movement and your PCB eventually sags and bends a bit, and it probably doesn't take much bending to develop a crack.
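To put rough, hypothetical numbers on that torque (say a ~2 kg card with its center of mass about 15 cm out from the slot and bracket):

torque = m × g × d ≈ 2 kg × 9.81 m/s² × 0.15 m ≈ 2.9 N·m

That's roughly 3 N·m of constant bending load on the slot, bracket, and PCB before any bump during shipping or transport adds to it.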

To their benefit, Gigabyte actually provides a support bracket for their 4090s that's pretty innovative. I'm still going to get another support pillar just in case. It sucks but if you want to protect your investment, it's not like a support bracket is ever going to stop doing its job. It's pretty futureproof lol.

https://www.amazon.com/Graphics-Card-GPU-Support-Bracket/dp/B0BZ87YT7S

It's also like 6 dollars.
 
I've used a Corsair Air 540 on its side on a shelf for years now. None of my cards will bend.
 
If you have a card with a large cooler, you should be using a support mechanism regardless of manufacturer.
I’ll check it out

The FE seems very rigid and firm

But maybe it's the PCIe slot that's the concern.
 