AMD Ryzen 1700X CPU Review @ [H]

I think anyone banking on magical patches to fix Ryzen performance on currently released games will be sorely disappointed.

The games are made - finished - done. The highest number of sales any title will have will be in its opening weeks. Long-term support is virtually non-existent.

Half the time they can't even get a game to work properly on existing set-ups, let alone new systems.

Expect the AAA sequel to work better.....possibly.


A bit of a sluggish 8-core chip with the performance of a quad, plus the usual AMD drawbacks/limitations that come with playing second fiddle in the technology race.

Performance seems to range from that of a six- or seven-year-old 2600K all the way up to the latest Intel offerings, depending on the job.

A bit of a joke, really; I can't believe how little progress has been made in the CPU realm compared to something like storage or graphics.

It's not amazing for $300+ is it?

I don't think they'll handle emulation very well either, in something like Cemu (Wii U).


I still don't expect developers to start favouring AMD's chips & methods over Intel's, so they'll probably always be lagging behind in benchmarks.

I wouldn't be disappointed with one of these chips at the right price (my setup is that old!), but I wouldn't personally pick one up at today's pricing:
https://www.amazon.co.uk/AMD-Ryzen-1700-16-Core-3-7-GHz/dp/B06WP5YCX6/
https://www.amazon.co.uk/Intel-i7-7700K-QuadCore-Cache-Processor/dp/B01MXSI216/
https://www.amazon.co.uk/Intel-Core-i5-7600K-QuadCore-Cache/dp/B01MRRPPQS/

You'd be better off scouring the used market.

Does Intel have a new architecture or some other trick up its sleeve? Or is this as good as it gets for PC?

Far more interesting things seem to be happening in the low-powered, embedded world.

Hopefully newer Ryzen revisions can improve the design; it never really pays to jump in first.
 
To be honest, Ryzen does look to benefit from hand-tuning thread affinities in games and a few other workloads of that sort, both to use the cache available and to keep the inter-CCX communication issues from getting too rampant. That's less an optimization than a workaround, but whatever.
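For anyone curious what that hand-tuning actually looks like in practice, here's a minimal sketch using psutil (the process name is hypothetical, and the assumption that logical CPUs 0-7 map to one CCX on a 1700/1700X with SMT enabled is mine, not something from the review):

```python
# Sketch only: pin a game process to a single CCX so its threads share one
# L3 slice and avoid cross-CCX hops. The core-to-CCX mapping (logical CPUs
# 0-7 = first CCX with SMT on) and the process name are assumptions.
import psutil

CCX0 = list(range(8))        # assumed: first four cores plus SMT siblings
GAME_EXE = "game.exe"        # hypothetical process name

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == GAME_EXE:
        proc.cpu_affinity(CCX0)          # restrict scheduling to one CCX
        print(f"Pinned PID {proc.pid} to logical CPUs {CCX0}")
```

Same idea as right-clicking the process in Task Manager and setting affinity by hand, just scripted.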

That almost certainly won't happen in this generation of games, but who the hell knows what happens further out. For all we know, AMD goes under, or fixes the main uarch issues and makes the workaround obsolete.

I don't think that's a game engine thing so much as a Windows scheduler issue that has a potential impact on gaming performance. There is a difference.
 

Get out your tinfoil hats and strap in. :rolleyes: This has been present for a very long time on Intel's platform, including any Xeon-based server used in government and Fortune 500 companies. I've never heard of any case where the management engine interface was used maliciously. That doesn't mean there aren't any such cases, but were it a commonly exploited flaw, I think people would be up in arms about it. Something like that would be a PR nightmare worse than the Intel FDIV bug on the Pentium 60/66MHz CPUs ever was. This is why Intel guards the ME source code so closely.
 
Games are not done that way, at least not the graphics portion. Feature-wise, yes, but not in terms of how much they push the GPU. Let's say today I start making a game.

I look at today's best hardware and make sure what I'm planning runs at 30 fps. I know it's going to be GPU limited, and the CPU is going to get hit hard too. I expect per-clock instruction throughput to go up, but I'm not planning on CPUs doubling their processing power every 2 generations (which is about when the game would be released; actually 3 generations). GPUs are different: for those I am planning on double the performance in many tasks over those 2 gens.

So by the time the game is released, the amount of GPU-bound time should drop on higher-end cards, and lower-end mainstream cards can run the game fine too. By dropping settings, older-generation cards should have no problem either.

As for the CPU, any CPU from the last 5 years should be able to run the game.

And this is why the average upgrade cycle for CPUs is 4.5 years and for GPUs is 2.5 years.

So effectively, next-generation games being GPU bound on older processors is the norm, but since GPU tech moves faster than software development, the GPU-bound scenarios drop off until the next-next-gen games come out.
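To put rough numbers on that planning logic (these figures are purely illustrative assumptions for the sake of the example, not anything from the post):

```python
# Purely illustrative numbers: a title planned to be GPU-bound at launch.
gpu_fps_today = 30    # frame rate the GPU side allows on today's flagship card
cpu_fps_today = 60    # frame rate the CPU side allows on today's top CPU

gpu_gain_by_release = 2.0   # assumed: GPU performance roughly doubles over the dev cycle
cpu_gain_by_release = 1.3   # assumed: CPU throughput grows far more slowly

print("GPU ceiling at release:", gpu_fps_today * gpu_gain_by_release)  # ~60 fps
print("CPU ceiling at release:", cpu_fps_today * cpu_gain_by_release)  # ~78 fps
```

Under those assumptions the GPU ceiling is still below the CPU ceiling at release, so the game stays GPU-bound on launch-day hardware, which is the scenario described above.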

Until you realize this isn't what happens at all.



The 8350BE didn't fall further behind on faster cards, and it actually has caught up and surpassed its opponent four years on in newer games. The fact is, this imaginary card that was going to suddenly make 4K gaming a CPU bottleneck would have to be so far out as to make it a non-issue, and that only applies to playing current games that much farther in the future on the better card. No one is replacing their 980 Ti with a 1080 Ti solely to play CSGO. People get new cards for new games that can't run on their old card at their res and preferred settings, which puts the onus back on the GPU. If it's fine for gaming now, it's going to be fine for gaming in the future, and when the time comes where you think the CPU is actually holding a GPU back (let's say 2021), the Ryzen 1700 is probably going to be doing measurably better than the 7700K.

All of that assumes no changes on AMD's side to work out the obvious implementation kinks that seem to be holding Ryzen back.
 
when the time comes where you think the CPU is actually holding a GPU back (let's say 2021), the Ryzen 1700 is probably going to be doing measurably better than the 7700K.

People said this about Bulldozer too. More cores = more performance in the future. Right?!?

It doesn't happen; a CPU which is worse at games now will very likely be worse at games in the future. Intel and AMD use very different architectures, and Intel's is better for game-type workloads.
 
Get out your tinfoil hats and strap in. :rolleyes: This has been present for a very long time on Intel's platform, including any Xeon-based server used in government and Fortune 500 companies. I've never heard of any case where the management engine interface was used maliciously. That doesn't mean there aren't any such cases, but were it a commonly exploited flaw, I think people would be up in arms about it. Something like that would be a PR nightmare worse than the Intel FDIV bug on the Pentium 60/66MHz CPUs ever was. This is why Intel guards the ME source code so closely.

Unlikely, yes, but the capabilities of this "backdoor" are worrying. I was indeed trolling a bit :p

Edit: The issue is that it's a Pandora's box type of situation. Should the worst happen, i.e. Chinese/Russian hackers get hold of it, there's no easy solution besides swapping CPUs, and that might be after the damage is done. I think it's just bad policy.
 
Until you realize this isn't what happens at all.



The 8350BE didn't fall further behind on faster cards, and it actually has caught up and surpassed its opponent four years on in newer games. The fact is, this imaginary card that was going to suddenly make 4K gaming a CPU bottleneck would have to be so far out as to make it a non-issue, and that only applies to playing current games that much farther in the future on the better card. No one is replacing their 980 Ti with a 1080 Ti solely to play CSGO. People get new cards for new games that can't run on their old card at their res and preferred settings, which puts the onus back on the GPU. If it's fine for gaming now, it's going to be fine for gaming in the future, and when the time comes where you think the CPU is actually holding a GPU back (let's say 2021), the Ryzen 1700 is probably going to be doing measurably better than the 7700K.

All of that assumes no changes on AMD's side to work out the obvious implementation kinks that seem to be holding Ryzen back.



Nope, it doesn't take that long, and please don't use Adorned as a source of technical information ;), he is not good at that.

In 2021 it might do better because of multi-threaded performance, in theory, but not with its current CCX problems ;) Programmers have to code around those issues.

Now, Adorned forgot to mention, or should I say chose not to mention, that multithreaded game engines weren't used much when Bulldozer was released, at least not games that used 4 cores. Only around Ivy Bridge's time did developers really push for multithreaded engines that could work with all 4 cores / 8 threads, and this helped Bulldozer as well, though not as much as it did Intel chips, because BD had its own issues. He is comparing a 4-core/4-thread chip against an 8-core chip. As engines became able to use more than 4 threads, BD could take advantage of that to some degree, but that Intel chip couldn't, hence why you see that happen. As I think many have already stated, as engines get more multithreaded Ryzen will look better, up to a certain point; if that CCX problem isn't fixed, don't expect it to improve much. That is a pretty big bottleneck, and the more multithreading going on, the more problems that issue will create too.

So, hmm, calling other reviewers' methodology "crap": it's his inability to understand what has happened that makes his "crap" views come right back on him. The guy doesn't know shit about tech, how it has evolved, or why certain things happen. As for why people fall for the BS he spews, I guess they don't know their history or their tech either. The blind leading the blind is a real good way to educate oneself.

He is a newbie at this and doesn't understand tech. The tech press he took shots at from other websites are a hell of a lot better than he is, by miles. They know much more than he does, so before he goes and criticizes them, he should look at his own shortcomings and fix those.
Hope that clears it up for you,
 
Unlikely, yes, but the capabilities of this "backdoor" are worrying. I was indeed trolling a bit :p

Edit: The issue is that it's a Pandora's box type of situation. Should the worst happen, i.e. Chinese/Russian hackers get hold of it, there's no easy solution besides swapping CPUs, and that might be after the damage is done. I think it's just bad policy.

I get the concerns conceptually.
 
We will have to wait and see if a bunch of fixes will help the gaming performance. I'm hoping, but I'm not qualified to say whether it will or won't, like quite a few on here. ;)
 
It seems like AMD is good at planning too far ahead. With GPUs they double down on new APIs or HBM, and with CPUs it's been 64-bit or more cores. For instance, Mantle was useful in less than a handful of games, so far DOOM is the ONLY Vulkan game that actually runs better than it does in DX/OpenGL, HBM seems unnecessary when GDDR5X is just fine on the much faster Pascal cards, and NV's cards like the 1060 currently trade blows with the 480 in DX12 games but supposedly have a big performance upgrade coming with a driver update. AMD practically ignores current/near-future needs, and then, as soon as the new tech starts to become relevant, NV or Intel jump out with designs that are competitive with or beat AMD's while also being regarded as more 'premium' brands.
 
Nope, it doesn't take that long, and please don't use Adorned as a source of technical information ;), he is not good at that.

In 2021 it might do better because of multi-threaded performance, in theory, but not with its current CCX problems ;) Programmers have to code around those issues.

Now, Adorned forgot to mention, or should I say chose not to mention, that multithreaded game engines weren't used much when Bulldozer was released, at least not games that used 4 cores. Only around Ivy Bridge's time did developers really push for multithreaded engines that could work with all 4 cores / 8 threads, and this helped Bulldozer as well, though not as much as it did Intel chips, because BD had its own issues. He is comparing a 4-core/4-thread chip against an 8-core chip. As engines became able to use more than 4 threads, BD could take advantage of that to some degree, but that Intel chip couldn't, hence why you see that happen. As I think many have already stated, as engines get more multithreaded Ryzen will look better, up to a certain point; if that CCX problem isn't fixed, don't expect it to improve much. That is a pretty big bottleneck, and the more multithreading going on, the more problems that issue will create too.

So, hmm, calling other reviewers' methodology "crap": it's his inability to understand what has happened that makes his "crap" views come right back on him. The guy doesn't know shit about tech, how it has evolved, or why certain things happen. As for why people fall for the BS he spews, I guess they don't know their history or their tech either. The blind leading the blind is a real good way to educate oneself.

He is a newbie at this and doesn't understand tech. The tech press he took shots at from other websites are a hell of a lot better than he is, by miles. They know much more than he does, so before he goes and criticizes them, he should look at his own shortcomings and fix those.
Hope that clears it up for you,

Nothing he said was wrong. He compared the 2500K because that was the preferred part for pretty much everyone back then. I recommended against it and waited for SB-E, but it was 2500K or bust for the whole community at the time. Go back through the threads. I also don't want to make it seem like I'm really beating the drum for "it will be better than the 7700K some day." That isn't my point; it was more of an aside: the BD platform, and Piledriver in particular, even with its crappy ports, didn't get worse as games got better, and by the end, on modern games where multithreading became more prominent, it won out. I wouldn't even expect Ryzen to follow that path; it was more food for thought. Anything we think is the problem is conjecture based on 4 days of guessing.

My major point is that actual purchasing habits don't follow the path you propose. If I buy a 7700K now and get a 1070 because it plays BF acceptably at 1440, I'm not going to get the 2070 or 3070 just to play BF at 1440. It doesn't work like that. So say you're right, which history again says you aren't, though on this occasion you'll be closer to right than if you'd said it about BD. Even if 4-6 cores is the upper limit for MT gaming, which the 7700 with HT is in a prime spot for, and the 7700 stays the stronger gamer (which honestly it probably does): sure, if I still played BF1 when I got the 3070, the 7700 would probably still play BF at 1440 faster than Ryzen, and maybe the gap widens further. But then again, when you bought the 1700 or 7700K and chose the 1070, you did it because at that GPU-starved level the gameplay was enough for you. If the new card were that much better, you would just raise the image quality to the point where it puts more pressure on the video card, and performance levels out again. The even bigger point is that you don't buy a 3070 because it will play BF at 1440 better; you get it because BF 2019 has Frostbite 5.0 or something and DX13, and your old card is anemic for it. At which point you once again pick the card that works best for you at 1440, pit the R7 and 7700K against each other, and it's a draw.

Look at all the benchmarks out there. Really look at them; almost always these three things hold true: the R7 usually has the highest minimum FPS, can be pretty close in the average-FPS graphs, and gets beaten pretty soundly at max FPS, which lifts the average up. It's not a CPU that struggles to keep up with the workload; it just doesn't keep up with Intel. People don't play with a CPU bottleneck outside CSGO, and when and if CPU bottlenecks show up in regular play, it will be the 7700K that suffers for it.
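A quick toy example of why that max-FPS column can flatter the average while the minimum tells a different story (the frame-rate samples below are invented for illustration, not taken from any review):

```python
# Invented frame-rate samples: chip_a spikes high, which lifts its average,
# while chip_b holds a higher floor with modest peaks.
chip_a = [70, 80, 90, 100, 220, 240]
chip_b = [95, 100, 105, 110, 120, 130]

for name, fps in (("chip_a", chip_a), ("chip_b", chip_b)):
    print(name,
          "min:", min(fps),
          "avg:", round(sum(fps) / len(fps), 1),
          "max:", max(fps))
# chip_a min: 70  avg: 133.3  max: 240
# chip_b min: 95  avg: 110.0  max: 130
```

Which chip is which is beside the point; it just shows how a handful of very high peaks can win the average while the floor, the part you actually feel, is lower.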
 
Nothing he said was wrong. He compared the 2500K because that was the preferred part for pretty much everyone back then. I recommended against it and waited for SB-E, but it was 2500K or bust for the whole community at the time. Go back through the threads. I also don't want to make it seem like I'm really beating the drum for "it will be better than the 7700K some day." That isn't my point; it was more of an aside: the BD platform, and Piledriver in particular, even with its crappy ports, didn't get worse as games got better, and by the end, on modern games where multithreading became more prominent, it won out. I wouldn't even expect Ryzen to follow that path; it was more food for thought. Anything we think is the problem is conjecture based on 4 days of guessing.

My major point is that actual purchasing habits don't follow the path you propose. If I buy a 7700K now and get a 1070 because it plays BF acceptably at 1440, I'm not going to get the 2070 or 3070 just to play BF at 1440. It doesn't work like that. So say you're right, which history again says you aren't, though on this occasion you'll be closer to right than if you'd said it about BD. Even if 4-6 cores is the upper limit for MT gaming, which the 7700 with HT is in a prime spot for, and the 7700 stays the stronger gamer (which honestly it probably does): sure, if I still played BF1 when I got the 3070, the 7700 would probably still play BF at 1440 faster than Ryzen, and maybe the gap widens further. But then again, when you bought the 1700 or 7700K and chose the 1070, you did it because at that GPU-starved level the gameplay was enough for you. If the new card were that much better, you would just raise the image quality to the point where it puts more pressure on the video card, and performance levels out again. The even bigger point is that you don't buy a 3070 because it will play BF at 1440 better; you get it because BF 2019 has Frostbite 5.0 or something and DX13, and your old card is anemic for it. At which point you once again pick the card that works best for you at 1440, pit the R7 and 7700K against each other, and it's a draw.

Look at all the benchmarks out there. Really look at them; almost always these three things hold true: the R7 usually has the highest minimum FPS, can be pretty close in the average-FPS graphs, and gets beaten pretty soundly at max FPS, which lifts the average up. It's not a CPU that struggles to keep up with the workload; it just doesn't keep up with Intel. People don't play with a CPU bottleneck outside CSGO, and when and if CPU bottlenecks show up in regular play, it will be the 7700K that suffers for it.


Monitors have the lowest replacement rate of all 3 major components.

CPU is 4.5 years
GPU is 2.5 years
Monitors are 7 years.

Monitor upgrade times have been going down in recent years because of certain technologies, I agree, but still not down to CPU timescales and definitely not GPU ones.
 
Agreed. I ran a single 3007WFP from 2007 to 2013 and three 3007WFP-HCs from 2013 to 2015. I briefly owned 3x 2560x1440 ROG Swift monitors near the end of 2015 but moved on to a 48" 4K Samsung shortly after that. I ran that until the display developed a scorch mark in the image. It was just outside the warranty, so I replaced it with my current 49" 4K display. I'm very happy with it, and either something technologically amazing will have to come along or I'll probably stick with this for a couple of years at least.

I replace GPUs and CPUs much more frequently. I've been rocking my current setup for almost two years, but that's because Broadwell-E was underwhelming and I didn't feel the GTX 1080 offered enough over my Titan Xs to make the switch.
 
People said this about Bulldozer too. More cores = more performance in the future. Right?!?

It doesn't happen; a CPU which is worse at games now will very likely be worse at games in the future. Intel and AMD use very different architectures, and Intel's is better for game-type workloads.

But that is what the video showed didn't happen. At each point the GPU was upgraded it gained ground, and once it got to more present-day games it actually managed to surpass its original opponent. I'm not really even saying we should be looking at that, since it's so far out, but that is what people mean by future-proofing. More cores gave BD a lot more legs than its core design deserved. Ryzen isn't BD, and the power in the cores is easy to see: it's a strong CPU that doesn't measure up to the 7700 in gaming at low res right now. But it's not a dog. And if someone wants to look out 2-3-4-5 years, Ryzen, much like BD, will be in a safer spot.

It seems like AMD is good at planning too far ahead. With GPUs they double down on new APIs or HBM, and with CPUs it's been 64-bit or more cores. For instance, Mantle was useful in less than a handful of games, so far DOOM is the ONLY Vulkan game that actually runs better than it does in DX/OpenGL, HBM seems unnecessary when GDDR5X is just fine on the much faster Pascal cards, and NV's cards like the 1060 currently trade blows with the 480 in DX12 games but supposedly have a big performance upgrade coming with a driver update. AMD practically ignores current/near-future needs, and then, as soon as the new tech starts to become relevant, NV or Intel jump out with designs that are competitive with or beat AMD's while also being regarded as more 'premium' brands.

They can't be expected to completely redesign their arch every time a new feature or key ingredient gets added by Intel. What they did with the K8 and the Athlon 64/X2 was think ahead and get there before Intel. They tried the same with Phenom/BD/Zen. If the future catches up to them, it puts Intel in a tight spot, and if BD had worked they would have had several years of a head start. Instead, Ryzen is coming in at the cusp of real MT gaming, where Intel has the silicon to fight back against AMD but is reluctant to take the shake-up in server CPU pricing it would take to offer it to consumers. But the big point is, if they can't look ahead, guess right, and pull it off, they're SoL, because they don't have the resources to right the ship in a timely manner.
 
Not directed at anybody in particular, but a wise man once wrote:

"If you cannot conceive that your side can lose a competition, you have no business looking at a benchmark or commenting on it. You have lost touch with reality (if indeed you ever were in touch) and your opinion is worthless. You are terminally biased."


Some words to think about as we continue to bash each other.
 
Windows 7 is in some cases 18% faster than Win 10. This is before all of the board makers optimize their BIOSes so this thing can take advantage of faster RAM speeds. If I'm not mistaken, some people are running 3000 MHz at 15-15-15 timings already.
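For reference, the standard CAS-latency arithmetic on those timings (nothing Ryzen-specific, just the usual cycles-to-nanoseconds conversion):

```python
# CAS latency in nanoseconds = CAS cycles / memory clock (half the DDR data rate).
def cas_ns(data_rate_mt_s, cas_cycles):
    return cas_cycles / (data_rate_mt_s / 2) * 1000

print(round(cas_ns(3000, 15), 1))  # DDR4-3000 CL15 -> 10.0 ns
print(round(cas_ns(2133, 15), 1))  # DDR4-2133 CL15 -> ~14.1 ns
```

So 3000 MHz at 15-15-15 would mean both noticeably more bandwidth and lower absolute latency than common JEDEC DDR4-2133 kits.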
Ryzen is quickly proving it is a much better buy than Intel's 7700K, the new Celeron. Some people are simply blind and detached, not realizing what a huge success this CPU is for AMD, but more importantly for us end users.
 
Ryzen is quickly proving it is a much better buy than Intel's 7700K, the new Celeron.

You mean like how it is empirically worse at damn near every CPU-limited gaming benchmark done in the last week? At its current price point, Ryzen isn't "much better" for people who primarily want to play games.
 
Monitors have the lowest replacement rate of all 3 major components.

CPU is 4.5 years
GPU is 2.5 years
Monitors are 7 years.

Monitor upgrade times have been going down in recent years because of certain technologies, I agree, but still not down to CPU timescales and definitely not GPU ones.

I agree completely; that's why I stuck with a static resolution. I myself used a 1920x1080 23" from like 2012 till last Christmas, when I got the 27" 4K display I'm using now. But I don't see how it changes anything. That was my point from the beginning. You have a monitor, you have a system you built, and a video card you chose for its performance in the game you want to play. When it can't play whatever new game you want at your res and settings, you get a new card. Each time you do this you are putting the onus back on the GPU as the original bottleneck, and probably the new one as well. On older games you still have the previous card as a baseline: if it was good enough then, it should be good enough now.

Now on the reverse side: if games legitimately start utilizing CPUs more and put more stress on the CPU (let's say for AI, physics, or something new) at all levels, it won't be the R7 that suffers the most. That won't happen without heavy threading, and we're seeing that happen a little bit more with each release.

So the real issue becomes users running low res with a high-refresh monitor, in the several games where FPS for most CPUs caps out in the 150-180 FPS territory; the R7 isn't a good chip for that. Even if its frame rate is smoother, as some reviews imply, it probably hovers too close to the monitor's refresh rate to be comfortable. So I wouldn't suggest the R7 to those users. But I don't even know how many games that applies to. In most of the 1080p reviews I saw, low or normal settings gave 250-300 fps, and on Ultra they hovered at around 120, but at that point they were, as before, GPU bottlenecked.

This isn't me saying it's as good a gaming CPU as a 7700K or 6900K, or better; it's not. But I can't for the life of me find a situation where the performance delta actually impacts users. And this is ignoring the possible latency, memory, BIOS, microcode, and Windows bugs that still need to be worked out. I don't think we're seeing the best of the R7 yet, and it's still in a pretty comfortable spot performance-wise. Will it change? I hope so. I think so. But even if it doesn't, I don't see how it can't be a legitimate performance CPU, even for gaming.
 
Nah, man, burden of proof is on you here.

Well, that nails it.
First, I am going to say you owe me an apology. I have never lied, and I provided enough info that it should have been obvious I wasn't.

In these experiments, each micropattern consisted of two elements of different wavelengths: 620 nm (red) and 510 nm (green). The ED was 20 msec for each element, and the rise and decay times were less than 0.4 msec. The Ss were required to report if the micropatterns were the same or different. The four types of temporal alterations illustrated in Fig. 1 and the same set of four combinations (Fig. 2) were used as in the auditory experiments. Identical psychophysical procedures were employed.

When exposed to a rapid sequence of these two elements, all Ss reported perceiving a yellow flash. However, they could readily discriminate between two micropatterns (in which the temporal order of elements was reversed) and reported that they performed this discrimination by using a hue difference. All Ss reported that the appearance of the red-green sequence was slightly greenish-yellow, while the appearance of the green-red sequence was slightly orange-yellow. As the element asynchrony was decreased (in Type I alterations), discriminatory performance deteriorated (see Fig. 8). The experiment was then repeated with two elements having wavelengths of 600 and 528 nm. Since the results did not differ (at the 0.05 level), the data of both experiments were pooled. Figure 8 shows the pooled results for both AAs. The ordinate is the percentage of correct discriminations. The abscissa is the element asynchrony. The experiment was attempted for the third time with elements having wavelengths of 570 and 545 nm. Both Ss found it impossible to discriminate between the micropatterns and were at a chance level of performance at all values of element asynchrony.

http://download.springer.com/static...95d115d4f2e9d82edead735038795056b98ae747513ea

It was an actual study, and you can see that what I said was correct and within the context of the paper and research.

Again, to you and any others who said I was lying or needed to prove my post: I expect apologies from each of you, and if you still feel that you don't agree, then POST YOUR OWN LINKS to your own proof in response.
 
First, I am going to say you owe me an apology. I have never lied, and I provided enough info that it should have been obvious I wasn't.
Owe you an apology for refusing to provide evidence when the burden of proof was on you? Nah. I will be grateful that you did provide it, but that is about it. Actually, I will apologize for calling you out on BS anyway.
It was an actual study, and you can see that what I said was correct and within the context of the paper and research.
Yes, now all that remains for you is to connect it to the recognition of frame rates on computer monitors. And yes, the study aligns with findings from my own childhood. I can still track 75 fps animation down to a frame, however.
 
Owe you an apology for refusing to provide evidence when the burden of proof was on you? Nah. I will be grateful that you did provide it, but that is about it. Actually, I will apologize for calling you out on BS anyway.

Yes, now all that remains for you is to connect it to the recognition of frame rates on computer monitors. And yes, the study aligns with findings from my own childhood. I can still track 75 fps animation down to a frame, however.
Actually, I didn't refuse; I said I had to work and can't be on my phone, which I wouldn't be able to link from anyway.

I think you and everyone else, to varying degrees, mix up discerning a single frame at whatever rate with feeling smoothness. This is why subliminal messages were so effective: single frames that most, if not all, couldn't actually see. As in the test, you might have seen something out of place, but you could not consciously say what it was. The tests with fighter pilots show high results, but I find them unreliable because the pilots know what they are supposed to be looking for. They use plain screens and only flash the image for some fraction of a second, presumably a single frame. The problem is that the frame isn't embedded in other content, and they know to be looking for it and roughly what its contents are, a plane, though not the exact make.

I have no problem with people saying they can in fact feel the smoothness, and to some degree, up to 100 fps, I feel they can tell the difference between extreme frame rates, i.e. 30, 60, 90, but not 65, 70, 92, 95. Unfortunately, most claim they can in fact see individual frames, as in discern the actual image that lasted just one frame. But in the case of this discussion, I was defending the mod that one poster alluded to being an idiot who should not have their position because of their statement about frame rates over 90, which, as you have seen in my link, has scientific grounding.
 
You mean like how it is empirically worse at damn near every CPU-limited gaming benchmark done in the last week? At its current price point, Ryzen isn't "much better" for people who primarily want to play games.

If gaming were my only concern, I'd go with the Core i7 7700K hands down.
 
And in other news, my quest to actually get a motherboard seems in doubt again. B&H hasn't changed my order status to shipped or emailed me anything, and it's almost 11 PM EST.
 
If you only game, buy Intel and stop bitching. If you wanna game and do everything else a 6950X does, blow past the 7700K in multithreaded apps, encode and shit, well, get Ryzen and be happy. I doubt you'd notice the fuckin' difference at 1080p unless you are more concerned about the frame counter than the actual gaming experience.
 
And in other news, my quest to actually get a motherboard seems in doubt again. B&H hasn't changed my order status to shipped or emailed me anything, and it's almost 11 PM EST.
Maybe you will get a well-functioning motherboard as a result, instead of the stuff presently seen all around.
 
Well, these are initial reviews, man. Not good.

Just found this video; my friend isn't the only one, lol.



Oddly enough, this guy says 3 others were returning their Ryzen products, lol.

So that's another botched launch within 6 months: first Polaris and now Ryzen.


The funny thing... is the problem was probably the mainboard; defective RAM is usually detected.

The funnier thing is that two "internet fame-mongers" did a long video while driving around, complaining about a defective product, and their diagnosis was probably wrong.

How can you present this as evidence of anything other than these reviewers being incompetent?
 
The funny thing... is the problem was probably the mainboard; defective RAM is usually detected.

The funnier thing is that two "internet fame-mongers" did a long video while driving around, complaining about a defective product, and their diagnosis was probably wrong.

How can you present this as evidence of anything other than these reviewers being incompetent?


Because my friend had the same problem, and that is why I said the motherboards are really raw in the post prior to that, lol.

So it's not them being incompetent; it's AMD rushing things out before their partners were ready.

Well, these motherboards are really rough. A friend bought a Ryzen 1700X and motherboard, ended up getting an error 55, and it wouldn't POST; the manual says 55 is a memory error. He ended up returning it, as the shop he got it from didn't have any other motherboards in stock.

Just asked him to make sure; yeah, it was an Asus Crosshair Hero.
 
Really? My Asus Prime X370 Pro is functioning just fine. FUD a bit?
Yeah, yeah, we know; if you had bought a Phenom 9850, it would have overclocked to 4 GHz just fine too.

How's memory overclocking support? Got 3200C14 running with good subtimings yet? Ah, right, you can't: no external clock gen.
 