Starfalcon
Will AIB custom cards be available on launch day?
I would guess so since they have been showing them off in PR.
> What I am dying to know is the physical size of these cards, both the 3080 and the 3090. I have a new build using a Fractal Design Node 804 case, and for the graphics card I am limited to 290mm if I want to keep one of the front fans in place, or 320mm if I am willing to remove one of the front fans. I am leaning towards the 3090 but it really comes down to space.

Nvidia finally posted their specs on their website. The 3080 is 285mm long (112mm wide) and the 3090 is 313mm long (138mm wide). The 3080 is listed at 320W while the 3090 is listed at 350W. Both recommend a 750W PSU.
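For anyone doing the same clearance math, here's a quick sketch using the FE lengths from Nvidia's spec page and the Node 804 limits mentioned in the post (AIB cards will vary, so treat this as a rough check):

```python
# Quick fit check: FE card lengths vs. the Node 804 clearances mentioned above.
limits = {"front fan kept": 290, "front fan removed": 320}  # mm
cards = {"RTX 3080 FE": 285, "RTX 3090 FE": 313}            # mm, Nvidia specs

for card, length in cards.items():
    fits = [cfg for cfg, limit in limits.items() if length <= limit]
    print(f"{card}: fits with {', '.join(fits) if fits else 'neither config'}")
```

So the 3080 FE clears either way, while the 3090 FE only fits with a front fan removed.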
The 3080 has 8,704 CUDA cores and a boost clock of 1.71GHz. The 3090 has 10,496 CUDA cores and a boost clock of 1.7GHz.
> I'm curious if the Big Navi announcement was waiting for today. I'm thinking AMD will announce before the Ampere cards are available. If not, they're probably not competitive. Thoughts?

AMD has to do something; Jensen just bullied the hell out of their perf/$ argument at the high end. It's like being in a fight: at least scratch his ass.
> I'm curious if the Big Navi announcement was waiting for today. I'm thinking AMD will announce before the Ampere cards are available. If not, they're probably not competitive. Thoughts?

My thought is AMD better have something good, or Nvidia just spanked them again.
> Holy fucking shit... My 2080ti just got spanked by the 3070. lol. The number of CUDA cores this generation is going to be absolutely bonkers. I can't wait to see some specs. I am totally happy with my 2080ti overall and it will be good for years to come, but those 3xxx cards are just insane.

The 3070 has 5,888 CUDA cores, a boost clock of 1.73GHz, and a length of 242mm (112mm wide).
> HDMI 2.1?

Yes.
I just wish they had priced the 3090 at $1,300 max. The 2080ti was around $1,200 at launch, right?
The 3070 has 5,888 CUDA cores, a boost clock of 1.73GHz, and a length of 242mm (112mm wide).
> ...the higher models have way more CUDA cores to handle the workload at a lower clock speed.
> Correct. The cheapest 2080ti came in at $999 for the EVGA Black Edition, but that was released months after the regular 2080tis hit the market.

Yeah, and with the 3090 FE starting at $1,499, that is certainly going to put a hurting on some wallets out there. But considering that it replaces the Titan line and has 24GB of memory, the price isn't bad, even though it is more expensive than the outgoing 2080 Ti FE was at release. The Titan RTX still goes for $2,499.
> Looks like Nvidia is counting 2x for actual CUDA cores, since Ampere can do 2 operations per clock on the shaders where Turing could do 1 operation per clock. This should greatly increase rasterization speeds, and I wonder why that was not highlighted or emphasized more? Having a 360Hz monitor and new G-Sync and then playing Marbles at 30fps at 1440p is not something I think many would want to do for any length of time, even being as pretty as it is.

Hitting the nail on the head. CUDA core count is way more important than boost clock.
1060 to 3070
That'd be a solid upgrade, right?
Just basic gaming at 1080p @ 60. COD is probably the most graphically intense game I play.
Looks like Nvidia is counting 2x for actual CUDA cores, since Ampere can do 2 operations per clock on the shaders where Turing could do 1 operation per clock. This should greatly increase rasterization speeds, and I wonder why that was not highlighted or emphasized more? Having a 360Hz monitor and new G-Sync and then playing Marbles at 30fps at 1440p is not something I think many would want to do for any length of time, even being as pretty as it is.
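That "2 ops per clock" point is exactly where the headline FP32 numbers come from. A rough sketch of the math, using the core counts and boost clocks posted earlier in the thread (each CUDA core counted as one FMA = 2 FLOPs per clock):

```python
def fp32_tflops(cuda_cores: int, boost_ghz: float, flops_per_clock: int = 2) -> float:
    """Peak FP32 throughput: cores x FLOPs-per-clock x clock (GHz) / 1000."""
    return cuda_cores * flops_per_clock * boost_ghz / 1000

# Core counts and boost clocks quoted earlier in the thread.
for name, cores, clock in [("3070", 5888, 1.73), ("3080", 8704, 1.71), ("3090", 10496, 1.70)]:
    print(f"RTX {name}: ~{fp32_tflops(cores, clock):.1f} TFLOPS peak FP32")
```

That works out to roughly 20, 30, and 36 TFLOPS, which lines up with the peak FP32 figures Nvidia is advertising.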
> Holy fucking shit... My 2080ti just got spanked by the 3070. lol. The number of CUDA cores this generation is going to be absolutely bonkers. I can't wait to see some specs. I am totally happy with my 2080ti overall and it will be good for years to come, but those 3xxx cards are just insane.

I have no regrets owning the 2080 Ti for almost 2 years, as I got to enjoy the highest FPS possible, but boy, do I feel bad for new 2080 Ti owners who bought theirs not too long ago.
Link for those interested: https://www.nvidia.com/en-us/geforc...id=nv-int-cwmfg-49069#cid=_nv-int-cwmfg_en-us
> Without a doubt. Massive upgrade, more than likely.

Mmk, neat. I guess it's PCIe 4.0 too, which I already have since I switched to Team Red recently.
I'm actually kind of surprised at how close the boost clock is for all three models, especially for the 3090. The Titans and Tis of the past typically had lower peak clocks than the lower models. To only have a difference of around 0.03GHz (30MHz) between the lowest and highest card is pretty sweet. Now, the base clocks have a larger spread, but my guess is that has to do with targeting certain power-savings goals, along with the higher models having way more CUDA cores to handle the workload at a lower clock speed.
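To put a number on that spread, using the boost clocks quoted in the thread:

```python
# Boost clocks quoted earlier in the thread (GHz).
boost_ghz = {"RTX 3070": 1.73, "RTX 3080": 1.71, "RTX 3090": 1.70}

# Spread between the fastest- and slowest-boosting card, in MHz.
spread_mhz = round((max(boost_ghz.values()) - min(boost_ghz.values())) * 1000)
print(f"Boost clock spread across the stack: {spread_mhz} MHz")
```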
> After seeing them pump the gas on the streaming elements they will be enabling through their software stack, I know a lot of fledgling streamers will be interested. I know my son will want one.

A lot of people are going to discount the additional hardware and software stack that enable features for livestreaming... but this is technology that pushes capabilities wider. Previously you'd want twelve or sixteen cores to do that, a treated room, a sound setup that rivals the price of the compute hardware, and so on.
> I have no regrets owning the 2080 Ti for almost 2 years, as I got to enjoy the highest FPS possible, but boy, do I feel bad for new 2080 Ti owners who bought theirs not too long ago.

Oh for sure, I'd be super pissed too. I don't have any regrets with the 2080ti either. I had a 2080 initially and did a Step-Up when the Tis came back in stock, and I regret nothing. It's still an insanely powerful card and will be good for me for at least another couple of years.
> I have no regrets owning the 2080 Ti for almost 2 years, as I got to enjoy the highest FPS possible, but boy, do I feel bad for new 2080 Ti owners who bought theirs not too long ago.

When they know a new GPU is on the horizon and they buy the top end anyway, that's kind of their fault.
I have no regrets owning the 2080 Ti for almost 2 years, as I got to enjoy the highest FPS possible, but boy, do I feel bad for new 2080 Ti owners who bought theirs not too long ago.
Holy fucking shit... My 2080ti just got spanked by the 3070. lol. The number of CUDA cores this generation is going to be absolutely bonkers. I can't wait to see some specs. I am totally happy with my 2080ti overall and it will be good for years to come, but those 3xxx cards are just insane.
I think Nvidia is playing with the CUDA core numbers a bit, as in using a different definition than the 20x0 series. I'm also guessing their "x is 2 times faster than y" benchmark in the presentation was probably with RTX on. I have a hard time believing a 3080 is that much faster than a 2080ti in non-RTX games. Guess we'll have to wait for the benchmarks.
Side note - it's sad to not have HardOCP around to benchmark these cards. I know Kyle has a lot of stuff going on, but I really wish he could fire up the site again.
Digital Foundry put out a video that shows the 3080 running about 80% faster than a 2080 in non-RTX games. In RTX situations, it got a little closer to the 2x number Jensen went on about. Either way, 80% over a 2080 is ridiculous. A 2080ti is what, 30% faster than a 2080? So you're still looking at a 40-50% increase over a 2080ti. That is massive.
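Sanity-checking those percentages (the 1.8x and 1.3x figures below are the rough numbers from the post, not measured results):

```python
speed_3080_vs_2080 = 1.80    # Digital Foundry ballpark, non-RTX
speed_2080ti_vs_2080 = 1.30  # rough consensus figure from the post

gain = speed_3080_vs_2080 / speed_2080ti_vs_2080 - 1
print(f"3080 vs 2080 Ti: roughly {gain:.0%} faster")
```

That lands just under 40%, so right at the low end of the range quoted, and still a huge generational jump.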
Holy shit, really? That's frickin' nuts. In real-world gaming?
Looks like I'll be selling/upgrading my EVGA 2080S Hybrid and going to a 3080. Just waiting for EVGA's offering.
Step-Up or are you past the 90 days?
> That's what you get for Samsung 8nm. Remember, Turing was essentially an enhanced 16nm, so the process had aged and improved, allowing clock speeds up over 2GHz. It will be interesting to see if Nvidia stays the course with Samsung or goes back to TSMC, where they will have better improvements than Samsung, resulting in higher core clocks. Just look at the PS5 clocks as an example: beastly for the wattage and cooling that thing is going to be dealing with.

Here are my thoughts on it. I am not disappointed in the clock speeds; in fact, I am quite impressed, more so for the 3090, seeing that its boost clock is so close to the 3070 and 3080 at only a 30MHz spread. Considering that the transistor count increased by 50%, CUDA cores at least doubled, and who knows about the actual number of RT and Tensor cores, to see them hold a higher clock value is something. What will be interesting to see is how much higher the cards are able to boost while maintaining thermal targets, especially with this new cooling solution.

The Founders Edition 1080 Ti (16nm) had a stock boost clock of around 1.58GHz, the 2080 Ti (12nm) 1.63GHz, the Titan Xp (16nm) 1.58GHz, and the Titan RTX (12nm) 1.77GHz. Going from 16nm to 12nm didn't net too much of a gain clock-speed-wise, and I wouldn't have expected going from 12nm to 8/7nm to net much either, especially factoring in the increased complexity of the chip, as well as them maybe focusing on other areas of optimization rather than going purely for clock speed (almost the Intel vs AMD argument currently going on). While I agree that TSMC 7nm may have been a little more efficient than the Samsung process, I don't think the end result was going to differ hugely considering the sizes of these chips versus previous generations.
The GDDR6 really gimped them on memory amounts; it's too bad they weren't able to get to 12 or 16GB on the closer-to-mainstream cards. Something with performance close to the 2070 and 16GB of memory for 500 bux would be hard to pass up for the many games that still don't give a shit about RTX.
After seeing them pump the gas on the streaming elements they will be enabling through their software stack, I know a lot of fledgling streamers will be interested, I know my son will want one.
I'm really disappointed we don't get any benchmarks today. There was one screen that really grabbed my attention, the performance-per-watt chart:
[Attachment 275291: Nvidia's performance-per-watt chart, Control at 4K]
Here we see that at 240W, Turing is hitting 60fps, with Ampere at say 90fps. That would give us the 33% performance gain typical of a recent refresh, not an entirely new fab node plus architecture. At 320W, we see about 105fps for Ampere, which would be a 43% increase over Turing. At (assuming) 350W (the 3090 spec), that would equate to a little over 50% faster. Now there are other considerations: the graph says Control at 4K but doesn't mention ray tracing, which I would assume is off, so this is just a shader snapshot. I think that going with Samsung hurt them power-wise; they would have been able to get closer to these improvements near the same power envelope otherwise.

For those of us who aren't dying for more ray tracing and still want enough shader power, it's not as much as the presentation would have led you to believe, yet still acceptable. I guess we'll never really see the days of a 70% straight increase, watt for watt, in raw fps like we did in older architectures. What are your thoughts on this?
The only card I can see myself getting is the RTX 3090. The 10GB of memory on the 3080 pains me to no end, especially given what's on offer from the console manufacturers. I understand why they had to do it with the type of memory chips they wanted to use (see the AnandTech writeup about this), but puleeeeeeze, it's still 700 bux for the cheapest card, a 10GB 3080.
However, I'll probably be buying my son a 3070 when they get released, as I flipped my 2080ti last week for almost what I paid for it at launch.