i7 for 3D renderings.

Empty_Quarter

Heya,

I am aware of the many "should I go with i7" threads out there, and many of you suggest sticking with regular 775 CPUs since there isn't much of an improvement in going i7 (which is the main reason I'm still on my old-gen 775 quad). That advice mainly applies to gaming and everyday use, though. Would you justify getting an i7 mainly for the purpose of rendering 3D models?

I am currently on a Q6600, and I'm really pleased with it so far. But I tend to get really impatient when I render. Yeah, *very* long waiting times are part of the job description, but something a little faster would definitely be nice.

Of course, at this point, all would agree that the i7 is indeed faster than the Q6600, but by how much, and would it justify the cost?

Please excuse my n00bness, I haven't been in the OC loop at all lately, so I have no idea how to overclock the i7, or how easy it actually is. I don't even know which motherboards are good for the job. I left off when the Maximus II Formula popped out :rolleyes:

I definitely do intend on overclocking, and I will be using a TRUE Black w/ 2x NF-P12 in push/pull.

I am on a slight budget, and if I do decide to get an i7, these are the parts I'm considering:

DFI LP DK X58-T3eH6 (no specific reason for this board; it seems to have a decent layout and has all that I need, plus I've never had a DFI board)
i7 920 (I can't afford any higher)
G.SKILL 6GB DDR3-1333 9-9-9-24 (on sale... so erm... yeah, would getting a DDR3-1600 kit be more worthwhile?)

bringing the total cost to about $680. Consider my max to be about $700.
 
I think it's worth it. HT is really something on i7, imho; 8 threads of computing power is nothing to sneeze at.
DFI is a solid board, and peeps are known to OC the heck out of the 920 on it (like the 4.5GHz range), but then you need a really good OCing CPU for that. Personally, I got a Biostar; it seems to be a solid board with plenty of features at minimum cost. All X58 boards are about the same as far as OCing goes; the only difference is in support and feature set.
The 920 is an excellent choice; no reason to get anything higher, as they all clock about the same when OCing, although the unlocked multi on the 965 does give it an edge. Personally, I've brought my 920 to 4GHz with HT on, and I can't complain :)
Now, for memory - you might wanna get these instead - http://www.ewiz.com/query.php?categ...5&y=15&src=DC&sid=u184741t1163693f0fp0c0s1056 - they're cheaper than your G.SKILL and rated higher ;)
It's just easier to OC with 1600MHz memory, since you'll need it to maintain the 2:8 ratio while OCing the CPU.
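To illustrate that ratio math, here's a rough sketch of how the X58 numbers work out. The BCLK values are illustrative, and the assumptions (the 920's stock 20x CPU multiplier, and the 2:8 ratio corresponding to an 8x DRAM multiplier) should be double-checked against your board's BIOS:

```python
# Rough X58 clock arithmetic (illustrative sketch, not a tuning guide).
# Effective DDR3 speed = BCLK x memory multiplier; the "2:8" ratio
# corresponds to an 8x multiplier, and the i7 920 runs a 20x CPU multi.

def ddr3_speed(bclk_mhz: float, mem_mult: int = 8) -> float:
    """Effective DDR3 transfer rate (MT/s) for a given base clock."""
    return bclk_mhz * mem_mult

for bclk in (133, 160, 180, 200):
    cpu_ghz = bclk * 20 / 1000  # stock 20x multiplier on the 920
    print(f"BCLK {bclk} MHz -> CPU {cpu_ghz:.2f} GHz, DDR3-{ddr3_speed(bclk):.0f}")
```

At BCLK 200 (a 4.0GHz CPU clock), the 8x ratio puts the memory at DDR3-1600, which is why 1600-rated sticks make the OC easier.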
 
Thank you, TehQuick. I am now certain I should get an i7. After looking at the DFI, I just realized that it, like many X58 motherboards, has only six expansion slots; this messes things up, as I kinda need seven slots, or at least a PCIe x1 above the first x16 slot. So I think I will settle on the vanilla P6T Deluxe. It has the typical ASUS features as well as eSATA, so I think I'll get that :).

Thank you for the memory link, I will get those then :D.
 
Does your renderer not support some kind of network or distributed rendering on multiple machines?

It seems like it would make much more sense to build a cheap secondary box to help with rendering. Depending on what you do, the gains could be much larger than moving up to an i7, and it would also be a lot cheaper.
 
Buy a FireGL or Quadro card; it will be much faster than the i7. 3D rendering is much more GPU dependent than CPU dependent.
 
^^^
That's great advice if you're doing videos, or just lots of frames. You can have each machine render every nth frame (for n machines; so every 2nd frame with 2 machines).
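As a quick sketch of that split (a hypothetical helper, not from any particular renderer's API):

```python
# Hypothetical sketch of the "every nth frame" split for a small farm.
# machine_id / num_machines would come from however each box is launched.

def frames_for_machine(machine_id: int, num_machines: int, total_frames: int):
    """Frame numbers this machine should render."""
    return [f for f in range(total_frames) if f % num_machines == machine_id]

# Two machines, 10 frames: machine 0 takes the evens, machine 1 the odds.
print(frames_for_machine(0, 2, 10))  # [0, 2, 4, 6, 8]
print(frames_for_machine(1, 2, 10))  # [1, 3, 5, 7, 9]
```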

alg7: Do the FireGL/Quadro cards accelerate production rendering, or just viewport rendering? Wouldn't they have to use Gelato or something for production?
 
I thought the GPU was primarily useful in real-time rendering. For now, I am only sticking to still renders, no videos or animations.

For the moment, I am using 3ds Max Design 2009 w/ good ol' mental ray, and I'm currently trying to get the hang of V-Ray for my future models. I also use Cinema 4D and AutoCAD.

I have always wanted a quadro FX card, but are those cards bloody expensive or what?

My first and original option was to replace the E2180 in my HTPC with a Q6600 and set up a small render farm between my rig and the HTPC. The problem with this... I don't have the slightest clue where to start with such a thing.
 
Yeah, based on that page at Autodesk, the GPU is used for real-time stuff so you can better approximate what your scene will look like when you hit render. (Edit: there are GPU-based rendering engines, but that may not be applicable.)

How much RAM do your scenes use while rendering? AFAIK, i7 + 64-bit would allow you greater memory allocation, so if you're maxing out the available physical memory on 32-bit, that could be a good option.
 
I'm not too worried about RAM; it isn't expensive enough to be a problem. I will start out with 6GB, and you can be sure I'll eventually get 12GB.
 
I thought the GPU was primarily useful in real-time rendering. For now, I am only sticking to still renders, no videos or animations.

Not anymore.
nVidia has been offering Gelato for a few years now (since the GeForce 6 series, I think), which can do non-realtime rendering on the GPU.
I believe Mental Ray also can offload some operations to the GPU now, and there might be other GPU-accelerated renderers available.

For Gelato, you don't need a Quadro card. A regular GeForce will do the trick just fine.

Having said that, if you're going to render on CPU, Core i7 is by far the fastest option.
 
I use my i7 for rendering. I noticed a HUGE improvement. I'm currently using Vue 7, and it can utilize all 8 "cores". I made a sample scene and benched it: my 4.4GHz E8600 took 6m7s, and my 4.0GHz i7 920 does it in 2m3s with HT on. HUGE difference when rendering day-long scenes.

DO IT.
 
:D

That's not really fair: you went from dual to quad, so the bulk of your performance increase stems from the two extra cores, not the differences in architecture.

As he already has a quad, he's not going to see such a vast difference; clock for clock, he'll see perhaps a 15% to 20% improvement.

Nothing to write off, but at the cost of a new motherboard, CPU, RAM, heatsink, PSU... is it worth it? I think not.
 
Well, if you do the math...
In a perfect world, where 4 cores are exactly twice as fast as 2 cores, he'd get 3m4s out of a Core 2 Quad at 4.4 GHz. In reality, the actual performance would be lower, since scaling won't be anywhere near 100%, and you most probably won't reach 4.4 GHz with a C2Q; 4 GHz is more likely.
Instead he gets 2m3s out of the Core i7. That's a WHOLE lot faster even than the theoretical rendering of a C2Q at 4.4 GHz; the C2Q would take about 50% longer on every rendering.

So yes, that is indeed a HUGE difference. In practice the C2Q might take 75% longer or more.
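Checking that arithmetic with the numbers quoted above (a quick sanity check, nothing more):

```python
# Sanity check on the C2Q-vs-i7 arithmetic (times from the post above).
e8600_time = 6 * 60 + 7    # 367 s on the 4.4 GHz E8600 dual core
i7_time    = 2 * 60 + 3    # 123 s on the 4.0 GHz i7 920 with HT

# Perfect-world C2Q at 4.4 GHz: exactly twice the dual core's speed.
ideal_c2q = e8600_time / 2  # 183.5 s, i.e. about 3m4s

print(f"Ideal C2Q would take {ideal_c2q / i7_time:.2f}x as long as the i7")
# -> 1.49x, i.e. roughly 50% longer, as claimed
```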
 
This is where I don't believe his results.

Read any review you wish: while the i7 is indeed faster when used correctly, it is nowhere near as fast as he is claiming above.

If his results are factual, and not a guess, then there was something seriously wrong with his configuration before.
 
The integrated memory controller @ ~28GB/s might be the cause of a big advantage on large rendering jobs. But a factor of 3x is pretty radical.
 
This is where I don't believe his results.

What the freak are you talking about, dude? Why would I lie? One of my main reasons for the i7 was rendering times; WHY would I give up a solid 4.4GHz OC otherwise? My gaming FPS are WORSE with this i7 than with my "old" E8600. Good grief. I'm providing a real-world example here. Vue 7's own white papers discuss multi-core scaling. They say something like:

2 cores 100%
3 cores 96%
4 cores 92%

They don't mention hyper-threading, because the article was written before the i7, and we all know the i7's four "virtual" cores don't equate to the speed of real cores.

That said, the results are almost to the second of what I was hoping for when I did a complete system rebuild hoping for at least 4.0GHz on the 920. I got it, it did it and I'm happy with it.

You mention 10%-20% not being a big deal in rendering speeds. Have you EVER rendered anything? That is a huge increase - it could mean many, many hours, or even days, depending on the complexity of the scene.
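For a back-of-envelope feel for those scaling figures (the base time is made up, and my reading of the percentages as per-core efficiency is an assumption; the numbers themselves are just the ones quoted above):

```python
# Hypothetical estimate using the quoted Vue per-core scaling figures.
scaling = {2: 1.00, 3: 0.96, 4: 0.92}  # assumed efficiency at each core count

def render_time(single_core_time: float, cores: int) -> float:
    """Estimated wall time, assuming cores * efficiency of useful work."""
    return single_core_time / (cores * scaling[cores])

base = 600.0  # pretend one core takes 10 minutes on a frame
for n in sorted(scaling):
    print(f"{n} cores: {render_time(base, n):.0f} s")
# 2 cores: 300 s, 3 cores: 208 s, 4 cores: 163 s
```

Even at 92% efficiency, going from two cores to four nearly halves the render time, which is why small percentage gains still matter on day-long scenes.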
 
Whoa... simmer down there bucko.

I've been in CGI for almost 20 years, started on the Commodore Amiga and Imagine 2.0.

I have a great amount of experience with 3D Studio (original and MAX), Lightwave, Softimage (and XSI), Maya and various other graphics applications.

At times it's how I've made my living; the rest of the time it's been IT, QA, repair, and networking services.

I'm no kid and I'm not stupid.

I've used the hardware you're using - the C2D, the C2Q, and i7s - ranging from stock to 4GHz+.

I'm saying your examples are not fair to the OP.

He already has a quad core; if he goes with the 920 and doesn't overclock it, he will see a 15%-20% performance increase.

In my world... yes, using computers for graphics, that simply doesn't justify the cost of the upgrade! UNLESS he's making his living from it, and thus time = money.

However, as with most people on this forum, I doubt that is the case.
 
Sorry to jump on ya, man, but when you say there was something wrong with my configuration and that you doubt my results, expect some friction. There was nothing wrong with my E8600 OC: rock solid, tight timings, low, low 10s 1M SuperPi runs, etc. - right on the money.

I had an Amiga 2000 - I miss those days. Sculpt 3D, NewTek... the Toaster... the good ol' days.
 
An upgrade is over 1000 bucks to get the lowest i7 available. It's just not in the cards by the time you buy RAM, motherboard, and processor.
 
WHAAAT??? Please say I read that out of context. It's more like $650-$700 to get into the lowest i7 with a decent overclocking motherboard and 6GB of DDR3-1600. And if you go with the cheapest mobo and only 3GB, you can sneak in around $550.
 
Buy a FireGL or Quadro card; it will be much faster than the i7. 3D rendering is much more GPU dependent than CPU dependent.

This statement couldn't be more false. Unless you're using a specific GPU renderer like Nvidia Gelato, 3D rendering is 100% CPU dependent. :)


To the OP: here is a benchmark showing the i7s versus some later Core 2 Quads. It may be worthwhile to you, it may not: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3448&p=17 . Either way, I would definitely consider the idea of using multiple machines to handle the rendering.
 
QFT. There are many professional 3D rendering apps that still don't support GPU processing. Mine, unfortunately, still doesn't. I just can't justify dropping $2K on one that does quite yet.
 
Have you played around with Gelato at all? I didn't like it at all... it's much different from most renderers and didn't give as significant a speed boost as you'd think it would :(


Oh well, nothing can replace V-Ray for me anyway... I've fallen in love with it :p
 
QFT. There are many professional 3D rendering apps that still don't support GPU processing.

+1

You're best off getting the i7 920 and stocking up on memory.
 