Cache and processors

ProOC

Limp Gawd
Joined
Jun 17, 2003
Messages
129
I remember in the '90s most of the CPUs coming out didn't have much cache, and it didn't grow much at all on anyone's core, and I was just wondering why.

Was it because the cache was so expensive?

I think I am starting to see why our new processors aren't yielding much "more" bang for the buck and I think I am going to throw up.

I also kind of thought that it could have been because they were just pushing the MHz more and more and getting a higher frequency from the chip, and that was giving them the performance they (and the end user) needed, without tons of cache (and cash :D ).

Now Intel has even stated they are not going to be releasing higher-frequency chips in the near future, and plan on focusing on putting new technologies in place and adding more cache to get more performance out of their design.

Well, this is all fine and dandy if I get a lot more performance and I don't have to pay a lot more money. They start piling on the cache, and I'm assuming that piles more onto the price. Is cache still really expensive to implement on chip?

Back in the day you had some high-priced CPUs hit pricewatch.com, but not a ton of them, and not all the ones that would be a decent upgrade for you.

Even video cards are getting priced a little ridiculously. All this stuff is getting more complicated, hence more production costs and R&D costs. They still need to figure out a way to develop high-tech computing beyond the performance of my currently OC'ed machine, and they need to do it efficiently and less expensively. I think it's possible, as they have done it for me in the past. Thank you Intel and AMD.

Uhm, I guess this is a rant or something, and I just wanted to get other people's thoughts on the subject. I could be all wrong or just weird, but I just came up with this.
 
Well, adding more cache onto the die makes the die larger. That means fewer CPUs per wafer, and each die has a bigger chance of catching a defect, which translates to lower yields and higher costs to the end user. So, in theory, as long as they are able to keep the die of the CPU to a certain size, the yields should not be affected too much, and the price should remain relatively stable (and hopefully low).
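The die-size economics above can be sketched with a toy model. All the numbers below (wafer diameter, die areas, defect density) are made up for illustration, and the exponential yield model is a textbook simplification, not anything the manufacturers publish per chip:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Crude estimate: wafer area / die area, ignoring edge loss and scribe lines."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

def yielded_dies(wafer_diameter_mm, die_area_mm2, defects_per_mm2):
    """Exponential (Poisson) yield model: a bigger die is more likely to catch a defect."""
    candidates = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    yield_fraction = math.exp(-defects_per_mm2 * die_area_mm2)
    return int(candidates * yield_fraction)

# Doubling die area (say, by piling on cache) on a 300 mm wafer:
small = yielded_dies(300, 100, 0.005)   # 100 mm^2 die
large = yielded_dies(300, 200, 0.005)   # 200 mm^2 die
print(small, large)
```

Doubling the die area here more than halves the good dies per wafer, since you get fewer candidates and each one is more likely to catch a defect; that is exactly why keeping the die small keeps the price stable.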

One of the problems in the current situation is not just in the hardware implementation, either. To really take advantage of these kinds of things (more cache, dual core, etc.), new software algorithms must be written. It's the same type of thing as a new instruction set. With a larger cache, the logic for dealing with cache hits/misses becomes more complex and difficult to implement. Dual-core implementations are closer to a change of instruction set, since software will have to support that implementation to use its full capability (at least I think that's the way it is with dual-core chips. I don't think the CPU itself is in charge of distributing processes between cores... Feel free to correct me if I'm wrong though).

So, I believe that is the current situation, and how we will be getting benefits out of new processors. It becomes more than just a hardware implementation challenge at this point, as software must be able to utilize this stuff.
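As a rough sketch of the software side of that: a dual-core chip only helps if the program splits its work into pieces the OS scheduler can place on both cores; a single-threaded program stays on one core no matter how many are available. The chunking scheme below is just an illustration, not how any particular application does it:

```python
# Sketch: splitting one job into per-core chunks the OS can schedule in parallel.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=2):
    # One chunk per worker (ideally, per core).
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same answer as a single-threaded sum
```

The answer is identical either way; the difference is only that the chunked version gives the scheduler two runnable pieces instead of one, which is the rewrite the post above is talking about.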
 
One of the reasons they didn't put cache on old chips was that they didn't need to.
Why? The memory was as fast, or close to as fast, as the CPU. Now that our CPUs run at FSB multipliers of 20 and such, you need a lot of cache to keep the CPU fed. When the multiplier is 1x or so, cache isn't as necessary.

Of course, back then you had 8 MHz CPUs and such, and it was much easier to get RAM to run at that speed :)
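That gap can be put in numbers with some back-of-envelope arithmetic (the 150 ns memory latency below is an assumed round figure, not a measurement of any specific system):

```python
# How many CPU clock cycles one main-memory access "costs".
def stall_cycles(cpu_mhz, mem_latency_ns):
    cycle_ns = 1000.0 / cpu_mhz      # length of one CPU cycle in ns
    return mem_latency_ns / cycle_ns

print(stall_cycles(8, 150))     # ~1 cycle on an 8 MHz CPU: RAM keeps up
print(stall_cycles(3000, 150))  # hundreds of cycles on a 3 GHz chip
```

At 8 MHz a memory access costs about one cycle, so cache buys you almost nothing; at 3 GHz the same access costs hundreds of cycles, which is exactly what the cache is there to hide.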
 
Is cache still really expensive to implement on chip?

define expensive?
At 90 nm, you can put about 50 times the number of SRAM cells (a bit of cache consists of an SRAM cell) in about the same space as you could on the first Pentiums.
So the full 1 MB of cache on Prescott is only maybe 20-25% larger in physical size than the 16 KB of cache on the first Pentiums. (Actually maybe a little more than that, because cache architecture has grown in complexity as well as size.)

In terms of power?
SRAM needs about 1/10th the active power (for the same area) that logic does. So it ends up being much more power efficient to build a small execution core that's kept busy by a large cache.


And as ambientZ very correctly pointed out, RAM latency, in terms of CPU clock cycles, is far higher now than it was back in the 386/486 days.

Not to mention programs are much larger, and data locality tends to be weaker.
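A concrete picture of that locality point: both loops below compute the same sum, but in a compiled language the first walks memory sequentially (one fetched cache line feeds several iterations) while the second jumps a whole row's worth of bytes each step and misses far more often. Python's own overhead hides the timing difference, so the access patterns, not the speed, are the point here:

```python
def sum_row_major(matrix):
    total = 0
    for row in matrix:            # consecutive elements: cache-friendly
        for x in row:
            total += x
    return total

def sum_col_major(matrix):
    total = 0
    rows, cols = len(matrix), len(matrix[0])
    for c in range(cols):         # strided access: cache-hostile in C/C++
        for r in range(rows):
            total += matrix[r][c]
    return total

m = [[r * 10 + c for c in range(10)] for r in range(10)]
print(sum_row_major(m), sum_col_major(m))  # same answer, different miss rates
```

Programs whose hot data fits a pattern like the first loop get much more out of a given amount of cache than ones that scatter their accesses like the second.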
 
ProOC said:
I remember in the '90s most of the CPUs coming out didn't have much cache, and it didn't grow much at all on anyone's core, and I was just wondering why.

Was it because the cache was so expensive?
Yes, because it was off-die, and before that, off-chip. It costs much more (relatively) to make separate packaging for the cache. Now that manufacturers have shrunk transistors below 180 nm, it has become economically feasible to move all of the cache on-die. As manufacturers move to the 90 nm process and beyond, they free up space to include more and more cache.
Now Intel has even stated they are not going to be releasing higher-frequency chips in the near future, and plan on focusing on putting new technologies in place and adding more cache to get more performance out of their design.
Yes, due to technical and economic reasons, they have stopped the drive towards 4 GHz and beyond, for now. Their new strategy of adding cache and other internal enhancements is mostly, in my opinion, an interim solution while they prepare a fundamentally superior design, like the current Pentium M.
Well, this is all fine and dandy if I get a lot more performance and I don't have to pay a lot more money. They start piling on the cache, and I'm assuming that piles more onto the price. Is cache still really expensive to implement on chip?
No, it's not.

The manufacturer's cost can be broken down into yield and die area.

If yields are good, the number of wasted CPUs and silicon goes down, thus their internal cost goes down. The opposite is also true, and Intel is probably experiencing some of this with their high-end Pentium 4s.

Die area is the second major factor. This is where cache comes in. x amount of cache takes y transistors, for the most part. So if you wanted 2x cache, you would need roughly 2y transistors. Now, if we shrink the size of the transistors, those y transistors take up less space, less die area.

Basically, if yields are good, the manufacturer's cost is related to the die area required per chip, regardless of how much cache is fit into that die area. Now, your cost might be higher for a chip of 2x cache than the same chip that has x cache. That's just marketing ;)
 