Intel new SSD 34nm?

Right now, the 64 gig X25-E model is $767. That's about $12 per gigabyte. If we believe the posts in this thread which talk about not filling the drive and leaving 25% free space, we're just shy of $16 per gig.

If we look at a high-end SAS drive, we find that it's about $1 per gig -- $320 for an Ultrastar 15K300 at 300 gigs works out to about $1.07 per gig. That is, an enterprise-class drive built around proven technology is roughly one-eleventh the price of the current SSDs.

To get 300 gigs out of the SSDs, we'd need to buy five units. Today, that's a cost of $3835. For that price, we can buy a nice RAID controller card and eight of the SAS drives to configure in whatever way we'd like. The performance of the SSD drives isn't going to get that much better when built into a RAID array, but the SAS drives will show much higher concurrency and a better IOPS score.
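For anyone who wants to poke at those numbers, here's the arithmetic as a tiny Python snippet. It uses only the prices and capacities quoted above; nothing else is assumed.

Code:
import math

# Back-of-the-envelope cost comparison using the prices quoted above.
X25E_PRICE, X25E_CAP_GB = 767.0, 64.0      # Intel X25-E 64 GB
SAS_PRICE, SAS_CAP_GB = 320.0, 300.0       # Ultrastar 15K300

ssd_per_gb = X25E_PRICE / X25E_CAP_GB                    # ~ $11.98/GB
ssd_per_gb_75pct = X25E_PRICE / (X25E_CAP_GB * 0.75)     # ~ $15.98/GB with 25% kept free
sas_per_gb = SAS_PRICE / SAS_CAP_GB                      # ~ $1.07/GB

units_for_300gb = math.ceil(300 / X25E_CAP_GB)           # 5 drives
cost_for_300gb = units_for_300gb * X25E_PRICE            # $3835

print(f"SSD: ${ssd_per_gb:.2f}/GB (${ssd_per_gb_75pct:.2f}/GB at 75% fill)")
print(f"SAS: ${sas_per_gb:.2f}/GB -> ratio {ssd_per_gb / sas_per_gb:.1f}:1")
print(f"300 GB of SSD: {units_for_300gb} drives, ${cost_for_300gb:.0f}")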

Working alone, and when it's working correctly, the SSD drive is very fast. Thing is, it is very easy to make it slow down significantly. The drive tanks its IOPS rate when the workload includes more than a trivial number of writes. IOMeter doesn't have to run "all day"; just a few minutes is all it takes.

Because of the pricing and manufacturer's literature, I think it's obvious that these drives are marketed to the enterprise consumer, so I don't think this is a straw man at all.

The drives are inappropriate for consumers because of the cost. The drives are inappropriate for most enterprise applications because of their failure mode and limitations in the implementation. The prices will come down and the implementations will get better, but I think there needs to be a very significant breakthrough in the wear-leveling implementation before anyone can take these drives seriously, particularly for those applications. TRIM is not that breakthrough.

While some consumers might end up buying them and getting away with the product's considerable limitations by presenting it with a reduced load, I don't think there's any confusion on my part about what the marketing materials for the drives say, or about their place in the pricing strata.

Indeed, better storage is key to PC performance. The slowest thing a PC can do is disk I/O, and while disks have gotten substantially larger over the years, they haven't gotten much faster. The industry is yearning for a breakthrough in this area, and while flash-based SSDs showed some promise, they haven't delivered either the pricing or reliability customers expected.
 
Nice post, mikeblas :)

I'd like to add, too, that Flash-based SSDs are relatively unproven technology so far. HDDs have been in use for many decades, while SSDs are more a thing of the last few years. Considering that, with a mere 2 writes per second to each Flash block (512 kB), a 30 GB SLC SSD will be out of write cycles in less than an hour, it hardly seems fool-proof technology. Worse, the number of write cycles keeps declining with every process shrink. I would rather use a RAM disk than an SSD to speed up database transactions and such within my company if I had to choose.
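The endurance math behind a claim like that is simple enough to sketch. The cycle ratings below are generic assumptions for illustration (real parts vary widely), so treat this as a way to see how the lifetime scales rather than a statement about any particular drive.

Code:
# Time for a constantly rewritten flash block to exhaust its rated cycles.
# The cycle ratings below are generic assumptions for illustration only;
# plug in the actual part's rating and the real write rate to get a useful number.
WRITES_PER_BLOCK_PER_SEC = 2                 # the rate used in the post above

for rated_cycles in (10_000, 100_000):       # ballpark MLC and SLC ratings
    hours = rated_cycles / WRITES_PER_BLOCK_PER_SEC / 3600
    print(f"{rated_cycles:>7} cycles / {WRITES_PER_BLOCK_PER_SEC} writes per second "
          f"= about {hours:.1f} hours per block")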
 
Writes-to-failure is another substantial concern for these drives. Let's say that wear-leveling works great, and we deploy a rack of ten of these drives in a database server. The server is chugging along, running a company and printing money.

At a busy site, that database might see a few thousand I/O operations per second. They'll be distributed pretty evenly across the drives if we're using RAID. But the drives will eventually fail because we'll eventually exceed the limited amount of writes that the flash cells can handle.

What happens then is very interesting. If we had built the array out of spinning mechanical drives, we'd probably find that one fails every so often. But since we're using a safe RAID level, we can tolerate a failure or two. The controller alerts us, sends an email, somebody gets paged, and someone goes to the server room and puts in a new drive, triggers a rebuild, and we've got no worries.

But our array is made of flash SSD. So, maybe we're not so lucky.

First, we can't really use RAID5 for SSDs. RAID5 means that all the parity writes go to a single drive, and that single drive will "bake" faster than any other drive in the array--it'll hit its maximum write count faster than the others, and need to be replaced more frequently.

As the wear-leveling algorithm on that drive starts to notice that the drive is wearing out, what does it do? Does it pitch a SMART event? Does it take longer and longer to complete writes as it searches for mappable space? Or does it just shut down, going into read-only mode to turtle itself and protect the data?

We can side-step that issue by choosing RAID6. Or, maybe we choose RAID1 or RAID10. I think we can consider those schemes together, since their access patterns are similar: some request for a logical sector is mapped to a physical sector on a physical drive. Of course, that "physical" request is then mapped again by the drive to the real location in the flash memory, or on the surface of one of the platters.

It's entirely possible that hot spots develop, though. Say I have a 25 gigabyte database file. The file probably has an allocation table, telling me which of the pages in the database file are in use, and which aren't. There's probably a table of metadata in the file, too, that tells me how many rows are in each table, the last access time of each table, and so on. These pages in the file get re-written constantly; almost with every other access to the database.

It's possible that the application develops its own hot spots, by modifying the same application-level data in the same place, often.

This can be a problem for physical drives, too; but it's just a performance issue that can be addressed by the configuration of the array. One drive is a little busier than others, but that doesn't mean it wears out faster.

For flash-based drives, though, that's precisely what it means: that drive will see writes far more often than other drives, and it will fail sooner.

If we assume all the drives are accessed equally, we have a more troubling problem: they'll all reach their maximum write count at the same time, and need to be replaced at once! Even if that's not strictly true, the distribution of the drives' arrival at their write-count limits ends up spanning a very short amount of time. Even if the drives fail gracefully, we're left with a higher likelihood that more than one drive will fail at once, defeating the array and causing expensive downtime.
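Here's a toy illustration of that clustering effect -- a sketch with made-up numbers, not a model of any real drive. Give every drive the same hypothetical write budget, spread the load almost evenly, and the failure times land within a narrow window relative to the array's lifetime.

Code:
import random

# Toy illustration of the clustering argument: N drives share the write load
# almost evenly, each has the same (hypothetical) total write budget, and only
# a small load imbalance separates their failure times. All numbers are made up.
random.seed(1)
N_DRIVES = 8
WRITE_BUDGET = 1_000_000          # total writes a drive can absorb (hypothetical)
BASE_RATE = 500                   # writes per drive per day before imbalance

failure_day = sorted(WRITE_BUDGET / (BASE_RATE * random.uniform(0.95, 1.05))
                     for _ in range(N_DRIVES))

spread = failure_day[-1] - failure_day[0]
print(f"first drive exhausted on day {failure_day[0]:.0f}, last on day {failure_day[-1]:.0f}")
print(f"all {N_DRIVES} drives fail within {spread:.0f} days "
      f"({100 * spread / failure_day[-1]:.1f}% of the array's lifetime)")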

I'm not sure these problems really happen in practice. On the other hand, I haven't found anything that tells me the drives have a feature (or features) that mean the problems won't happen.

Elledan's point about the technology not being proven is very important. We know what spinning drives do; we know how they work, how they perform, and how they fail. We--the industry--have very little experience with flash-based drives. It's new and treacherous territory.

High-end systems are written with the knowledge of the performance characteristics of their hardware components. We write software in a certain way because we know how CPUs and memory systems work. We write applications in a certain way because we know how hard drives react and perform. Flash-based drives change the rules--and change the rules in unanticipated and undiscovered ways. We're a long way from writing software that takes those performance characteristics into account.
 
First, we can't really use RAID5 for SSDs. RAID5 means that all the parity writes go to a single drive, and that single drive will "bake" faster than any other drive in the array--it'll hit its maximum write count faster than the others, and need to be replaced more frequently.
I agree with the point of the post, but would like to point out that RAID-5 rotates parity among all array members. RAID-3 and RAID-4 do not, which is why they are considered obsolete.

http://www.acnc.com/04_01_05.html
 
One thing that makes me hesitate on SSDs is that I'm not clear about what happens when they fail. Would an SSD fail entirely or start flaking out when it dies, like hard drives do, or will it start silently corrupting blocks of data at a time? Do these 'mainstream' drives even have ECC or parity to help mitigate that? What's the estimated non-recoverable error rate? Will SSDs become more or less reliable as the technology matures - especially considering the drive to make them cheaper by trying to store more bits per cell?

If given the choice between a mechanical hard drive that will generally fail outright or an SSD that silently corrupts my data long before I know about it (and consequently infects my backups), I'll choose the former.

Basically, all this talk about SSD performance is meaningless to me if it's not reliable. Unfortunately, all these press releases and news articles seem to address performance and price.
 
We're talking about the consumer drives, which are marketed to consumers and work fantastically for 99% of the people who buy them.
Yes, the price/GB is high, but most people don't need more than 80GB for applications, so for $300 or less (Vertex) they can get a drive that will provide a bigger improvement in responsiveness and "feeling fast" than any other possible upgrade.

But great job hijacking this thread about a new product with your ranting and raving about how SSDs won't hold up in your extremely unique server environment, something applicable to almost no one.
 
Hey Mike,

Scratch the gist of the reply; I didn't catch that it was the X25-E you had started replying against, as the thread up to that point was mostly touching on the X25-M. Obviously, at roughly 3x the price of the M series, the E should be held up to more scrutiny.
 
One thing that makes me hesitate on SSDs is that I'm not clear about what happens when they fail. Would an SSD fail entirely or start flaking out when it dies, like hard drives do, or will it start silently corrupting blocks of data at a time?

You will start losing drive capacity as more and more blocks are marked unwritable because they have used up their maximum write cycles. On the other hand, due to the way the algorithms in these drives work, you would probably reach the maximum write cycles on all blocks at roughly the same time...
 
I agree with the point of the post, but would like to point out that RAID-5 rotates parity among all array members. RAID-3 and RAID-4 do not, which is why they are considered obsolete.
Oops! Yes, you're right. I always misremember the difference between RAID4/RAID5 and RAID5/RAID6 in this way.

But great job hijacking this thread about a new product with your ranting and raving about how SSDs won't hold up in your extremely unique server environment, something applicable to almost no one.
There's nothing unique about my server environment. In fact, throughout this thread, I've made no reference to any of the server environments I run or use.

The issues I describe apply to any use of the drives in varying degree, depending on the application. They might just take longer to manifest.

The fact remains that the rumors about the new Intel drives built around the 34nm process don't make any mention of these issues being addressed.
 
You will start losing drive capacity as more and more blocks are marked unwritable because they have used up their maximum write cycles. On the other hand, due to the way the algorithms in these drives work, you would probably reach the maximum write cycles on all blocks at roughly the same time...

What about the data that's not being written? At what point would the data just sitting there become corrupt? Do these mainstream SSDs have parity to prevent a flipped bit from corrupting data? What are the chances of that happening?
 
What about the data that's not being written? At what point would the data just sitting there become corrupt? Do these mainstream SSDs have parity to prevent a flipped bit from corrupting data? What are the chances of that happening?

Such systems are used in hardened or embedded applications (aerospace, automotive, etc.), but I haven't heard whether or what flash SSD manufacturers are using for error detection or correction in the device memory.
 
Sorry mikeblas, I do not agree with your comments and have not encountered any of the problems you are describing with enterprise class SSD in enterprise class environments.

http://www.stec-inc.com/ sources the SSD I use.

I have enjoyed consistent, predictable performance with a level of I/O throughput more than an order of magnitude beyond what I could get with Fibre Channel or SAS drives. You need to stop comparing consumer solid state with enterprise magnetic. There are enterprise solid state solutions in place, now, and in use, and they shine.

The price point is high so they have to be targeted carefully. For high volume OLTP behind distribution systems, e-tailing etc., they work very well.

I think the point is that "enterprise" covers a wide range of requirements. I do agree that at the lower end, SAS trays work fine. I don't see HP offering solid state in a $30k MSA box, nor NetApp nor any of the others; the performance requirements to demand SSD just don't exist in such environments. At the upper end of the spectrum, things are very different... I can do in two cool-running, vibration-free trays what would otherwise require a couple of chassis full of spindles and all the requisite stabilization, floor space, PDU and cooling to do with conventional storage.
 
You need to stop comparing consumer solid state with enterprise magnetic.
He was talking about Intel's E series. Regardless, all of Intel's drives use the same controller, which is where the issues described originate.
 
Don't you need something like 12 15k SAS drives to match the IOPS of just one X25-E? Also, SSDs scale almost linearly if the controller can handle it. I don't know where you got the information that it can't.
 
Don't you need something like 12 15k SAS drives to match the IOPS of just one X25-E?
It depends on the read/write mix, the random/sequential mix, and whether or not the X25-E drive is degraded.

Sorry mikeblas, I do not agree with your comments and have not encountered any of the problems you are describing with enterprise class SSD in enterprise class environments.
The universe is full of things that still exist, though some individuals haven't seen them.

There is indeed a wide range of "enterprise" solutions. Since Intel bills its line of drives as an "enterprise" solution, I'm calling it that. Large-scale storage appliances like you're using do exist, and the ones I'm familiar with use an alloy of different technologies: battery-backed dynamic memory, for example, that absorbs writes and then flushes them to the flash-based storage atomically and asynchronously. This hides the problems of the underlying raw flash implementation, increases concurrency and longevity, and so on.

But that's not what this thread is about; we're talking about the Intel drives here. It's pretty apparent that Intel does intend users to build storage arrays out of them, as with HP's MSA line of enclosures and controllers. I'm not sure on what basis you claim that a certain range of performance requirements doesn't exist, but I believe it does, and given a drive that would work in such applications, people would build the arrays and use them.
 
We are of course talking about Flash-based SSDs, which were first released in 1995. The first SSDs were actually RAM-based :)
A decade and a half of enterprise usage is a far cry from "a couple of years".

rflcptr said:
He was talking about Intel's E series. Regardless, all of Intel's drives use the same controller, which is where the issues described originate.
Intel's E series is "Extreme" not enterprise (http://www.intel.com/design/flash/nand/extreme/index.htm). The fact is, even the people on [H] can't afford true enterprise level hardware (in general, never mind enterprise grade SSD's).

WTF are we even having this conversation? If you are using an X25-E for your Enterprise database storage you're an idiot.
 
It's entirely possible that hot spots develop, though. Say I have a 25 gigabyte database file. The file probably has an allocation table, telling me which of the pages in the database file are in use, and which aren't. There's probably a table of metadata in the file, too, that tells me how many rows are in each table, the last access time of each table, and so on. These pages in the file get re-written constantly; almost with every other access to the database.
I stopped reading at this point, as AFAIK SSDs don't re-write in place, they write to a new sector of the disk.
 
I stopped reading at this point, as AFAIK SSDs don't re-write in place, they write to a new sector of the disk.
One problem is that you give up too easily. Another might be that you don't see the difference between logical sectors and physical sectors. That translation, and the slow erasure that happens behind it, is exactly the problem with these drives.
Intel's E series is "Extreme" not enterprise (http://www.intel.com/design/flash/nand/extreme/index.htm).

Did you read the "for servers, storage and workstations" part? How about the "Enterprise applications" part? The part where the copy compares this drive to other server drives? How about the "Solid state drives in the Enterprise" whitepaper?

The fact is, even the people on [H] can't afford true enterprise level hardware (in general, never mind enterprise grade SSD's).
But the companies that we work for can.

WTF are we even having this conversation? If you are using an X25-E for your Enterprise database storage you're an idiot.
Your judgement is as inappropriate as it is wrong.
 
A decade and a half of enterprise usage is a far cry from "a couple of years".

Wide-scale use of Flash SSDs would take at least a decade more. Even then, 15 years of wide-scale usage would only begin to approach the vast amount of data we have on HDDs in enterprise environments. The 1995 Flash SSDs would have been more akin to the 5 MB, washing-machine-sized 'HDD' IBM was producing in the 1950s, which was used almost exclusively in enterprise environments and provided essentially the start of the HDD revolution. That's about 50 years of experience right there.
 
I think you're wrong; they cut prices a while back due to competition -- I think this was back in April or so.

They cut prices last December too, citing the economy. Back then the only competition they had was JMicron.
 
Any idea whether those new Intel SSDs will feature SATA 3 Gb/s or 6 Gb/s?
I think the 3 Gb/s SATA interface is the limiting factor for the current crop of SSDs.
 
I'm ready to bet a buck that the new Intel SSDs will stay at SATA 3 Gb/s rather than move to 6 Gb/s!
 
Intel's E series is "Extreme" not enterprise (http://www.intel.com/design/flash/nand/extreme/index.htm).
From your link:
All Intel® X25-E Extreme SATA Solid-State Drives are tested and validated on the latest Intel-based server and workstation platforms, for your peace of mind.
Enterprise applications place a premium on performance, reliability, power consumption and space. Unlike traditional hard disk drives, Intel Solid-State Drives have no moving parts, resulting in a quiet, cool storage solution that also offers significantly higher performance than traditional server drives. Imagine replacing up to 50 high-RPM hard disk drives with one Intel® X25-E Extreme SATA Solid-State Drive in your servers — handling the same server workload in less space, with no cooling requirements and lower power consumption. That space and power savings, for the same server workload, will translate to a tangible reduction in your TCO.
The Intel® Extreme SATA Solid-State Drive (SSD) offers outstanding performance and reliability, delivering the highest IOPS per watt for servers, storage and high-end workstations.
:p

And then:
WTF are we even having this conversation? If you are using an X25-E for your Enterprise database storage you're an idiot.
...

Testing them is idiotic?
 
Some people seem to be confusing the delays caused by having to first erase a block before rewriting data there (pre-TRIM) with mikeblas' issue, which is that heavy I/O overloads the wear-leveling algorithm, causing the drive to slow down and then die.

Reads aren't a problem, obviously. Sustained writes cause trouble, as they overwhelm the controller and the wear-leveling algorithm. You can reproduce the problem with IOMeter or SQLIO, or whatever tool you'd like. Just build a test with a mix of writes and reads, such that the writes will cover the whole drive's capacity. While you're reading from the device, the controller chipset won't have time to prune its leveling list; it will fall behind, degrading write performance and eventually bricking the drive.
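If you'd rather script the workload than drive IOMeter by hand, something along these lines gives you a write-heavy random mix. It's a sketch only: the path, region size, run time, and read/write mix are placeholders, and a rigorous test would bypass the OS page cache rather than just calling fsync.

Code:
import os, random, time

# Rough sketch of a mixed read/write test (not a substitute for IOMeter/SQLIO).
# TARGET is a placeholder: point it at a scratch file on the device under test,
# never at a file you care about. Without O_DIRECT the OS page cache absorbs
# much of the I/O; the periodic fsync below only partially compensates.
TARGET = "/mnt/ssd/scratch.bin"       # hypothetical path on the drive under test
SIZE = 8 * 1024**3                    # 8 GiB test region (placeholder)
BLOCK = 4096
WRITE_FRACTION = 0.7                  # write-heavy mix
RUN_SECONDS = 300

buf = os.urandom(BLOCK)
fd = os.open(TARGET, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SIZE)

ops, start = 0, time.time()
while time.time() - start < RUN_SECONDS:
    offset = random.randrange(SIZE // BLOCK) * BLOCK
    if random.random() < WRITE_FRACTION:
        os.pwrite(fd, buf, offset)
    else:
        os.pread(fd, BLOCK, offset)
    ops += 1
    if ops % 20_000 == 0:
        os.fsync(fd)                  # push dirty pages toward the device
        print(f"{ops} ops, {ops / (time.time() - start):.0f} IOPS so far")

os.close(fd)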

Doing a lot of writes with some reads overloads the chipset trying to process the wear-leveling.

It sounds to me like simply using a faster/more powerful processor should take care of it. I'm sure that's easier said than done, but if the problem is really that the chipset can't keep up, then upgrading the chipset so that it can keep up seems like it would eliminate the problem.

Current SSD users ("power users") generally know about some of the SSD issues and how to avoid them. I'm guessing that most home users probably wouldn't run into this issue due to the special treatment they give SSDs and common home usage patterns. Not impossible, just not all that likely. Even a number of enterprise applications won't run into this. However, if a steady stream of writes totaling only a couple of times the drive's capacity, mixed with some reads, can wreak havoc on an SSD like this, it really doesn't seem like a finished product.
 
Doing a lot of writes with some reads overloads the chipset trying to process the wear-leveling.

It sounds to me like simply using a faster/more powerful processor should take care of it. I'm sure that's easier said than done, but if the problem is really that the chipset can't keep up, then upgrading the chipset so that it can keep up seems like it would eliminate the problem.

Flash memory isn't like regular static or dynamic memory. Static and dynamic memory can be rewritten just by writing a new value into a particular address. Flash memory has to first be erased. Then, it can be written. This process is glacially slow when compared to static memory, and really slow when compared to dynamic memory.

Reading from flash memory isn't fast, either, but it's not bad and getting better. Writing to the memory is pretty slow, but it's the erasure step that is the killer: it takes forever, in relative terms. It's just plain slow, taking hundreds of microseconds when other operations take dozens of nanoseconds.

The firmware on the drive itself, then, has two duties. One is simply to find a good sector that isn't occupied and is a reasonable candidate because it isn't a hot spot -- it has more life left than the other available candidates. Another is to manage the queue of erasures that needs to happen. A cell can't be written unless it has first been erased. Finding a cell that's both erased and a good candidate takes a bit of time.

A good algorithm and a faster processor will be helpful in making the search take less time. I think it's a solved problem, though; allocation is a well understood problem, and the data volume isn't really that large.

What seems to be happening with the Intel drives is that it's very easy to outrun the queue of needed erasures. The host is trying to perform a write, but no erased block is available to accept the data. Note that this doesn't mean the drive is full -- it just means that all of the sectors are dirty. That can happen just by repeatedly re-writing the same sector faster than the drive can erase new sectors to accept the data.

The fix in the firmware seems to be that the drive no longer falls off the bus when the erasure work queue grows too large. This happens before the drive runs out of available erased sectors. The issue still remains that the drive -- the technology, in fact -- can end up falling behind the host's requests because the erasure takes so long. This is when the performance degrades; the drive is too busy erasing to accept writes.

I think some work can be done in order to help concurrency. That is, writes could be buffered, then go to flash parts that aren't busy while others are busy erasing. I think some of the vendors implement this, but it's still not quite enough. The real solution is either a huge amount of actual storage to implement a smaller amount of effective storage, or a flash process that simply doesn't require the erasure step. (That's not a part of the 34nm announcement.)
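To make the erase-queue argument concrete, here's a toy flash-translation-layer loop -- my own sketch, not Intel's firmware. The timings are invented; only their relative magnitude (an erase being far slower than a write) matters. Once the pool of pre-erased blocks drains, every write eats a full erase and the average latency collapses.

Code:
from collections import deque

# Toy FTL sketch: a write needs a pre-erased block, and erasing is far slower
# than programming. When the host writes faster than blocks can be erased, the
# clean pool drains and every further write stalls for a full erase.
# Timings are invented; only their relative magnitude (erase >> write) matters.

WRITE_US, ERASE_US = 100, 2000       # per-block program vs. erase time (made up)
IDLE_US_PER_WRITE = 150              # idle time available between host writes

clean = deque(range(1000))           # pre-erased blocks, ready for data
dirty = deque()                      # stale blocks waiting to be erased
idle_credit = 0                      # accumulated idle time spent erasing
elapsed = 0

for i in range(1, 5001):             # sustained write burst
    if not clean:                    # erase work has fallen behind: stall
        elapsed += ERASE_US
        clean.append(dirty.popleft())
    dirty.append(clean.popleft())    # program one block; its old copy is garbage
    elapsed += WRITE_US

    idle_credit += IDLE_US_PER_WRITE # background erasure during idle gaps
    while dirty and idle_credit >= ERASE_US:
        clean.append(dirty.popleft())
        idle_credit -= ERASE_US

    if i % 1000 == 0:
        print(f"write {i:>5}: {len(clean):4d} clean blocks, "
              f"average {elapsed / i:6.0f} us per write")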
 
How about splitting the drive into different functional zones so that you can write and erase simultaneously? Perform erasures in parallel with write requests so that the write queue never outruns the erase queue.
 
How about splitting the drive into different functional zones so that you can write and erase simultaneously? Perform erasures in parallel with write requests so that the write queue never outruns the erase queue.

I don't understand how this is different than what the drives try to do currently.
 
Interesting how this thread has turned into the general SSD discussion thread. I don't mind, but find it funny.

I've been lurking for months on the DSS forum, watching the evolution of SSD's. Almost sprung for an X-25M when they were first released, then again a few months ago when Newegg cut the price for their shell shocker deal.

I saw this thread when it started, and have been following it with excitement. I'd love to see more than just the Inquirer pick up this story. At this point, I just can't see myself paying a premium for "old" technology. I'll be getting the new, 34nm intel offering (even at a price premium) when it's released, and hope that'll be in the next few weeks.
 
I'm not sure what's funny about an interesting conversation. Conversations twist and turn; it's natural in human communication, particularly when many people are involved.

What benefit do you think the 34nm process drives bring you that make them worth paying more?
 
I see that there are theoretical problems with SSDs. Are there any reports of real world implementations that have run into these problems?
 
I see that there are theoretical problems with SSDs. Are there any reports of real world implementations that have run into these problems?
Probably not. Since the issues are foreseeable, nobody would put the drives into production use where they would fail.

To get a bit back on topic, the two weeks seem to be almost up, and no new news.
What I've been reading is "before the end of July". We're not there yet.
 
I don't understand how this is different than what the drives try to do currently.

Similar, but with a dedicated channel for erasing data that cannot be used for writing. That way the drive is SIMULTANEOUSLY erasing a section in anticipation, or during idle time, well before writes can bottleneck on pending erases (this should also improve copying to and from the drive).
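One way to sanity-check that idea is to compare the rate at which writes consume pages against the rate at which a dedicated erase unit can reclaim them. The timings and pages-per-block figure below are purely illustrative assumptions, not any vendor's datasheet, and this ignores the cost of consolidating partially valid blocks before they can be erased -- which is mikeblas' point about the translation layer.

Code:
# Can a dedicated erase channel keep up with sustained writes?
# All timings are illustrative assumptions, not any vendor's datasheet, and this
# ignores the cost of consolidating partially valid blocks before erasing them.
PAGE_PROGRAM_US = 100        # time to program one page
BLOCK_ERASE_US = 2000        # time to erase one block
PAGES_PER_BLOCK = 64         # pages reclaimed by a single block erase

write_pages_per_sec = 1_000_000 / PAGE_PROGRAM_US                   # host demand
erase_pages_per_sec = PAGES_PER_BLOCK * 1_000_000 / BLOCK_ERASE_US  # reclaim rate

print(f"host consumes {write_pages_per_sec:,.0f} pages/s")
print(f"dedicated erase channel reclaims {erase_pages_per_sec:,.0f} pages/s")
print("keeps up" if erase_pages_per_sec >= write_pages_per_sec else "falls behind")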
 
I'm not sure what's funny about an interesting conversation. Conversations twist and turn; it's natural in human communication, particularly when many people are involved.

What benefit do you think the 34nm process drives bring you that make them worth paying more?

Just funny that such a narrow topic turned so broad. This topic was started about a specific SSD line from a specific manufacturer, with specific dates for a product refresh. It's turned into what I see as the best debate to date on this forum on the actual merits of an SSD. I applaud you for taking your stance; it's just funny that no one would know to find such a discussion in a thread titled "Intel new SSD 34nm".

As for why I'll be buying the new technology, the new size offering (320 gigs, although I may wind up getting the 160), coupled with supposedly better performance. Given what a great job Intel did with the X-25M (versus JMicron and Samsung based controllers, and even the newer Indilinx), I'm optimistic performance will be very good.
 
...As for why I'll be buying the new technology, the new size offering (320 gigs, although I may wind up getting the 160), coupled with supposedly better performance. Given what a great job Intel did with the X-25M (versus JMicron and Samsung based controllers, and even the newer Indilinx), I'm optimistic performance will be very good.

Have you found any other information, other than the anticipated announcement stuff on The Register and other sites, and the fact that Intel themselves confirmed the 34nm production agreements?
 