SSD compatibility with drive-imaging software, TrueCrypt

PHiZ

Limp Gawd
Joined
Apr 7, 2006
Messages
416
My understanding is that, because of wear-leveling and other SSD voodoo, there is another logical layer interposed between the OS and the hardware. Does this have an effect on tools that access the drive at a low level?

For example, if I image an 80 GB mechanical hard drive and attempt to put that image/partition on an SSD, is it a straightforward process?

What about TrueCrypt in container mode? TrueCrypt allocates a big-ass file and then does a lot of operations on the internals of that file. What about TrueCrypt in whole-disk mode?

What about other applications that I might not have thought of? (It is my understanding that defragging is largely redundant because of the way the SSD natively spreads files across the whole of the disk.)
 
With TrueCrypt, due to the nature of the encryption schemes, access always appears random to the disk. You will see a significant boost in the access time of the container when you switch to an SSD because there is no mechanical head. Because of the wear-leveling, writes are going to be slower than reads. But in the end you will be limited by how fast your CPU can decrypt the data. (A C2Q9550 on a P45 northbridge can hit about 400 MB/s in the benchmarks.)
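If you want a rough number for your own machine, OpenSSL's built-in cipher benchmark is an easy way to measure raw AES throughput (this is a stand-in, not the exact benchmark quoted above; TrueCrypt uses AES in XTS mode, so its real ceiling will be somewhat lower than what this reports):

```shell
# Benchmark AES-128-CBC on this CPU using the hardware/assembly paths (-evp).
# The throughput figures printed at the end approximate the ceiling an
# encrypted volume can sustain, regardless of how fast the underlying SSD is.
openssl speed -evp aes-128-cbc
```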
 
Do you have a link to those benchmarks? That seems pretty interesting, I have always wanted to see some benches like that.
 
With TrueCrypt, due to the nature of the encryption schemes, access always appears random to the disk. You will see a significant boost in the access time of the container when you switch to an SSD because there is no mechanical head. Because of the wear-leveling, writes are going to be slower than reads. But in the end you will be limited by how fast your CPU can decrypt the data. (A C2Q9550 on a P45 northbridge can hit about 400 MB/s in the benchmarks.)

Depends on the encryption settings, though.
 
My understanding is that, because of wear-leveling and other SSD voodoo, there is another logical layer interposed between the OS and the hardware. Does this have an effect on tools that access the drive at a low level?
Yes and no.
For example, if I image an 80 GB mechanical hard drive and attempt to put that image/partition on an SSD, is it a straightforward process?
Yes, it is straightforward; in this case it doesn't make any difference to imaging or partitioning the drive, *unless you are trying to align the partitions, etc. (This was necessary with JMicron drives, optional now: http://www.ocztechnologyforum.com/forum/showthread.php?t=48309)
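For the imaging itself, a minimal sketch on Linux would just be `dd` (the device names here are placeholders, not from this thread; double-check them with `lsblk` first, since dd will happily overwrite the wrong disk):

```shell
# Clone the whole mechanical drive to an image file, then restore it to
# the SSD. /dev/sdX = source HDD, /dev/sdY = target SSD (placeholders).
dd if=/dev/sdX of=hdd.img bs=4M conv=noerror,sync
dd if=hdd.img of=/dev/sdY bs=4M
```

Any sector-level imaging tool works the same way here: the SSD's wear-leveling layer is invisible to it, so the copy behaves just like it would on a platter drive.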

What about TrueCrypt in container mode? TrueCrypt allocates a big-ass file and then does a lot of operations on the internals of that file. What about TrueCrypt in whole-disk mode?
I am not familiar with how TrueCrypt does its voodoo, so I can't comment on that, but I doubt it makes any difference to it.
What about other applications that I might not have thought of? (It is my understanding that defragging is largely redundant because of the way the SSD natively spreads files across the whole of the disk.)
Defragging is not just redundant but actually a bad idea on an SSD; defragging an SSD will just wear it out sooner.
 
Yes and no.

Yes, it is straightforward; in this case it doesn't make any difference to imaging or partitioning the drive, *unless you are trying to align the partitions, etc. (This was necessary with JMicron drives, optional now: http://www.ocztechnologyforum.com/forum/showthread.php?t=48309)

Thank you, I am going to read your link and some of the links contained therein, and try to better understand what you mean by aligning the partitions. I read somewhere that it was wise to use the Win7 built-in partition tool during a Win7 install, because it did some "alignment" magic. If you would like to comment further, it would be appreciated.
 
What partition alignment is: getting the logical clusters or blocks in the partition to match the physical blocks on the drive.

Drives (both SSD and platter) store data in blocks, not individual bits, both to increase performance and to decrease complexity. When you create a partition, the OS doesn't pay attention to where those physical blocks fall relative to the logical blocks it uses. If they don't match up exactly, then for any read or write the drive will likely have to touch the physical blocks on both sides of the data to get all of it. For the old JMicron drives, which had problems with lots of small data transfers, this really worsened their performance.
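The check itself is just arithmetic: does the partition's starting byte offset land on a physical block boundary? A minimal sketch (the 512 B sector and 4 KiB block sizes are typical assumed values, not from this thread):

```python
# Check whether a partition's start is aligned to the drive's internal
# block size. LBA 63 is the old XP-era default start; LBA 2048 (1 MiB)
# is what the Win7 partitioner uses, which is why its "magic" helps SSDs.
SECTOR_BYTES = 512  # logical sector size reported by the drive

def is_aligned(start_lba, block_bytes=4096):
    """True if the partition's first byte falls on a block boundary."""
    return (start_lba * SECTOR_BYTES) % block_bytes == 0

print(is_aligned(63))    # False -> every cluster straddles two 4 KiB blocks
print(is_aligned(2048))  # True  -> clusters map 1:1 onto physical blocks
```

Starting at 1 MiB is convenient because it is a multiple of every block size a drive plausibly uses internally, so you don't need to know the exact erase-block geometry.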
 