FlexRAID (Flexible RAID)

spectrumbx

Introducing FlexRAID: www.flexraid.com

As a typical [H]TPC user, I ended up writing my own RAID software since nothing out there fit my needs.

Things I was looking for were:
- Parity data protection (of course)
- No need to run a specialized OS
- No need for additional hardware
- Ability to turn off my hard disks when not in use
- Use hard drives of any size in the RAID
- Lot of flexibility

Well, as I wrote FlexRAID, I got all that and then some. :)

As stated on the website, FlexRAID is not meant to obsolete other RAID solutions. Rather, it provides one more option to consider among many.

I find it especially ideal for those of us backing up our movies and other media content to hard drives.

Read more on it at: www.flexraid.com :)

Private release: http://www.openegg.org/forums/posts/list/3.page
Calling on developers: http://www.openegg.org/forums/posts/list/6.page
 
interesting stuff. I'm currently in dire need of a storage solution, and the prime candidate has always been RAID 5 with 3-4 500gb disks. This would store torrents/movies/filez etc. on my machine, which is usually on 24/7. I hadn't heard of unRAID or this before now, so I'll definitely investigate more....
 
hmm sounds interesting, I will take a closer look at this
 
just to make sure before I nuke something: am I safe if I want to play around with this but the only free space I have is a single 375gb partition that is half full (I just need to create some folders right?)

also: can I create a data set on a 120g drive that is 100% full (and use 120g of my 400g) as parity without losing any data?

^^sorry but I'm a RAID newb, and command lines terrify me.
 
just to make sure before I nuke something: am I safe if I want to play around with this but the only free space I have is a single 375gb partition that is half full (I just need to create some folders right?)

also: can I create a data set on a 120g drive that is 100% full (and use 120g of my 400g) as parity without losing any data?

^^sorry but I'm a RAID newb, and command lines terrify me.

1.
The number one thing in FlexRAID is data safety.
So, your data is perfectly safe.

2.
Yes, you can test FlexRAID using a single drive by using folders as data source and parity target.
So, you can have in your configuration:
data=C:\data1;C:\data2;C:\data3
parity=C:\parity1;C:\parity2

3. Yes, you can use 120GB of the 400GB drive as parity space without any risk to the existing data.
 
[screenshot: flexraidprob1yk6.jpg]

I fail at software raid :(

edit: would you prefer us post here or the AVS thread? (or your forums?)
 
thrawn86: your problem is that you shut down the server before you ran the create task. Start the server process, start the client, then run the create task.
 
thanks, that worked perfectly. I tried it out with a few folders worth of music backups and all was restored perfectly. the checksums matched too!

great stuff you've got here spectrumbx, now I just need a few more drives :)
 
How is this implemented? Is there a possibility that I might change a file and the parity would not be updated? What happens when I delete files that are protected, does the parity have to get updated? What algorithm do you use for parity?

I think this is an interesting idea, but I'd want to see a lot more details, and possibly the source code, before I used it for my data.
 
[screenshot: flexraidprob1yk6.jpg]

I fail at software raid :(

edit: would you prefer us post here or the AVS thread? (or your forums?)

Posting here, at AVS, or my forum should be fine.
Post wherever you feel you will get the best support.

You have also shown me a bug in that screenshot (thanks!).
The log4j.properties file is missing from the client's install directory.
It is really a non-issue unless you want better logging on the client side.
The host service side is not affected.

To fix this bug, copy the log4j.properties file from the host service's install directory to the client's install directory.

In log4j.properties, you can change the value of "log4j.logger.com.tchegbe=warn" to one of trace, debug, info, warn, error, fatal (most to least verbose).

You can also change the path where the log file is written [log4j.appender.default.File=log.log].
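For example, a hypothetical excerpt of that log4j.properties (only the two property names are from this post; the values shown are just illustrative):
Code:
	log4j.logger.com.tchegbe=debug
	log4j.appender.default.File=C:/FlexRAID/Client/log.log
This would turn on debug-level logging for the client and write the log file to a specific path instead of the default log.log.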
 
just echoing this from the AVS forums; these are things I'm concerned about as well
avs said:
The Ugly
As has been stated...FlexRaid is a snapshot raid. That means that (other than adds) it doesn't handle file MACD at all.
Move: if FlexRaid host is running, moving a file errors-out the app and (if an rsynch is attempted) locks up the moved file.
Add: works great as advertised
Change: file changes are NOT picked up...even using rsynch. Parity files must be deleted and completely rebuilt. If you "restore" a changed file without rebuilding parity, you end up with the original file (even after a successful rsynch post-adding new files).
Delete: deletions are NOT picked up...even using rsynch. App errors-out unless you do a restore to get deleted files back or rebuild parity from scratch.
What does this mean? Unless you like frequent full parity calculations, FlexRaid is really most useful for completely static directory and file systems. That means movies, music (if you don't constantly update your tags), and pictures. FlexRaid is NOT useful for file backups, PVR files,

eagerly awaiting live!
 
How is this implemented? Is there a possibility that I might change a file and the parity would not be updated? What happens when I delete files that are protected, does the parity have to get updated? What algorithm do you use for parity?

I think this is an interesting idea, but I'd want to see a lot more details, and possibly the source code, before I used it for my data.
Most of your questions are answered on the website (www.flexraid.com).

Your data is very safe.
I have 4+TB of data parity-protected by FlexRAID. :)
It is also very easy to test how it works as thrawn86 has been doing.
You can also use it under VMware to truly mimic your storage topology.
 
Is there a possibility that I might change a file and the parity would not be updated?

well, he does say it's only suitable for archival use atm. currently file delete/modify aren't working. basically you wouldn't want to use this on your docs folders, because in order to pick up changes/deletes you'd have to rebuild the parity every time. on the other hand, it's suitable for my purposes because I'm a data packrat and my files are fairly static.

I'm going to play with some encrypted volumes tonight to convince myself it will work alright.
 
I ordered a 5-bay backplane and 3x 500g drives today :D. I'll do 1 500g parity, 2x500g data, and throw my spare 400g and 120g in there as well.

eagerly awaiting the file change/delete detection update.
 
I ordered a 5-bay backplane and 3x 500g drives today :D. I'll do 1 500g parity, 2x500g data, and throw my spare 400g and 120g in there as well.

eagerly awaiting the file change/delete detection update.

Well, then you should do 3x500 for data and use the 120 + 400 for parity.
Remember, you only need as much parity space as you have actual written data.
So, you could start with only the 120GB as parity drive and add parity space later on as your data grows. Of course, you will never need more than 500GB total space for parity.
That said, you are free to use the 120GB and 400GB as data drives while having a single 500GB for parity.
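As a rough sketch of the first layout (the drive letters are placeholders, not from this thread), you could start with just the 120GB as the parity target:
Code:
	data=D:\data;E:\data;F:\data
	parity=G:\parity
and later change it to parity=G:\parity;H:\parity once your written data outgrows the 120GB.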

I personally use 2x250GB external USB drives for parity space.
This way, I shut them off and store them until I need them again for a re-synch.

I will work on the delete/edit features over the weekend. ;)
 
yeah, i suppose you're right. the single 400 will be absolutely enough for parity for now. I'd rather not use the 120 for much as its an older ide and I'm worried it will kick the bucket. plus using the 500s for data would give me much more storage space to start off with.

couple random questions:

1) in the future, suppose I replace my parity drive with a bigger one. would I replace that parity by running create.txt to create a new parity, or could I just clone the old parity drive to the new one?

2) will the new versions of FlexRAID be compatible, or will I need to recreate the array?

ah hell, I should have just ordered another small 320 or something for OS + games, this is getting ridiculous.
 
yeah, i suppose you're right. the single 400 will be absolutely enough for parity for now. I'd rather not use the 120 for much as its an older ide and I'm worried it will kick the bucket. plus using the 500s for data would give me much more storage space to start off with.

couple random questions:

1) in the future, suppose I replace my parity drive with a bigger one. would I replace that parity by running create.txt to create a new parity, or could I just clone the old parity drive to the new one?

2) will the new versions of FlexRAID be compatible, or will I need to recreate the array?

ah hell, I should have just ordered another small 320 or something for OS + games, this is getting ridiculous.

1. Just copy over the old parity files. No need for a create or rsynch. :)
2. Compatibility will always be maintained. The worst could be that you will have to run a task to migrate the metadata.

Well, if you don't want to buy an additional hard drive, you could partition some space off the 3x500GB. For instance, you could use the first 20-40GB from each drive for the OS and games.
1. 20GB OS - 480GB data1
2. 20GB Games - 480GB data2
3. 20GB Games or Others - 480GB data3

You can even decide not to partition at all and instead just create a folder on each drive for the data to be RAID'ed.
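For that folder option, a minimal hypothetical configuration (folder names made up for illustration) would simply point at one folder per drive:
Code:
	data=C:\flexdata;D:\flexdata;E:\flexdata
	parity=F:\flexparity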
 
Okay, I like this product. But, just some quick observations:

1) I like the "leaveoff" parm. Could we please get an "onlyuse" parm, and have this parm be for individual parity locations? It's more flexible than leaveoff. For example, I realize the metadata is critical, so I want to have the "create" start off my parity creation on a hardware RAIDed drive, but I want most of my parity data on just a typical drive. So, I could say "onlyuse" say 100MB (or 1GB, whatever) for my first parity folder, then the rest of the parity folders are on non-hardware-RAIDed drives, so they can just use whatever amt of space they want. (I hope this made sense. Even without my reasoning, an "onlyuse" parm would be nice for other reason as well.)

2) Ability to set where the metadata is saved in the "create" run. This would negate my reasoning for #1. I could save my metadata in a RAIDed drive, and then ALL of the parity data could go to just a normal drive. While it is true I could just move the metadata afterwards, this parm would be nice. As I said, this would make #1 not as important. ("onlyuse" still has other uses, though...)

3) I like the "splitsize" parm. The reason for this is it allows me to keep the size of parity files to a minimum, if I wish. This is nice for if I want to keep a second copy of the parity data. My thoughts on this is that if I do a verify/scan, and find a parity file is corrupt, I can use my 2nd copy of the parity file to replace it. (kind of like a RAID-6). BUT, if the parity files are too big, this could get negated too easily. Example: if I have two copies of the same parity file, and the file is 1GB, and the first copy is corrupt in the first 100MB of the file, and the second copy is corrupt in the last 100MB of the file, well, both files are corrupt. BUT, if they were 100MB in size, I would have 10 parity files now, and in the first instance only the first file is corrupt, and the second instance only the last file is corrupt. So, between the two I was still able to get a copy of the parity file that wasn't corrupt.

(I hope this made sense. Also, I know you don't need 100% good parity of all parity files to get a restore done, but I am talking WORST CASE SCENARIO here. Where you need ALL of your parity data to do a restore... Like I said, I like it as it allows me to do a RAID-6 (and beyond) type of RAID if I wish...)

4) what are the limitations on the number of paths flexraid can raid? I like the idea of just creating parity at one location for ALL of my drives in all of my fileservers. But this can easily mean >26 drives. But, it appears from what I read that the drives do not need to be mapped (can just use \\computername\drive\folder\), which is awesome. But, still, what are the limitations?

5) make sure to note that it's important that each data location have at least a temp file in it. If the path is empty, flexraid errors out. (if going "live", it's preferable that you fix this, though...)

6) I am not sure how flexraid is implemented, but also note that if someone wants to minimize their parity file size initially, they shouldn't throw all of their data in one data location. Example: with 200GB of data and four data locations, if each data location only has 50GB in it, your parity data will be 50GB. But if your first location is 197GB and the next three only have 1GB in them, your parity data is 197GB. It's not wrong in any way, but if you only have to create 50GB of parity data, it will limit the amount of time spent creating/saving that data. Just an FYI for users.

7) Will flexraid have the ability to only "heal" an area where it finds corruption? If it only finds one bad file, does it need to re-write that entire path, or can it only do that file? Also, if it finds corruption in the parity data, can that be healed, without just re-creating all of the parity data? It would be great if this could be implemented. (If it already is, I've only done minimal testing...)

I think I had other thoughts, but I don't remember them now.

I've only done minimal testing though, so if I missed something, my apologies.

But I plan to use flexraid, and it came at just the right time. My thoughts are only meant to help your progress in this great product. :)
 
Great news!

FlexRAID Basic (1.0 RC2 and up) now handles file deletes and file edits! :)
A new "validate" task is also being introduced and provides a much faster way of validating the RAID configuration using algorithms.
The "scan" and "verify" tasks will remain for bit for bit verification.

A private release of RC2 will open next weekend.
PM me if you are interested in participating in the private release.
 
awesome news. I'm still waiting on my hardware, but I'll get it all set up this week.

so, does rsynch have the 'fast' and 'slow' options like you mentioned on the avs thread?
 
1) I like the "leaveoff" parm. Could we please get an "onlyuse" parm, and have this parm be for individual parity locations? It's more flexible than leaveoff. For example, I realize the metadata is critical, so I want to have the "create" start off my parity creation on a hardware RAIDed drive, but I want most of my parity data on just a typical drive. So, I could say "onlyuse" say 100MB (or 1GB, whatever) for my first parity folder, then the rest of the parity folders are on non-hardware-RAIDed drives, so they can just use whatever amt of space they want. (I hope this made sense. Even without my reasoning, an "onlyuse" parm would be nice for other reason as well.)
Yeah, I have marked the leaveOff param as a bug that will be resolved in RC2.
"leaveOff" will apply per parity target.
I mentioned that in one of my previous posts here or at AVS forum.

2) Ability to set where the metadata is saved in the "create" run. This would negate my reasoning for #1. I could save my metadata in a RAIDed drive, and then ALL of the parity data could go to just a normal drive. While it is true I could just move the metadata afterwards, this parm would be nice. As I said, this would make #1 not as important. ("onlyuse" still has other uses, though...)
Not forcing the metadata to be in the first directory is a good idea. :)
I will offer an optional param to specify where the metadata goes.

3) I like the "splitsize" parm. The reason for this is it allows me to keep the size of parity files to a minimum, if I wish. This is nice for if I want to keep a second copy of the parity data. My thoughts on this is that if I do a verify/scan, and find a parity file is corrupt, I can use my 2nd copy of the parity file to replace it. (kind of like a RAID-6). BUT, if the parity files are too big, this could get negated too easily. Example: if I have two copies of the same parity file, and the file is 1GB, and the first copy is corrupt in the first 100MB of the file, and the second copy is corrupt in the last 100MB of the file, well, both files are corrupt. BUT, if they were 100MB in size, I would have 10 parity files now, and in the first instance only the first file is corrupt, and the second instance only the last file is corrupt. So, between the two I was still able to get a copy of the parity file that wasn't corrupt.

(I hope this made sense. Also, I know you don't need 100% good parity of all parity files to get a restore done, but I am talking WORST CASE SCENARIO here. Where you need ALL of your parity data to do a restore... Like I said, I like it as it allows me to do a RAID-6 (and beyond) type of RAID if I wish...)

Yeah, replacing the corrupted file will fix the RAID.
The worst case scenario, as you've mentioned, is that only the data recovered from the corrupted portion of the parity will itself be corrupted.

In all, I have quite a bit to do with FlexRAID in regard to data/parity corruption.
So far, FlexRAID has focused on data recovery from data loss due to drive failure or user error.
RC2 will do better in regard to data/parity corruption.

4) what are the limitations on the number of paths flexraid can raid? I like the idea of just creating parity at one location for ALL of my drives in all of my fileservers. But this can easily mean >26 drives. But, it appears from what I read that the drives do not need to be mapped (can just use \\computername\drive\folder\), which is awesome. But, still, what are the limitations?
No limitation. :)

5) make sure to note that it's important that each data location have at least a temp file in it. If the path is empty, flexraid errors out. (if going "live", it's preferable that you fix this, though...)
You will get an error only if all of the following are true:
1. you are on the Windows platform
2. the volume(s) where the paths reside contain no files at all
3. you don't have write permission to the volume(s) where the paths reside

But yeah, I should document that.
Please let me know if you are getting the error under different circumstances than I have described.

6) I am not sure how flexraid is implemented, but also note that if someone wants to minimize their parity file size initially, they shouldn't throw all of their data in one data location. Example: with 200GB of data and four data locations, if each data location only has 50GB in it, your parity data will be 50GB. But if your first location is 197GB and the next three only have 1GB in them, your parity data is 197GB. It's not wrong in any way, but if you only have to create 50GB of parity data, it will limit the amount of time spent creating/saving that data. Just an FYI for users.
Yep. The parity data size will always be as large as (but never larger than) the largest data size in any of the data source paths.
Good FYI.

7) Will flexraid have the ability to only "heal" an area where it finds corruption? If it only finds one bad file, does it need to re-write that entire path, or can it only do that file? Also, if it finds corruption in the parity data, can that be healed, without just re-creating all of the parity data? It would be great if this could be implemented. (If it already is, I've only done minimal testing...)
RC1 is limited in that a corrupted parity file would need to be replaced from backup, or else the whole RAID would have to be re-created.
Read more on the changes coming for RC2 (next release).

I think I had other thoughts, but I don't remember them now.

I've only done minimal testing though, so if I missed something, my apologies.

But I plan to use flexraid, and it came at just the right time. My thoughts are only meant to help your progress in this great product. :)
Your input is very much appreciated. :)
Quality feedback like this is what will help make FlexRAID into a product we will all enjoy using.
 
awesome news. I'm still waiting on my hardware, but I'll get it all set up this week.

so, does rsynch have the 'fast' and 'slow' options like you mentioned on the avs thread?

Sure does. :)
Also, the performance hit seems to be less than what I thought it would be.

I will do some testing to validate all this next week.
 
No limitation. :)

Great to know! :)

You will get an error only if all of the following are true:
1. you are on the Windows platform
2. the volume(s) where the paths reside contain no files at all
3. you don't have write permission to the volume(s) where the paths reside

But yeah, I should document that.
Please let me know if you are getting the error under different circumstances than I have described.

#3 didn't apply in my situation. It was just an empty folder. That appeared to be the only issue.

RC1 is limited in that a corrupted parity file would need to be replaced from backup, or else the whole RAID would have to be re-created.
Read more on the changes coming for RC2 (next release).

I can't seem to find any info on RC2. Link?


Your input is very much appreciated. :)
Quality feedback like this is what will help make FlexRAID into a product we will all enjoy using.

I'd like to help however I can! In the next few weeks I can even set up the hardware necessary to test this across a network for you! (That's how I hope to be using it...)


Oh, and I did some thinking and realized how Flexraid works. So, I did want to ask about the possibility of having it work just like raid-6 (or beyond). Like perhaps an option that asks how many levels of parity the user wishes to have? That way they could potentially run dozens of drives, and have 2+ drives fail, and have the parity available for it. It's just a thought. Because I realized what I mentioned earlier will help with having the data for a single drive failure, but could never help for a multi. I think it would be cool to have the option of creating parity for x # of drive failures. :)

And, even with no parity, I like the option of testing the drives for corruption that this will offer. Even if I cannot (at the moment) save multiple drives, at least I know who the bad ones are, and can move that data over to a good drive, replace that bad drive, and know what I need to restore from there. Knowing what's bad is half the battle! (beats having to assume it's all bad and doing a complete restore of a drive's contents...)

Oh, if I believe I know what drive is bad, is there an option in the future to just test a single data source, and not the entire raid? If I know one source is flaky, I just want to know how flaky that one is! :)

Oh, hey, that brings up a question...

Suppose I have four 750GB drives, four data drives (assume all are full), and three 250GB drives for parity. (assume for the moment ALL are full...) Now, I replace the first drive with a 1TB drive. (and move the data from the previous 750GB drive, so flexraid's unaware anything has happened...) Now, I replace the first 250GB parity drive with the 750GB drive to make it the first parity drive (again, I move the data from the first parity drive to it before I replace it, so flexraid's unaware anything has happened...)

So, I now have four data drives: 1TB, 750GBx3, and three parity drives: 750GB, 2x250GB.... Now, I put a full TB of data on the first data drive... All's fine, and there's plenty of parity drive space for the parity...

Catch is, the first 250GB of parity is on the first drive, second 250GB is on the 2nd, 3rd 250Gb is on the third... Does the first drive now have the 1st and 4th 250GB of parity data? Will flexraid do this and not care? (It's not wrong in any way, I am just curious if this might confuse flexraid...)

I like that flexraid can handle having your parity spread across multiple drives, and that you can easily expand your raid, but my example above shows how you can quickly get your parity data spread out all over the place on these multiple parity drives. (the first drive doesn't necessarily just hold the 1st range of parity data, the 2nd doesn't necessarily just hold the 2nd range of data, etc etc...)

I think for the above reason I plan to just have a single parity drive for everything, it's simple enough of a sacrifice. It certainly could end up being a waste of that drive, though...


To explain: I have a ton of 80GB and 160GB drives, and would use those for parity drives. But, as I upgrade the data drives (400GB to 750GB, 750GB to 1TB, etc etc), at some point I will need to add parity drives, or upgrade existing parity drives. I just don't want to upgrade existing parity drives, and have it cause confusion.

(I'm willing to bet it doesn't confuse flexraid at all, but I wanted to throw that thought out to see what you're aware of, and what you've tested, and maybe to see if you want that tested in the future...)

Either way, I have dozens of available drives, and multiple computer systems, that I believe I can help test for you whatever configuration you can think up that you might want tested...

(that's my invitation to test RC2 (and beyond) for you...)

I'm tired, so I hope the above made sense...
 
#3 didn't apply in my situation. It was just an empty folder. That appeared to be the only issue.

Please try to replicate that error again.
What OS are you using?

Oh, and I did some thinking and realized how Flexraid works. So, I did want to ask about the possibility of having it work just like raid-6 (or beyond). Like perhaps an option that asks how many levels of parity the user wishes to have? That way they could potentially run dozens of drives, and have 2+ drives fail, and have the parity available for it. It's just a thought. Because I realized what I mentioned earlier will help with having the data for a single drive failure, but could never help for a multi. I think it would be cool to have the option of creating parity for x # of drive failures. :)
Support for RAID 6 like functionality will be included at some point.

And, even with no parity, I like the option of testing the drives for corruption that this will offer. Even if I cannot (at the moment) save multiple drives, at least I know who the bad ones are, and can move that data over to a good drive, replace that bad drive, and know what I need to restore from there. Knowing what's bad is half the battle! (beats having to assume it's all bad and doing a complete restore of a drive's contents...)

Oh, if I believe I know what drive is bad, is there an option in the future to just test a single data source, and not the entire raid? If I know one source is flaky, I just want to know how flaky that one is! :)
Yeah, error detection is on many people's wish lists.
I will add the management features to alert and correct errors in future releases.

Oh, hey, that brings up a question...

Suppose I have four 750GB drives, four data drives (assume all are full), and three 250GB drives for parity. (assume for the moment ALL are full...) Now, I replace the first drive with a 1TB drive. (and move the data from the previous 750GB drive, so flexraid's unaware anything has happened...) Now, I replace the first 250GB parity drive with the 750GB drive to make it the first parity drive (again, I move the data from the first parity drive to it before I replace it, so flexraid's unaware anything has happened...)

So, I now have four data drives: 1TB, 750GBx3, and three parity drives: 750GB, 2x250GB.... Now, I put a full TB of data on the first data drive... All's fine, and there's plenty of parity drive space for the parity...

Catch is, the first 250GB of parity is on the first drive, second 250GB is on the 2nd, 3rd 250Gb is on the third... Does the first drive now have the 1st and 4th 250GB of parity data? Will flexraid do this and not care? (It's not wrong in any way, I am just curious if this might confuse flexraid...)
Yes, the first drive will have the 1st 250GB of parity and the 4th 250GB of parity.
FlexRAID is flexible enough to handle that. :)

(that's my invitation to test RC2 (and beyond) for you...)

I'm tired, so I hope the above made sense...

PM me so I will have you on the list for RC2.
Oh yeah, it all made sense. :)
 
This is an interesting piece of software. I use "software RAID1" (i.e. copy from drive A to drive B) to protect my backups of my coursework across two physical hard drives. This would automate it, would it not?
 
Anyone have a comparison/review versus LVM2?

LVM is a volume manager for Linux (think dynamic disk on the Windows side).
FlexRAID Live! and FlexRAID NAS "might" have volume management features (or at least exploit the well known volume managers for each targeted platform).

For now though, FlexRAID Basic 1.0 RC2 will be a big improvement over RC1.
 
This is an interesting piece of software. I use "software RAID1" (i.e. copy from drive A to drive B) to protect my backups of my coursework across two physical hard drives. This would automate it, would it not?

FlexRAID will happily mirror your data if that's what you are asking.

The web client, once out, will have all the scheduling features.
The DOS and Shell clients have no scheduling features.
 
If there's anything specific you'd like to test I'd be happy to help. I just ran create last night on my new array, currently 3x500 with a 400 for parity (for now, its nowhere near full). Took about 6 hours with my dualcore and 4gb ram, does that sound about right?

what do you think about running hdparm -S to spin down my parity drives independently? I plan on doing a full rsynch maybe once a week.
 
If there's anything specific you'd like to test I'd be happy to help. I just ran create last night on my new array, currently 3x500 with a 400 for parity (for now, its nowhere near full). Took about 6 hours with my dualcore and 4gb ram, does that sound about right?

what do you think about running hdparm -S to spin down my parity drives independently? I plan on doing a full rsynch maybe once a week.

What was the processes count?
Buffer size?
Processor make and model?
How much data in each drive?

You should be able to do at least the RAID 5 figures here:
1. http://techreport.com/articles.x/9124/7
2. http://techreport.com/articles.x/9124/9
3. http://techreport.com/articles.x/9124/10

or read the whole thing starting here: http://techreport.com/articles.x/9124/1

The Intel ICH7R and NVIDIA nForce4 are hardware RAID solutions that offload to the CPU.
It is hard to do better without a dedicated hardware XOR engine as found in expensive RAID cards.

The amount of RAM should not matter as long as you have enough for the program to run. How much is enough also depends; it can be anything from 2 or 3 MB to one or two hundred MB depending on your configuration.

You should spin down or take your drive(s) offline when not in use.
My parity drives are external hard drives, and I power them off after use.
 
Oh, I would love it if there was some sort of XOR card that I could offload to.

I am planning on contacting several manufacturers and pitching them the idea.
FlexRAID branded hardware accelerator cards. ;)

Right now, I am exploring the possibility of offloading to a GPU, as many current video cards have GPUs with an XOR engine.
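For anyone curious what that XOR actually is: single parity is just a byte-wise exclusive-or across the data blocks. A purely illustrative Java sketch (not FlexRAID's actual code) of creating parity and recovering a lost block:
Code:
	public final class XorParityDemo {
	    // out[i] = blocks[0][i] ^ blocks[1][i] ^ ... (blocks assumed equal length)
	    static byte[] xorAll(byte[]... blocks) {
	        byte[] out = new byte[blocks[0].length];
	        for (byte[] b : blocks) {
	            for (int i = 0; i < b.length; i++) {
	                out[i] ^= b[i];
	            }
	        }
	        return out;
	    }
	
	    public static void main(String[] args) {
	        byte[] d1 = {1, 2, 3}, d2 = {4, 5, 6}, d3 = {7, 8, 9};
	        byte[] parity = xorAll(d1, d2, d3);      // create the parity block
	        byte[] rebuilt = xorAll(parity, d1, d3); // recover a lost d2 from parity + survivors
	        System.out.println(java.util.Arrays.equals(rebuilt, d2)); // prints true
	    }
	}
This per-byte work is what a dedicated XOR engine (or a GPU) would take off the CPU.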
 
as per my sig, a Windsor @ 2.4 with 4 gigs of ram. I left the buffer at the default and set processes to 2, I think. It was definitely not using both my cores fully, though. I did the math with those graphs, and the numbers seem to work out, assuming I started with 200gb of data across 3 drives.

I do plan on spinning down parity/data drives, but I would like to have the OS drive set to a much longer timer, hence why I need individual control of hdparm. Ideally, I'd like full power control over each drive (i.e. a hotswap bay; the one I got didn't fit :( ) instead of having to go into the case and unplug them (to prevent them from spinning up unnecessarily during shutdown/etc.)

edit: you still looking for testers for rc2?
 
edit: you still looking for testers for rc2?

Yeah, but it won't be available till the end of this weekend or maybe next week.
PM me so I have you on the list.

I am putting it through some intensive testing in order to assess its stability.
 
LVM is a volume manager for Linux (think dynamic disk on the Windows side).
FlexRAID Live! and FlexRAID NAS "might" have volume management features (or at least exploit the well known volume managers for each targeted platform).

For now though, FlexRAID Basic 1.0 RC2 will be a big improvement over RC1.

Yep, but LVM does have mirroring & striping features and is SO flexible. How flexible is FlexRAID compared to LVM, or do they have different goals?
 
Yep, but LVM does have mirroring & striping features and is SO flexible. How flexible is FlexRAID compared to LVM, or do they have different goals?

LVM does not provide RAID, it supports it.
Again, LVM is just a volume manager just like dynamic disks on the Windows side.

You can RAID your dynamic volumes, but the volume manager does not provide the RAID feature.

So, is your question about how FlexRAID compares to Linux RAID + LVM?

Well:
1. Dynamic disks are as dangerous as it gets (much harder to recover from, if ever)
2. Linux RAID + LVM won't get you the power savings that FlexRAID gets you
3. FlexRAID is far more flexible

In FlexRAID vs. Linux RAID + LVM, the true answer is: whatever best fits your needs.
 
FlexRAID will happily mirror your data if that's what you are asking.

The web client, once out, will have all the scheduling features.
The DOS and Shell clients have no scheduling features.

Yes, this is what I'm looking for.
 
Update: 04-18-2008
FlexRAID 1.0 RC2 is now ready for a private beta test.

PM me if you want to participate in the beta test.
Those that already PM'ed me will get the download info shortly.

Change list:
- File delete, edit, move, and rename are now fully supported.
- A new "validate" task has been added and provides a much faster way of validating the RAID.
- Data corruption detection added (errors from failing hard drive or other sources - self-healing feature is still in the works).
- Power-users can choose the validation fingerprint strength (checksum and/or digest implementation).
- Users can now specify where to put the metadata file when initializing the RAID.
- The "metadata" property can now refer to a directory (in which case, "flxr.meta" will be used as file name) or the full path to the metadata file.
- The "leaveOff" property can specify a value for each parity target.
Code:
	parity=/path1;/path2;/path3
	leaveOff=50MB;0;20GB
50MB will be left off on path1, 0 bytes on path2, and 20GB on path3

Note: The leaveOff property really applies per volume/partition.
If all 3 paths were on the same volume/partition, 20.05GB would be left off on that volume.

Warning: you should always leave off some space (1 or 2MB) on each parity volume/partition due to how the various filesystems handle written data.
It is possible to get away with leaving 0 bytes or just a few KB, but that varies widely per filesystem.
Users RAID'ing large amounts of data (terabytes+) should play it safe and leave at least 10MB on each volume hosting the parity data.

The leaveOff property is optional and the default behavior is to leave off 10MB on each volume/partition hosting the parity data.
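Putting the new options together, a hypothetical RC2 configuration might look like this (all paths are placeholders; only the property names come from the change list above):
Code:
	data=D:\data;E:\data;F:\data
	parity=G:\parity;H:\parity
	metadata=C:\flexraid\flxr.meta
	leaveOff=10MB;10MB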

Known issues:
- Program menu missing on Vista 64 bit (will investigate once I get a trial copy of Vista 64 bit)

Planned features (short-term):
- Offloading XOR computation to GPU (graphic card) or any other potential hardware
- Self-healing feature
- Web client (for advanced management features and scheduling)
 
LVM does not provide RAID, it supports it.
Again, LVM is just a volume manager just like dynamic disks on the Windows side.

You can RAID your dynamic volumes, but the volume manager does not provide the RAID feature.

So, is your question about how FlexRAID compares to Linux RAID + LVM?

Well:
1. Dynamic disks are as dangerous as it gets (much harder to recover from, if ever)
2. Linux RAID + LVM won't get you the power savings that FlexRAID gets you
3. FlexRAID is far more flexible

In FlexRAID vs. Linux RAID + LVM, the true answer is: whatever best fits your needs.

LVM DOES do mirrors. man lvcreate. It also allows you to expand volumes, do snapshots, etc.
 
LVM DOES do mirrors. man lvcreate. It also allows you to expand volumes, do snapshots, etc.

True, LVM2 does have experimental mirroring support.
Snapshots in LVM are simply incremental backups (read: data deltas).

Again, most people will use Linux RAID on top of LVM.
And yes, as a volume manager, it does all sort of tricks with dynamic volumes.

FlexRAID can be used on top of LVM just like it can be used on top of dynamic volumes on Windows. :)
 