Provide I/O benchmarks compared to unraid. Establish a performance baseline for future driver updates #64
Replies: 3 comments 16 replies
The performance compared to a "real" UnRAID array on the same hardware should be equal - this is the same kernel driver running the array after all. Likewise the write penalty is the same, and as mentioned in the linked reddit discussion, the default write method is the read-modify-write cycle, but it can be changed to "turbo mode" ie. reconstruct write, where the write performance is better, as it doesn't need to do multiple seeks on the same disk, but as a downside all array disks must be spinning. I can try to run some simple validation performance tests at some point, testing disk write speeds when part of array and then "raw", but I don't think I personally will be able to test this with UnRAID. Like I mention in the project readme, I haven't actually used or even installed UnRAID ever, and I don't feel comfortable doing these kind of tests with a trial license. All NonRAID development has been based on the open source code distributed inside the install packages, and public docs + forum posts. For what it's worth, I've been running NonRAID "in production" in my own Ubuntu based diy NAS for a while now, without any issues and performance has been "what I expect" from spinning rust. If you need more I/O performance, then you can do separate "write cache disks" (like UnRAID has also) with mergerfs, and have a mover script handling moving the data to array disks, like: https://github.com/monstermuffin/mergerfs-cache-mover (That's something that I'm interested in trying out, but don't currently have hardware for.) |
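For anyone who wants to try the "array vs. raw" comparison themselves, here is a minimal sequential-write sketch. Assumptions: a POSIX shell with `dd`; `TARGET` here defaults to a temp file so the script is safe to run anywhere, but for a real test you would point it at a file on an array disk, then at the same disk mounted outside the array. The commented `mdcmd` lines show how UnRAID toggles the write method; I'm assuming NonRAID exposes the same knob, so treat those as a sketch rather than a verified command.

```shell
#!/bin/sh
# Minimal sequential-write benchmark sketch.
# TARGET defaults to a temp file; for a real test, point it at a file
# on an array disk and re-run with the disk "raw" for comparison.
TARGET="${TARGET:-/tmp/nonraid-write-test.bin}"

# Write 256 MiB; conv=fsync flushes to the device before dd exits,
# so the reported speed includes the actual disk write.
dd if=/dev/zero of="$TARGET" bs=1M count=256 conv=fsync 2>&1 | tail -n 1

# To compare default RMW writes against reconstruct write ("turbo mode"),
# toggle md_write_method between runs. This is how UnRAID does it
# (ASSUMPTION: NonRAID uses the same interface):
#   mdcmd set md_write_method 1   # reconstruct write
#   mdcmd set md_write_method 0   # read-modify-write (default)

rm -f "$TARGET"
```

Run it a few times and take the median; a single run can be skewed by caching or other I/O on the box.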
@qvr thanks for the reply! Based on your suggestion I am trying to run some experiments, but I ran into an issue and need your help. I am trying to use linux-bcache (/dev/bcacheN layers) as the storage disks for the array, which I speculate may give us NVMe speeds on the entire array without dealing with pesky scripts moving data with mergerfs. I'm unable to create an array using /dev/bcache0p1 because it is not a physical disk and gets filtered by the check at Lines 2659 to 2681 in 378d6f6. My public notes on the experiment (so far): TheLinuxGuy/free-unraid@ea59101
@qvr pvcreate doesn't recognize the nmdxp1 block device, so I had to create a partition table on it, plus a partition (/dev/nmdxp1p1). I suppose that won't create an issue?
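For reference, the sequence I'd expect here is roughly the following. This is a sketch against a throwaway image file so it's safe to run anywhere; the device name `/dev/nmdxp1` is taken from the message above, and the final `pvcreate` step (commented out) is what you'd run on the real partition, assuming lvm2 is installed.

```shell
#!/bin/sh
# Sketch: put a GPT label and one full-size partition on a device,
# then hand the partition to LVM. Demonstrated on a scratch image
# file instead of the real /dev/nmdxp1 device.
IMG=/tmp/nmd-part-demo.img
truncate -s 64M "$IMG"

# One GPT partition spanning the whole device (sfdisk is part of util-linux).
printf 'label: gpt\n,,\n' | sfdisk --quiet "$IMG"

# Show the resulting partition table.
sfdisk --list "$IMG"

# On the real device the equivalent would be (ASSUMPTION - adjust names):
#   printf 'label: gpt\n,,\n' | sfdisk /dev/nmdxp1
#   pvcreate /dev/nmdxp1p1

rm -f "$IMG"
```

Putting a partition table on the nmd device and running pvcreate on the partition rather than the whole device is the conventional layout, so by itself I wouldn't expect it to cause a problem.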
Hey, I stumbled across this project and honestly it feels like it could be the missing piece of the “golden” NAS puzzle (https://github.com/TheLinuxGuy/free-unraid). Big thanks for putting this out there and sharing it!
A couple thoughts/ideas I wanted to throw in for consideration:
• It would be awesome to see some baseline benchmarks: running the current driver on the same hardware (disks, CPU, etc.) compared to Unraid. That way we'd have a solid baseline to measure future driver improvements against, and also see how it stacks up against Unraid.
Also curious — does the same ~33% write penalty that Unraid has show up here too? (ref: https://www.reddit.com/r/unRAID/comments/1fvnmpn/comment/lqaupno/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)