I'm thinking of using ZFS for my home-made NAS array. I would have 4 HDDs in raidz on an Ubuntu Server 10.04 machine.
I'd like to use the snapshot capability and dedup when storing data. I'm not too concerned about speed, since the machine is accessed over an 802.11n wireless network, which is probably going to be the bottleneck anyway.
So does anyone have practical experience with zfs-fuse 0.6.9 on such a (or a similar) configuration?
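For reference, the layout described above would be set up with the standard `zpool`/`zfs` commands that zfs-fuse also ships; the pool name and device paths below are placeholders, so substitute your own:

```shell
# Create a single-parity raidz pool from the four drives
# ("tank" and the /dev/sd* names are placeholders)
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Dedup is a per-dataset property; enable it only where it pays off
zfs create tank/data
zfs set dedup=on tank/data

# Snapshots are cheap and independent of dedup
zfs snapshot tank/data@2010-07-01
zfs list -t snapshot
```

Note that `dedup=on` requires a pool version that supports it, so check what your zfs-fuse build actually provides before relying on it.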
-
Why don't you simply use OpenSolaris?
You get everything you need and the best performance.
Mavrik : The main reason is that the machine I'm trying to install the system on has an incompatible network card and SATA controller. Plus, I've had bad experiences getting some less-supported software (like uPnP sharing, etc.) to compile and work on Solaris systems.
joschi : Have you checked whether your hardware is supported by newer releases of OpenSolaris? Compiling your own software is more of a userland problem. You should check out Nexenta Core (http://www.nexenta.org/), which uses the OpenSolaris kernel with a GNU userland (based on Ubuntu Linux) on top.
From Pier -
I have two 500GB drives in a zfs-fuse mirror setup on my home NAS (Debian Lenny). It has been running for almost 6 months now, and I have not had problems. More details here on my blog.
From Wim Coenen -
There is now a native linux port of ZFS. I only learned of this recently, and as such have not had a chance to test it. It is under active development, though, which is a good sign. It's probably worth trying, as long as you're not scared off by having to compile the kernel module and tools for yourself.
If you can get it working, it will, without a doubt, perform much better than zfs-fuse does.
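The native port follows the usual two-stage build (the SPL compatibility layer first, then the ZFS modules and tools). The repository locations and exact steps below are from memory and may have changed, so treat this as a rough sketch and follow the project's own README:

```shell
# Illustrative build flow for the native ZFS-on-Linux port.
# Repo URLs and steps are assumptions -- check the project docs first.
git clone https://github.com/zfsonlinux/spl.git
cd spl
sh autogen.sh && ./configure && make && sudo make install
cd ..
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
sh autogen.sh && ./configure && make && sudo make install

# Load the kernel module once everything is installed
sudo modprobe zfs
```

You'll need the kernel headers for your running kernel installed, and you'll have to rebuild the modules whenever the kernel is upgraded.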
From ErikA -
I ran ZFS-FUSE under Ubuntu for nearly a year without any issues before migrating the pool to OpenSolaris. That said, the memory requirements for dedup on a multi-TB pool will likely exceed the memory of your home Linux server. Dedup performance is terrible once your deduplication tables spill out of the ARC (primary memory cache), unless you have an SSD as L2ARC to keep them readily available. Without the dedup tables in memory, a number of operations become unbelievably slow (deleting a directory full of files, destroying snapshots, etc.). Snapshots work fine without dedup and have nearly no overhead on their own, so unless you're storing a lot of redundant data and have 8-16GB of RAM and/or an SSD to throw at the problem, I'd skip dedup.
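To put a rough number on that memory requirement: a commonly cited back-of-the-envelope figure is on the order of 320 bytes of in-core dedup table per unique block. The figure and the 128K average block size below are assumptions for estimation only; real DDT entry sizes vary:

```python
def ddt_ram_bytes(pool_bytes, avg_block_bytes=128 * 1024, entry_bytes=320):
    """Rough in-core dedup table size: one ~320-byte entry per unique block.

    Both avg_block_bytes and entry_bytes are ballpark assumptions, not
    exact ZFS internals; smaller average blocks inflate the estimate fast.
    """
    return (pool_bytes // avg_block_bytes) * entry_bytes

TiB = 1024 ** 4
# 4 TiB of unique data at 128K blocks -> about 10 GiB of RAM just for the DDT
print(ddt_ram_bytes(4 * TiB) / 1024 ** 3)  # → 10.0
```

That's already more RAM than most home servers of this era have, which is why keeping the tables on an L2ARC SSD (or skipping dedup) matters so much.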
From notpeter -
There's also a 3+ year old bug in the ZFS ARC that still persists!
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6522017
(This one is nasty, as the ARC will also grow out of bounds of a hypervisor's VM memory limits!)
Have no idea if ZFS-fuse addresses this one...
ErikA : Once again, see my comment over here: http://serverfault.com/questions/144639/best-filesystem-choices-for-nfs-storing-vmware-disk-images/163972#163972
Ditto - http://serverfault.com/questions/162693/my-opensolaris-server-hangs-when-writing-large-files-after-upgrading-zpool/163955#163955