<p dir="ltr">This is really interesting.<br>
Thanks for replying with the system specs. Honestly, I don't see anything too odd in your specs or in your zpool info.</p>
<p dir="ltr">What kernel version are you running? </p>
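<p dir="ltr">If it's easier, something like this will show both the kernel and the ZFS-on-Linux module version (the /sys path is only there once the zfs module is loaded):</p>
<pre>
# running kernel version
uname -r

# ZFS on Linux module version (present once the zfs module is loaded)
cat /sys/module/zfs/version
</pre>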
<p dir="ltr">-><br>
</p>
<div class="gmail_quote">On Feb 8, 2014 8:53 PM, <<a href="mailto:tclug@freakzilla.com">tclug@freakzilla.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Sat, 8 Feb 2014, Jake Vath wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Can you give us your system specs?<br>
</blockquote>
<br>
<br>
Yeah... I've used ZFS on Solaris in a professional capacity, but ZFS on Linux at home is very new to me. I also haven't had to do a lot of crazy things on the Solaris side, so I'm far from an expert. I really went with ZFS because ext3/ext4 couldn't handle a filesystem over the 16TB limit, and setting up ZFS pools was dead simple... or so I thought. There's a whole mess of threads about it from a few months back.<br>
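<br>
For context, setting up a pool like this one is a single command along these lines (the device names and the raidz level here are illustrative, not exact):<br>
<pre>
# illustrative pool creation; DISK1..DISK8 stand in for the real
# /dev/disk/by-id/ names, and the vdev layout is approximate
zpool create -o ashift=12 media raidz2 DISK1 DISK2 DISK3 DISK4 DISK5 DISK6 DISK7 DISK8
</pre>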
<br>
I'm not sure which system specs are relevant to this, but here goes, anyway...<br>
<br>
It's a home-built machine (obviously). Asus F2A55-M/CSM motherboard with an AMD A10-5800K quad-core CPU @ 3.8GHz.<br>
<br>
It has 16GB of RAM; I'm not sure what speed I got, but likely DDR3/1600. The motherboard can support up to 64GB if I need it. Right now, top and free show that not all of it is in use, and none of the swap space is being used.<br>
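<br>
(If it's useful, a chunk of whatever shows as "used" will just be the ZFS ARC; on ZFS on Linux its current size is readable directly:)<br>
<pre>
# current ARC size, in bytes (ZFS on Linux)
grep '^size' /proc/spl/kstat/zfs/arcstats
</pre>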
<br>
The array is using eight Western Digital Red 3TB drives. Each disk was tested before being put in the array (that took forever). They're in an external enclosure that uses two port multipliers (so four drives each), which are plugged into eSATA ports that are basically plugged into the system's regular SATA III ports.<br>
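<br>
(The per-disk testing was along the lines of the usual SMART long self-test; exact invocation approximate:)<br>
<pre>
# per-disk check before adding to the array (smartmontools);
# /dev/sdX is a placeholder for each drive in turn
smartctl -t long /dev/sdX
# ...wait for the test to finish, then review the results
smartctl -a /dev/sdX
</pre>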
<br>
Right now, with everything reading/writing to the pool disabled except a scrub, the load average is 1.93. With streaming to/from the disk it climbs into the 3-4 range and even higher. Top shows a lot of zfs processes running (z_rd_int) and kworker. Those processes are the only things taking up any CPU time, but none of them uses a really high percentage.<br>
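<br>
(If the scrub details help, progress and per-device state come straight from zpool status:)<br>
<pre>
# scrub progress/ETA plus per-device errors for the pool
zpool status -v media
</pre>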
<br>
<br>
Here's my zpool get all:<br>
<br>
<br>
<pre>
NAME   PROPERTY               VALUE                 SOURCE
media  size                   21.8T                 -
media  capacity               41%                   -
media  altroot                -                     default
media  health                 ONLINE                -
media  guid                   10980099153164009168  default
media  version                -                     default
media  bootfs                 -                     default
media  delegation             on                    default
media  autoreplace            off                   default
media  cachefile              -                     default
media  failmode               wait                  default
media  listsnapshots          off                   default
media  autoexpand             off                   default
media  dedupditto             0                     default
media  dedupratio             1.00x                 -
media  free                   12.7T                 -
media  allocated              9.07T                 -
media  readonly               off                   -
media  ashift                 12                    local
media  comment                -                     default
media  expandsize             0                     -
media  freeing                0                     default
media  feature@async_destroy  enabled               local
media  feature@empty_bpobj    enabled               local
media  feature@lz4_compress   enabled               local
</pre>
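<br>
(That's pool-level only; dataset-level settings like compression are separate, and come from zfs get if they'd help, e.g.:)<br>
<pre>
# dataset-level properties; lz4_compress being enabled at the pool
# level doesn't say whether compression is actually on for the datasets
zfs get compression,recordsize,atime media
</pre>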
<br>
<br>
_______________________________________________<br>
TCLUG Mailing List - Minneapolis/St. Paul, Minnesota<br>
<a href="mailto:tclug-list@mn-linux.org" target="_blank">tclug-list@mn-linux.org</a><br>
<a href="http://mailman.mn-linux.org/mailman/listinfo/tclug-list" target="_blank">http://mailman.mn-linux.org/<u></u>mailman/listinfo/tclug-list</a><br>
</blockquote></div>