WIP #2
base: master2
Conversation
usr/src/cmd/zpool/zpool_main.c
Outdated
#ifndef MAX
#define	MAX(x, y) ((x) > (y) ? (x) : (y))
#endif	/* MAX */
can we get this from some header file?
No idea why I didn't find it on my first search; turns out it sits in <sys/sysmacros.h> and isn't ifdef'ed to _KERNEL, so I'll pull it in from there.
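For illustration, a minimal sketch of relying on the system header instead of a local definition; the helper function here is made up, the only point being that <sys/sysmacros.h> already provides MAX() outside of _KERNEL:

```c
#include <sys/sysmacros.h>	/* provides MAX() in userland as well */

/* Hypothetical helper, just to show MAX() without a local #define. */
static unsigned int
widest_column(unsigned int a, unsigned int b)
{
	return (MAX(a, b));
}
```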
usr/src/man/man5/zpool-features.5
Outdated
set to \fBdisabled\fR, scrub and resilver process data in logical object
block order - this is analogous to opening a file and simply reading it
from start to finish in sequence. This approach is sensitive to how well
sequentially the data is layed out on the pool. If the data is
I think we can remove "well".
Typo: "layed" should be "laid".
usr/src/man/man5/zpool-features.5
Outdated
or host system reboot. Instead, the algorithm takes "checkpoints" at
approximately 1 hour intervals. If the pool is exported or the host
system reboots, the operation will be resumed from the last of these
checkpoints.
This is great info. I think we should copy or move the description of scrubbing to the zpool manpage (perhaps in the section on the zpool scrub subcommand). Keep in mind that the zpool-features manpage is primarily for understanding when/why the feature should be enabled. Users that already have the feature enabled are unlikely to visit this manpage to understand scrubbing (and in several years, this will be most users).
 * objects of at least sizeof (range_seg_t). The range tree will use
 * the start of that object as a range_seg_t to keep its internal
 * data structures and you can use the remainder of the object to
 * store arbitrary additional fields as necessary.
Dictating that it be at the start is a little restrictive. I think it would be more general if we could pass in the offset to the range_seg_t instead.
This will be removed with the integration of ss_fill tracking into range_tree itself.
 * allocations. This is useful for cases when close proximity of
 * allocations is an important detail that needs to be represented
 * in the range tree. See range_tree_set_gap(). The default behavior
 * is not to bridge gaps (i.e. the maximum allowed gap size is 0).
I want to understand why this can't be done by the caller (or why it would be much worse). Hopefully it becomes clear as I read the rest of the code.
This will be removed with the integration of ss_fill tracking into range_tree itself.
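To make the gap-bridging behavior concrete, here is a small self-contained sketch (not the PR's actual range_tree code; names are illustrative) of the merge decision, where a maximum gap of 0 reduces to requiring exact adjacency:

```c
#include <sys/types.h>	/* uint64_t, boolean_t on illumos */
#include <assert.h>

typedef struct {
	uint64_t rs_start;	/* inclusive */
	uint64_t rs_end;	/* exclusive */
} seg_t;

/*
 * Decide whether segment b, which begins at or after a's end, should
 * be coalesced with a.  With max_gap == 0 (the default noted above),
 * only exactly adjacent segments merge.
 */
static boolean_t
segs_bridgeable(const seg_t *a, const seg_t *b, uint64_t max_gap)
{
	assert(b->rs_start >= a->rs_end);
	return (b->rs_start - a->rs_end <= max_gap ? B_TRUE : B_FALSE);
}
```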
usr/src/uts/common/fs/zfs/dsl_scan.c
Outdated
 * 1) it must NOT be an embedded BP
 * 2) it must have no more than 1 DVA
 * 3) it must be a level=0 (leaf) block, otherwise we need to
 *    read it right away to use it in metadata traversal
this restriction (level==0) shouldn't be necessary
Removed, new code can reorder any level.
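As a hedged sketch of the criteria quoted above (the macros are the standard ones from sys/spa.h; the function name is made up, and the level check is the restriction that was just removed):

```c
#include <sys/spa.h>

static boolean_t
scan_bp_sortable(const blkptr_t *bp)
{
	if (BP_IS_EMBEDDED(bp))		/* 1) no embedded BPs */
		return (B_FALSE);
	if (BP_GET_NDVAS(bp) > 1)	/* 2) no more than one DVA */
		return (B_FALSE);
	if (BP_GET_LEVEL(bp) != 0)	/* 3) leaf blocks only (later relaxed) */
		return (B_FALSE);
	return (B_TRUE);
}
```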
usr/src/uts/common/fs/zfs/dsl_scan.c
Outdated
		avl_node_t qzio_addr_node;
		list_node_t qzio_list_node;
	} qzio_nodes;
} qzio_t;
Let's rename this since it isn't any sort of zio.
Not sure about this, but consider embedding the qblkptr_t's fields into this structure.
usr/src/uts/common/fs/zfs/dsl_scan.c
Outdated
typedef struct {
	range_seg_t ss_rs;
	avl_node_t ss_size_node;
	uint64_t ss_fill;
ss_fill should be implemented in the same layer as the range tree gap code (i.e. in the range_tree itself, unless we want to move the gap code into the caller as well).
I'll move the implementation to be internal to range_tree.
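A rough sketch of what fill tracking internal to the range tree might look like; the field and function names are assumptions, not the final code:

```c
#include <sys/types.h>
#include <sys/avl.h>
#include <assert.h>

typedef struct range_seg {
	avl_node_t	rs_node;	/* by-offset AVL linkage */
	uint64_t	rs_start;
	uint64_t	rs_end;
	uint64_t	rs_fill;	/* bytes actually present in [start, end) */
} range_seg_t;

/*
 * With the counter inside the segment, the tree can account fill on
 * every add, and callers no longer wrap range_seg_t just to carry it.
 */
static void
range_seg_add_fill(range_seg_t *rs, uint64_t start, uint64_t size)
{
	assert(start >= rs->rs_start && start + size <= rs->rs_end);
	rs->rs_fill += size;
}
```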
usr/src/uts/common/fs/zfs/dsl_scan.c
Outdated
ASSERT(scn->scn_is_sorted);

if (scn->scn_phys.scn_state == DSS_FINISHING ||
    scn->scn_checkpointing || shutdown) {
shutdown: will this take a long time?
Turns out this mechanism is a leftover from some older iterations, so I'll kill the whole "shutdown" thing.
usr/src/uts/common/fs/zfs/dsl_scan.c
Outdated
 * to parallelize processing of all top-level vdevs as much as possible.
 */
static void
dsl_scan_queues_run_one(queue_run_info_t *info)
we may need to make this issue a few (say 1000) zio_t's to each device before moving on to the next device. This will ensure that we keep all devices busy even if it takes a bunch of CPU to issue.
Ok, I'll rewrite the queue handling so that we create a taskq with "ncpus" worth of threads and then divide the vdevs we need to handle evenly between them.
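A hedged sketch of that arrangement, assuming a per-vdev runner; only taskq_create()/taskq_dispatch() and the vdev tree walk are stock illumos interfaces, everything else (names, sizing) is illustrative:

```c
#include <sys/taskq.h>
#include <sys/spa_impl.h>
#include <sys/vdev_impl.h>

/* Placeholder runner for one top-level vdev. */
static void
scan_io_queue_run_one(void *arg)
{
	vdev_t *vd = arg;

	/* drain vd's sorted queue here, issuing the scrub reads (omitted) */
	(void) vd;
}

static void
scan_io_queues_run(spa_t *spa)
{
	vdev_t *rvd = spa->spa_root_vdev;
	taskq_t *tq;
	uint64_t i;

	/* roughly one worker per CPU, so issuing keeps every device busy */
	tq = taskq_create("dsl_scan_issue", boot_ncpus, minclsyspri,
	    1, INT_MAX, TASKQ_PREPOPULATE);

	for (i = 0; i < rvd->vdev_children; i++) {
		(void) taskq_dispatch(tq, scan_io_queue_run_one,
		    rvd->vdev_child[i], TQ_SLEEP);
	}

	taskq_wait(tq);		/* all per-vdev runners done */
	taskq_destroy(tq);
}
```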
 * one DVA present (copies > 1), because there's no sensible way to sort
 * these (how do you sort a queue based on multiple contradictory
 * criteria?). So we exclude those as well. Again, these are very rarely
 * used for leaf blocks, usually only on metadata.
Pardon an ignorant question, but why the restriction on multiple DVAs? Shouldn't it be possible to put each DVA separately into the queues and sort by linear address as usual? (That is, make dsl_scan_queue_insert call bp2qio once for each DVA?) As it stands, it seems that this will not sort anything from copies=2 datasets, e.g.?
The reason for this is that we would need to split up the blkptr_t and create several "fake" 1-DVA ones to pass to zio_read(), because otherwise zio_read() handles all DVAs at the same time. At this stage, it was deemed more hassle than it's worth, given that using copies=2 for large datasets is quite rare. I dunno, maybe it's a trivial change. I'll give this some more thought.
I think the change would be pretty small, and worthwhile because it would speed up the traversal (block discovery). You would need to create a qzio_t (or whatever we're calling them now) for each DVA.
I've renamed it to "scan_qio_t" - hope that's an acceptable name.
And I can confirm that it works, I just did a quick prototype. It even improves performance a little, exactly as you had predicted.
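For reference, the prototype's shape is roughly the following sketch; bp2sio() and scan_io_queue_insert() are placeholder names for the conversion and insertion helpers, not necessarily the final ones:

```c
/*
 * One queue entry per DVA, so copies>1 block pointers are still
 * sorted; each entry later becomes a single-DVA read.
 */
static void
dsl_scan_enqueue_bp(dsl_scan_t *scn, const blkptr_t *bp,
    const zbookmark_phys_t *zb, int zio_flags)
{
	for (int d = 0; d < BP_GET_NDVAS(bp); d++) {
		scan_io_t *sio = bp2sio(bp, d, zb, zio_flags);	/* placeholder */
		scan_io_queue_insert(scn, sio);			/* placeholder */
	}
}
```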
Seems like what we're enqueuing is a DVA, so it might make sense to include that in the name, e.g. scan_dva_t?
We're queuing more than that, it's actually a whole set of parameters needed to construct a zio, hence why I wanted to include "io" somewhere in the name while keeping it reasonably short.
It isn't a big deal so I won't insist, but here's my thinking:
The big picture of this is that we are collecting the block pointers and then later issuing the scrub i/os (but actually we are collecting each DVA separately). This data structure is used to collect the BP's / DVA's. It's true that it tells us that we want to later issue a zio for the DVA that it holds. I guess it's a matter of opinion which aspect is more relevant to naming the structure.
> whole set of parameters needed to construct a zio
Specifically, it's:
- the abbreviated BP
- the bookmark
- the zio_flags, which are not really needed because they are constant for a given scan.
I'd argue that's essentially "the BP" (or "the DVA"), and the bookmark is not conceptually important (e.g. it could be omitted and the impact would only be reduced precision of error reporting).
I personally lean in favor of even dropping the "q" from the name and just calling it a scan_io_t. I'm currently working on renaming some of the internal functions, because many of the names are confusingly similar (e.g. "dsl_scan_queue_insert" for the block sorting queues and "scan_queue_insert" for the queue datasets to examine). I'd like to rename stuff having to do with the block sorting queues to "scan_io_queue_..." and the dataset queue to "scan_ds_queue_...".
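Putting the thread together, the queued entry would look roughly like the sketch below. The fields come from the diff hunks in this review (DVA, blk_prop copy, birth times, checksum, bookmark, zio flags, and the AVL/list linkage); the exact layout and names are an assumption:

```c
typedef struct scan_io {
	dva_t		sio_dva;	/* the single DVA to scrub */
	uint64_t	sio_blk_prop;	/* copy of the BP's blk_prop */
	uint64_t	sio_phys_birth;
	uint64_t	sio_birth;
	zio_cksum_t	sio_cksum;
	int		sio_flags;	/* zio flags, constant for a given scan */
	zbookmark_phys_t sio_zb;	/* bookmark, for error reporting */
	union {
		avl_node_t	sio_addr_node;	/* sorted by on-disk offset */
		list_node_t	sio_list_node;	/* while being issued */
	} sio_nodes;
} scan_io_t;
```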
usr/src/uts/common/fs/zfs/dsl_scan.c
Outdated
	dsl_scan_visitds(scn,
	    dp->dp_origin_snap->ds_object, tx);
}
ASSERT(!scn->scn_pausing);
Forgive another ignorant question, but, while I recognize that this is not a change introduced here, it's not obvious to me why this assertion holds? I don't see anything, from a few minutes look at dsl_scan_visitds that would ensure that we don't set this flag in this particular case?
We can pause when visiting block pointers (from dsl_scan_visitbp()). The dp_origin_snap does not have any block pointers, so we can't pause while visiting it. Visiting it serves only to add its "next clones" to the work queue.
Force-pushed 2cacc87 to 2f86f9c
	uint64_t sio_prop;
	uint64_t sio_phys_birth;
	uint64_t sio_birth;
	zio_cksum_t sio_cksum;
When ZFS encryption is integrated, we'll need more fields here. (IIRC, at least blk_fill and dva[2])
When that happens, we can make the structure somewhat polymorphic to allow for additional fields in case the block is encrypted.
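A hedged sketch of one possible shape for that: encrypted blocks carry the extra fields mentioned above (blk_fill and the dva[2] slot), while plain blocks keep the smaller form. All names here are illustrative:

```c
typedef struct scan_io_crypt {
	scan_io_t	sioc_common;	/* must come first so queues treat both alike */
	uint64_t	sioc_blk_fill;
	dva_t		sioc_dva2;	/* the dva[2] slot used by encryption */
} scan_io_crypt_t;
```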
usr/src/uts/common/fs/zfs/dsl_scan.c
Outdated
typedef struct scan_io {
	dva_t sio_dva;
	uint64_t sio_prop;
this is a copy of blk_prop, right? Maybe we should name it sio_blk_prop to make that extra clear (as opposed to some other properties).
Renamed.
	dva_t sio_dva;
	uint64_t sio_prop;
	uint64_t sio_phys_birth;
	uint64_t sio_birth;
Do you remember why we need both logical and physical births for the scrub io? It would be nice to have a comment somewhere explaining that, so that nobody tries to "optimize" it later.
No idea, really. All I did was store the crucial bits that originally went into constructing the zio_read(). I didn't really give much thought to the birth numbers. I'll try to hunt down the exact reason later.
usr/src/uts/common/fs/zfs/dsl_scan.c
Outdated
 */
bzero(&scn->scn_phys.scn_bookmark, sizeof (zbookmark_phys_t));

/* keep pulling things out of the zap-object-as-queue */
update comment
Done.
Force-pushed d4a1e8e to e8afe15
Force-pushed 29443c4 to 8077bda
I went to test this on a complicated pool in the middle of a resilver and hit an assertion failure (output elided); gdb says that this occurred thusly (backtrace elided). Suggestions for what to look at? Is it possible that I am violating some invariant by invoking spa_scan directly?
@nwf Directly calling spa_scan shouldn't be giving you any trouble - worst case is you'll get rejected with EBUSY. Why this occurred is a bit of a mystery to me. That assertion failure in avl.c is due to an inconsistency in the tree - this would indicate that somebody else is manipulating that tree in parallel without locking it. This shouldn't happen, because dsl_scan_io_queue_destroy is only invoked from syncing context.
Because my line numbers are probably unique to me, "../../module/zfs/dsl_scan.c:2004" is the call to dsl_scan_done() inside the test for dsl_scan_restarting() in dsl_scan_sync(). Lemme run this with ZFS_DEBUG and see if there's anything more interesting before the assert trips.
Well, running again, it looks like the system quiesces, the scan thread does a transaction sync (elided), and then the assert trips (details elided). Hope that provides some insight.
Force-pushed 8077bda to e903501
I can't reproduce this. I tried to export & import in the middle of a spare replacement, but nothing broke.
@skiselkov It looks like something is confused about the diff here. Maybe if you rebased on top of master, that would fix it? Or point me at the URL that will render just your changes.
@ahrens Sure, can do. Give me a moment. Note that this commit is not ready for upstreaming yet; I have some more changes I want to make (namely, stabilizing the range_tree rework and the taskq restructuring that you have requested). The one I'm confused about is openzfs#172 - the TRIM PR. That's where I can't reproduce the test failure that zettabot spotted.
Force-pushed e903501 to fd8716b
@ahrens Rebase on top of master complete.
@skiselkov I pulled in the range_tree_add fix and re-ran my wrapper and it ran the resilver to completion, so perhaps that was all that was wrong. Thanks!
@nwf Thanks for the update, good to know.
Force-pushed fd8716b to ad6a803
1) Removed the first-fit allocator.
2) Moved the autotrim metaslab scheduling logic into vdev_auto_trim.
2a) As a consequence of #2, metaslab_trimset_t was rendered superfluous; new trimsets are simple range_tree_t's.
3) Made ms_trimming_ts remove extents it is working on from ms_tree and then add them back in.
3a) As a consequence of #3, undid all the direct changes to the allocators and removed metaslab_check_trim_conflict and range_tree_find_gap.
Force-pushed ad6a803 to aee0dd9
ZFS scrub/resilver take excessively long due to issuing lots of random IO