Discussion: Slow zfs writes
Ram Chander
2013-02-11 12:48:21 UTC
Hi,

My OI box is experiencing slow ZFS writes (around 30 times slower).
iostat reports the error below even though the pool is healthy. This has been
happening for the past four days, though no change was made to the system. Are
the hard disks faulty? Please help.


# zpool status -v
***@host:~# zpool status -v
pool: test
state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on software that does not support
feature flags.
config:

NAME          STATE     READ WRITE CKSUM
test          ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    c2t0d0    ONLINE       0     0     0
    c2t1d0    ONLINE       0     0     0
    c2t2d0    ONLINE       0     0     0
    c2t3d0    ONLINE       0     0     0
    c2t4d0    ONLINE       0     0     0
  raidz1-1    ONLINE       0     0     0
    c2t5d0    ONLINE       0     0     0
    c2t6d0    ONLINE       0     0     0
    c2t7d0    ONLINE       0     0     0
    c2t8d0    ONLINE       0     0     0
    c2t9d0    ONLINE       0     0     0
  raidz1-3    ONLINE       0     0     0
    c2t12d0   ONLINE       0     0     0
    c2t13d0   ONLINE       0     0     0
    c2t14d0   ONLINE       0     0     0
    c2t15d0   ONLINE       0     0     0
    c2t16d0   ONLINE       0     0     0
    c2t17d0   ONLINE       0     0     0
    c2t18d0   ONLINE       0     0     0
    c2t19d0   ONLINE       0     0     0
    c2t20d0   ONLINE       0     0     0
    c2t21d0   ONLINE       0     0     0
    c2t22d0   ONLINE       0     0     0
    c2t23d0   ONLINE       0     0     0
  raidz1-4    ONLINE       0     0     0
    c2t24d0   ONLINE       0     0     0
    c2t25d0   ONLINE       0     0     0
    c2t26d0   ONLINE       0     0     0
    c2t27d0   ONLINE       0     0     0
    c2t28d0   ONLINE       0     0     0
    c2t29d0   ONLINE       0     0     0
    c2t30d0   ONLINE       0     0     0
  raidz1-5    ONLINE       0     0     0
    c2t31d0   ONLINE       0     0     0
    c2t32d0   ONLINE       0     0     0
    c2t33d0   ONLINE       0     0     0
    c2t34d0   ONLINE       0     0     0
    c2t35d0   ONLINE       0     0     0
    c2t36d0   ONLINE       0     0     0
    c2t37d0   ONLINE       0     0     0
  raidz1-6    ONLINE       0     0     0
    c2t38d0   ONLINE       0     0     0
    c2t39d0   ONLINE       0     0     0
    c2t40d0   ONLINE       0     0     0
    c2t41d0   ONLINE       0     0     0
    c2t42d0   ONLINE       0     0     0
    c2t43d0   ONLINE       0     0     0
    c2t44d0   ONLINE       0     0     0
spares
  c5t10d0     AVAIL
  c5t11d0     AVAIL
  c2t45d0     AVAIL
  c2t46d0     AVAIL
  c2t47d0     AVAIL



# iostat -En

c4t0d0 Soft Errors: 0 Hard Errors: 5 Transport Errors: 0
Vendor: iDRAC Product: Virtual CD Revision: 0323 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 5 No Device: 0 Recoverable: 0
Illegal Request: 1 Predictive Failure Analysis: 0
c3t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: iDRAC Product: LCDRIVE Revision: 0323 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t0d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: iDRAC Product: Virtual Floppy Revision: 0323 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0


***@host:~# fmadm faulty
--------------- ------------------------------------ -------------- ---------
TIME            EVENT-ID                             MSG-ID         SEVERITY
--------------- ------------------------------------ -------------- ---------
Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9-f9040db6944a ZFS-8000-HC    Major

Host : host
Platform : PowerEdge-R810
Product_sn :

Fault class : fault.fs.zfs.io_failure_wait
Affects : zfs://pool=test
faulted but still in service
Problem in : zfs://pool=test
faulted but still in service

Description : The ZFS pool has experienced currently unrecoverable I/O
failures. Refer to http://illumos.org/msg/ZFS-8000-HC for
more information.

Response : No automated response will be taken.

Impact : Read and write I/Os cannot be serviced.

Action : Make sure the affected devices are connected, then run
'zpool clear'.
r***@gmx.net
2013-02-11 14:15:35 UTC
-----Original message-----
From: Ram Chander <***@gmail.com>
Sent: Mon 11-02-2013 13:49
Subject: [OpenIndiana-discuss] Slow zfs writes
Post by Ram Chander
Hi,
My OI box is experiencing slow ZFS writes (around 30 times slower).
iostat reports the error below even though the pool is healthy. This has been
happening for the past four days, though no change was made to the system. Are
the hard disks faulty? Please help.
[... quoted zpool status / iostat -En / fmadm faulty output snipped; see the original post above ...]
Hi Ram,

I saw similar behavior with one of our zpools when one of our L2ARC SSDs was worn out. The SSD always delivered the data eventually, but it was dead slow and dragged down the whole pool.


cu

Carsten
Ian Collins
2013-02-11 18:30:23 UTC
Post by Ram Chander
Hi,
My OI box is experiencing slow ZFS writes (around 30 times slower).
iostat reports the error below even though the pool is healthy. This has been
happening for the past four days, though no change was made to the system. Are
the hard disks faulty? Please help.
Does iostat -xtcMn 10 show any anomalies such as long wait times or high %b?
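For reference, something along these lines (pool name 'test' taken from your
output; the interval/count values are just an example):

# iostat -xtcMn 10 6
  (watch asvc_t and %b -- one disk sitting near 100 %b, or with a service
   time far above its neighbours, is usually the culprit)
# zpool iostat -v test 10 6
  (shows whether one vdev is absorbing most of the writes)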

Your pool configuration is a bit odd, I assume this isn't a production
system?
--
Ian.
Robbie Crash
2013-02-11 20:54:07 UTC
If writes used to be fine and have progressively gotten worse, I'd look at
dedup. If you're using dedup, you had better make sure you have 2.5GB of RAM
for every TB of unique data, otherwise you'll be swapping your dedup tables
constantly and your read/write performance is going to die.
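A quick way to check whether dedup is actually in play (sketch -- 'test' is
the pool name from the original post):

# zpool list -o name,size,allocated,dedupratio test
  (a dedupratio of 1.00x means dedup has never stored anything)
# zfs get -r dedup test | grep -v off
  (lists any datasets that still have dedup enabled)
# zdb -DD test
  (prints the DDT histogram; each unique block costs a few hundred bytes of
   RAM in the DDT, which is roughly where the 2.5GB-per-TB rule of thumb
   comes from)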
Post by Ian Collins
Post by Ram Chander
Hi,
My OI box is experiencing slow ZFS writes (around 30 times slower).
iostat reports the error below even though the pool is healthy. This has been
happening for the past four days, though no change was made to the system. Are
the hard disks faulty? Please help.
Does iostat -xtcMn 10 show any anomalies such as long wait times or high %b?
Your pool configuration is a bit odd, I assume this isn't a production
system?
--
Ian.
--
Seconds to the drop, but it seems like hours.

http://www.openmedia.ca
https://robbiecrash.me
Sašo Kiselkov
2013-02-11 21:00:17 UTC
Post by Robbie Crash
If you weren't having any issues with speed and they've progressively
gotten worse, I'd look at dedup. If you're using dedup, you better make
sure you've got 2.5GB RAM for every TB of unique data you have, otherwise
you'll be swapping your dedup tables constantly and your read/write
performance is going to die.
Also remember to tune zfs_arc_meta_limit if you have a lot of dedup, since
DDT entries count as metadata, and by default the meta_limit is 1/4 of
arc_c_max (which for machines with lots of DRAM is RAM size minus 1GB by
default).
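For completeness, a sketch of how that is usually set on illumos (the 48GB
value below is only an example -- size it to your expected DDT):

  * /etc/system (comment lines start with *, takes effect at next boot)
  set zfs:zfs_arc_meta_limit=0xc00000000

# change it on the running kernel (same example value, use with care):
# echo "zfs_arc_meta_limit/Z 0xc00000000" | mdb -kw

# see where you currently stand:
# kstat -p zfs:0:arcstats:arc_meta_limit zfs:0:arcstats:arc_meta_used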

Cheers,
--
Saso
Jason Matthews
2013-02-12 01:09:49 UTC
I am going to offer the obvious advice...

How full is your pool? Zpool performance degrades as the pool fills up, and
the tools don't tell you how close you are to the cliff -- you find the
cliff on your own by falling off of it. As a rule of thumb, I keep
production systems less than 70% utilized.

Here is a real-life example. On a 14.5TB (configured) pool, I found the
cliff with 250+GB still reported as free. The system continued to write to
the pool, but throughput was dismal.

Is your pool full?
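Checking takes a few seconds (sketch; substitute your pool name):

# zpool list test
  (the CAP column is the pool-wide fill percentage)
# zpool iostat -v test
  (shows alloc/free per vdev -- the pool-wide number can look fine while
   individual vdevs are nearly full, which hurts writes just the same)
# zfs list -o space test
  (breaks usage down by dataset, snapshots included)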

j.

-----Original Message-----
From: Ram Chander [mailto:***@gmail.com]
Sent: Monday, February 11, 2013 4:48 AM
To: Discussion list for OpenIndiana
Subject: [OpenIndiana-discuss] Slow zfs writes

Hi,

My OI box is experiencing slow ZFS writes (around 30 times slower).
iostat reports the error below even though the pool is healthy. This has been
happening for the past four days, though no change was made to the system. Are
the hard disks faulty? Please help.


[... quoted zpool status / iostat -En / fmadm faulty output snipped; see the original post above ...]
Ram Chander
2013-02-12 05:07:10 UTC
So it looks like a data-distribution issue. Initially there were two vdevs
with 24 disks (disks 0-23) for close to a year. After that we added 24 more
disks and created additional vdevs. The initial vdevs are filled up, so the
write speed has declined. Now, how do I find which files are present on a
given vdev or disk? That way I can remove them and copy them back to
redistribute the data. Is there any other way to solve this?
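(As far as I can tell ZFS won't map files to vdevs, and there is no rebalance
command, so the only approach I can think of is to rewrite the data so new
writes spread across the emptier vdevs. A rough sketch -- the dataset name
test/data is made up, and nothing gets destroyed until the copy is verified:

# zfs snapshot test/data@rebalance
# zfs send test/data@rebalance | zfs receive test/data.new
  (the receive writes everything afresh, allocated across all vdevs)
# zfs destroy -r test/data      <-- only after verifying test/data.new
# zfs rename test/data.new test/data

For individual files, copying each one to a temporary name and renaming it
back (cp/rsync) achieves the same thing a file at a time.)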

Total capacity of pool - 98 TB
Used - 44 TB
Free - 54 TB

***@host:# zpool iostat -v
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
test         54.0T  62.7T     52  1.12K  2.16M  5.78M
  raidz1     11.2T  2.41T     13     30   176K   146K
    c2t0d0       -      -      5     18  42.1K  39.0K
    c2t1d0       -      -      5     18  42.2K  39.0K
    c2t2d0       -      -      5     18  42.5K  39.0K
    c2t3d0       -      -      5     18  42.9K  39.0K
    c2t4d0       -      -      5     18  42.6K  39.0K
  raidz1     13.3T   308G     13    100   213K   521K
    c2t5d0       -      -      5     94  50.8K   135K
    c2t6d0       -      -      5     94  51.0K   135K
    c2t7d0       -      -      5     94  50.8K   135K
    c2t8d0       -      -      5     94  51.1K   135K
    c2t9d0       -      -      5     94  51.1K   135K
  raidz1     13.4T  19.1T      9    455   743K  2.31M
    c2t12d0      -      -      3    137  69.6K   235K
    c2t13d0      -      -      3    129  69.4K   227K
    c2t14d0      -      -      3    139  69.6K   235K
    c2t15d0      -      -      3    131  69.6K   227K
    c2t16d0      -      -      3    141  69.6K   235K
    c2t17d0      -      -      3    132  69.5K   227K
    c2t18d0      -      -      3    142  69.6K   235K
    c2t19d0      -      -      3    133  69.6K   227K
    c2t20d0      -      -      3    143  69.6K   235K
    c2t21d0      -      -      3    133  69.5K   227K
    c2t22d0      -      -      3    143  69.6K   235K
    c2t23d0      -      -      3    133  69.5K   227K
  raidz1     2.44T  16.6T      5    103   327K   485K
    c2t24d0      -      -      2     48  50.8K  87.4K
    c2t25d0      -      -      2     49  50.7K  87.4K
    c2t26d0      -      -      2     49  50.8K  87.3K
    c2t27d0      -      -      2     49  50.8K  87.3K
    c2t28d0      -      -      2     49  50.8K  87.3K
    c2t29d0      -      -      2     49  50.8K  87.3K
    c2t30d0      -      -      2     49  50.8K  87.3K
  raidz1     8.18T  10.8T      5    295   374K  1.54M
    c2t31d0      -      -      2    131  58.2K   279K
    c2t32d0      -      -      2    131  58.1K   279K
    c2t33d0      -      -      2    131  58.2K   279K
    c2t34d0      -      -      2    132  58.2K   279K
    c2t35d0      -      -      2    132  58.1K   279K
    c2t36d0      -      -      2    133  58.3K   279K
    c2t37d0      -      -      2    133  58.2K   279K
  raidz1     5.42T  13.6T      5    163   383K   823K
    c2t38d0      -      -      2     61  59.4K   146K
    c2t39d0      -      -      2     61  59.3K   146K
    c2t40d0      -      -      2     61  59.4K   146K
    c2t41d0      -      -      2     61  59.4K   146K
    c2t42d0      -      -      2     61  59.3K   146K
    c2t43d0      -      -      2     62  59.2K   146K
    c2t44d0      -      -      2     62  59.3K   146K
Post by Jason Matthews
I am going to offer the obvious advice...
How full is your pool? Zpool performance degrades as the pool fills up, and
the tools don't tell you how close you are to the cliff -- you find the
cliff on your own by falling off of it. As a rule of thumb, I keep
production systems less than 70% utilized.
Here is a real-life example. On a 14.5TB (configured) pool, I found the
cliff with 250+GB still reported as free. The system continued to write to
the pool, but throughput was dismal.
Is your pool full?
j.
[... quoted original message and output snipped; see the original post above ...]
Ian Collins
2013-02-12 09:32:56 UTC
Post by Ram Chander
So it looks like a data-distribution issue. Initially there were two vdevs
with 24 disks (disks 0-23) for close to a year. After that we added 24 more
disks and created additional vdevs. The initial vdevs are filled up, so the
write speed has declined. Now, how do I find which files are present on a
given vdev or disk? That way I can remove them and copy them back to
redistribute the data. Is there any other way to solve this?
Please stick to one list or cross-post; multi-posting tends to waste
responders' time.
--
Ian.