I have a ZFS pool that holds my backups. Every disk has a single partition.
~# zpool status backup_pool
  pool: backup_pool
 state: ONLINE
  scan: resilvered 108K in 00:00:00 with 0 errors on Tue Jun 18 13:48:11 2024
config:

        NAME                        STATE     READ WRITE CKSUM
        backup_pool                 ONLINE       0     0     0
          raidz3-0                  ONLINE       0     0     0
            wwn-0x5000c500e4eb728d  ONLINE       0     0     0
            wwn-0x5000c500c2871419  ONLINE       0     0     0
            wwn-0x5000c500c53c108f  ONLINE       0     0     0
            wwn-0x5000c500aa36deaa  ONLINE       0     0     0
            wwn-0x5000c500aa36dbd7  ONLINE       0     0     0
            wwn-0x5000c500e4eb6f87  ONLINE       0     0     0
            wwn-0x5000c500e4eb8122  ONLINE       0     0     0
            wwn-0x5000c500c68c2b61  ONLINE       0     0     0
            wwn-0x5000c500c5f7d53c  ONLINE       0     0     0
            wwn-0x5000c500c715f9a7  ONLINE       0     0     0
            wwn-0x5000c500e4eb6c51  ONLINE       0     0     0
            wwn-0x5000c500c712df17  ONLINE       0     0     0
            wwn-0x5000c500c728ef7a  ONLINE       0     0     0
            wwn-0x5000c500aa36d8a5  ONLINE       0     0     0
            wwn-0x5000c500e8304e87  ONLINE       0     0     0
        spares
          wwn-0x5000c500e4eb7579    AVAIL

errors: No known data errors
Lately, I have noticed a high rate of disk failures in this pool.
One theory is that the constant I/O on the disks in the active raidz vdev wears them out after some number of months. A hot spare, by contrast, sees essentially no I/O, so failures should be less likely there.
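(To sanity-check that theory, I can watch per-disk activity with zpool iostat, which is standard OpenZFS tooling; the trailing interval makes it refresh continuously:
~# zpool iostat -v backup_pool 5
This breaks operations and bandwidth down per vdev and per disk every 5 seconds.)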
Looking at the pool, I am nowhere near its maximum capacity:
~# zfs list backup_pool
NAME          USED  AVAIL  REFER  MOUNTPOINT
backup_pool  3.81T  45.6T  3.81T  /backup_pool
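(That is 3.81T used out of 3.81T + 45.6T ≈ 49.4T usable, i.e. under 8% utilization.)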
Therefore, I would like to reduce the number of disks in raidz3-0 and repurpose the freed disks as spares. But that fails:
~# zpool offline backup_pool wwn-0x5000c500e8304e87
~# zpool remove backup_pool wwn-0x5000c500e8304e87
cannot remove wwn-0x5000c500e8304e87: operation not supported on this type of pool
~# zpool add backup_pool spare wwn-0x5000c500e8304e87
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/wwn-0x5000c500e8304e87-part1 is part of active pool 'backup_pool'
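For now I can at least return the offlined disk to service, since zpool online is the documented inverse of zpool offline:
~# zpool online backup_pool wwn-0x5000c500e8304e87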
My question: does ZFS support shrinking a raidz vdev like this? If so, how do I do it properly? (I have deliberately not forced the spare add with -f, since the error indicates the disk still counts as an active member of the pool even after being offlined.)