I recently bought a QNAP NAS and decided to try out a few NAS operating systems, amongst which TrueNAS. After creating a ZFS pool on TrueNAS and transferring a few files, I got dissatisfied with TrueNAS and decided to try out QNAP's own OS (QuTS). Both OSes are installed on the NAS. I went through QuTS initialization, noticed that QuTS didn't automatically import the ZFS pool, and got dissatisfied with QuTS' interface as well. After that, I decided to reinstall TrueNAS alongside QuTS (QuTS had erased the disk on which TrueNAS was installed, which is a different disk from the ones in the ZFS pool). And this is the current state of affairs.
The ZFS pool in question consists of three disks of 5 TB each. TrueNAS shows the pool in its interface, but is not able to mount it. The pool is named main-pool. Here are the different commands I tried and their results:
# zpool import -a
no pools available to import
# zpool status -v
  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:08 with 0 errors on Sun Sep 22 03:45:10 2024
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

errors: No known data errors
# zpool status -v main-pool
cannot open 'main-pool': no such pool
# zdb -l /dev/sd{b,c,e}
failed to unpack label 0
failed to unpack label 1
------------------------------------
LABEL 2 (Bad label cksum)
------------------------------------
    version: 5000
    name: 'main-pool'
    state: 0
    txg: 4
    pool_guid: 8298165464761202083
    errata: 0
    hostid: 1555077055
    hostname: 'nas'
    top_guid: 4887568585043273647
    guid: 12714426885291094564
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 4887568585043273647
        nparity: 1
        metaslab_array: 128
        metaslab_shift: 34
        ashift: 12
        asize: 15002922123264
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 12714426885291094564
            path: '/dev/disk/by-partuuid/8b808c7a-cacd-46d1-b400-9f8c71d51b30'
            whole_disk: 0
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 822100609061977128
            path: '/dev/disk/by-partuuid/8965dbb6-9484-4da5-ba1b-d33303b19ae5'
            whole_disk: 0
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 1731788408784132969
            path: '/dev/disk/by-partuuid/fae9eb3e-e782-4115-bbc9-cde4dc72c408'
            whole_disk: 0
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 2 3
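Since the label references the member partitions by PARTUUID (whole_disk: 0), I assume the ZFS labels live on the partitions rather than on the whole disks. As a sketch of what I plan to check next (not yet run; the PARTUUID below is copied from children[0] in the label above):
# ls -l /dev/disk/by-partuuid/                                   # do the partition symlinks still exist?
# lsblk -o NAME,SIZE,TYPE,PARTUUID /dev/sdb /dev/sdc /dev/sde    # did the partition tables on the three disks survive?
# zdb -l /dev/disk/by-partuuid/8b808c7a-cacd-46d1-b400-9f8c71d51b30   # read the label from the partition instead of the whole disk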
Weirdly, when I scan for each disk separately by device path, that disk shows up as ONLINE (output below is for sdb):
# zpool import -d /dev/sdb
   pool: main-pool
     id: 8298165464761202083
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        main-pool                                 UNAVAIL  insufficient replicas
          raidz1-0                                UNAVAIL  insufficient replicas
            sdb                                   ONLINE
            8965dbb6-9484-4da5-ba1b-d33303b19ae5  UNAVAIL
            fae9eb3e-e782-4115-bbc9-cde4dc72c408  UNAVAIL
But not when I scan all of them together:
# zpool import -d /dev/sdb -d /dev/sdc -d /dev/sde
   pool: main-pool
     id: 8298165464761202083
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        main-pool     UNAVAIL  insufficient replicas
          raidz1-0    UNAVAIL  insufficient replicas
            sdb       UNAVAIL  invalid label
            sdc       UNAVAIL  invalid label
            sde       UNAVAIL  invalid label
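One thing I have not tried yet, so this is only a sketch based on the by-partuuid paths in the label: pointing the scan at the partition symlinks instead of the whole-disk nodes, and, if the pool then shows up as importable, forcing a read-only import since the pool was last accessed by another system:
# zpool import -d /dev/disk/by-partuuid                                # scan the partitions the label actually points to
# zpool import -f -o readonly=on -d /dev/disk/by-partuuid main-pool    # only if the scan above looks healthy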
So… Is my ZFS pool lost?
If so, how do people recover a ZFS pool when the system crashed before the pool was cleanly exported, for instance when the disk containing the OS fails?
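For context, my (possibly wrong) understanding of that normal case, i.e. a pool that was simply never exported because the system disk died, is that a forced import is enough, roughly:
# zpool import -f tank        # 'tank' is a placeholder pool name; -f overrides the "last accessed by another system" check
# zpool import -f -F tank     # -F tries to rewind to the last importable txg if recent writes were lost
My worry is that my situation is different, because here the labels themselves look damaged.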