I have several RAID arrays in my computer (a Debian system), built with mdadm. One of them, with four member devices, holds the VirtualBox virtual disk containing a Windows system that I need for my dictation program. While working in that Windows guest the system hung (surprise!) and I consequently got an HDD error. Restarting the system was not possible, and I found that the RAID holding that virtual disk was gone.
I partitioned the drives holding the RAIDs into a 1/3 and a 2/3 part, because as far as I know the first third of a drive is faster than the rest (I read that somewhere). Hence the perhaps unusual layout.
All the members of md1 seem to be present and accounted for, and no error is displayed. Still, neither blkid nor gparted shows md1.
Any ideas?
cat /proc/mdstat produced the following:
Personalities : [raid0] [raid10] [linear] [multipath] [raid1] [raid6] [raid5] [raid4]
md0 : active raid10 sdi1[3] sdh1[2] sdg1[1] sdf1[0]
634615808 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 0/5 pages [0KB], 65536KB chunk
md1 : inactive sdi2[3](S) sdh2[2](S) sdg2[1](S) sdf2[0](S)
2636754944 blocks super 1.2
md2 : active raid0 sde1[0] sdd1[1]
1289975808 blocks super 1.2 512k chunks
md3 : active raid0 sde2[0] sdd2[1]
2616522752 blocks super 1.2 512k chunks
unused devices: <none>
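Since md1 shows up as inactive with all four members marked (S), I assume the next step is to compare the member superblocks. I have not run this yet, but I could post the output of something like the following if it helps:
mdadm --examine /dev/sdf2 /dev/sdg2 /dev/sdh2 /dev/sdi2   # print each member's superblock, array state and event count
For reference, here is my /etc/mdadm/mdadm.conf: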
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md/md2 metadata=1.2 UUID=2f722de6:7e3adf97:aa6ce8c4:9a11c333 name=godard:md2
ARRAY /dev/md/md3 metadata=1.2 UUID=57ef00d5:abb69f50:929498ba:d5efca87 name=godard:md3
ARRAY /dev/md/md0 metadata=1.2 UUID=88a78a94:97c6cdfc:60869f29:ff5c0411 name=godard:md0
ARRAY /dev/md/md1 metadata=1.2 UUID=94febb30:d976ca57:ad39b8ee:53a43393 name=godard:md1
# This configuration was auto-generated on Sun, 28 Jul 2024 20:03:12 +0200 by mkconf
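Just in case it matters: the md1 ARRAY line and its UUID above look consistent with the blkid output below. If the config ever had to be regenerated, my understanding (please correct me if this is wrong) is that it would be roughly:
mdadm --detail --scan   # print ARRAY lines for the currently assembled arrays, to compare with mdadm.conf
update-initramfs -u     # refresh the copy in the initramfs, as the comment in the file says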
blkid produced this:
/dev/sdf1: UUID="88a78a94-97c6-cdfc-6086-9f29ff5c0411" UUID_SUB="971c277f-900d-5ee5-2a22-65808863cc94" LABEL="godard:md0" TYPE="linux_raid_member" PARTUUID="a42531af-5541-44cf-830c-1ccec959162a"
/dev/sdf2: UUID="94febb30-d976-ca57-ad39-b8ee53a43393" UUID_SUB="492a25c6-5dbb-a503-e821-2c7748fcc3f5" LABEL="godard:md1" TYPE="linux_raid_member" PARTUUID="bc7f30b5-8ffe-489e-93c7-584c2fc5eacd"
/dev/sdd2: UUID="57ef00d5-abb6-9f50-9294-98bad5efca87" UUID_SUB="dd74e5e3-27b3-3901-e6db-58945c5c6cf4" LABEL="godard:md3" TYPE="linux_raid_member" PARTUUID="19428072-5b9b-448b-bade-abb52795b78c"
/dev/sdd1: UUID="2f722de6-7e3a-df97-aa6c-e8c49a11c333" UUID_SUB="1b148f7d-d474-6349-2174-edfc2be677c8" LABEL="godard:md2" TYPE="linux_raid_member" PARTUUID="3f0396ee-a56c-413d-a7ba-24e0e6673827"
/dev/sdb1: PARTUUID="eb604d70-1e6f-4591-82a5-ba980e48cd8d"
/dev/md2: UUID="52678859-d2ec-4f70-ae29-fe4ec3810f93" BLOCK_SIZE="4096" TYPE="ext4"
/dev/sdi1: UUID="88a78a94-97c6-cdfc-6086-9f29ff5c0411" UUID_SUB="36a7bfe4-a554-76b7-973e-fc5f1278a7f0" LABEL="godard:md0" TYPE="linux_raid_member" PARTUUID="a5c1d844-0020-4009-9325-26c69fd26b15"
/dev/sdi2: UUID="94febb30-d976-ca57-ad39-b8ee53a43393" UUID_SUB="f90a6ec9-9e09-14fc-010e-4cf0599462b4" LABEL="godard:md1" TYPE="linux_raid_member" PARTUUID="742d5b3a-2afb-409e-8446-82638b81d7c1"
/dev/md0: UUID="a85f56bb-71a6-4160-81f0-da8c3289a8cf" BLOCK_SIZE="4096" TYPE="ext4"
/dev/sr0: BLOCK_SIZE="2048" UUID="1999-06-15-11-17-00-00" LABEL="LARRY_COLL" TYPE="iso9660"
/dev/sdg1: UUID="88a78a94-97c6-cdfc-6086-9f29ff5c0411" UUID_SUB="3b78f88e-c294-4add-7419-a3c83d59dcc0" LABEL="godard:md0" TYPE="linux_raid_member" PARTUUID="9b9eaeb7-19cf-46e8-b4b8-e0d06f8594dd"
/dev/sdg2: UUID="94febb30-d976-ca57-ad39-b8ee53a43393" UUID_SUB="87276c03-f0eb-d550-bae4-1c39cf947238" LABEL="godard:md1" TYPE="linux_raid_member" PARTUUID="499c8894-77f8-4c86-87a1-710c270fc43d"
/dev/sde2: UUID="57ef00d5-abb6-9f50-9294-98bad5efca87" UUID_SUB="d1e8ecb2-9796-0a81-8156-3e4a58c18f5b" LABEL="godard:md3" TYPE="linux_raid_member" PARTUUID="2ad0e03f-8547-4b63-9dd3-7ef3ad4ab268"
/dev/sde1: UUID="2f722de6-7e3a-df97-aa6c-e8c49a11c333" UUID_SUB="d45dbcc8-4104-21c4-3acb-4c2a0225b8df" LABEL="godard:md2" TYPE="linux_raid_member" PARTUUID="ce9114a6-f507-48a8-9c18-23d44a4cc222"
/dev/sdc2: PARTLABEL="Microsoft reserved partition" PARTUUID="427c8c83-3cc6-4969-9d4e-25fde2050bd2"
/dev/sdc3: BLOCK_SIZE="512" UUID="7AC2766FC276300D" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="abd8fafe-d56d-4a09-9e96-e4353cf66b97"
/dev/sdc1: UUID="C257-6544" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="a5d3280b-be7e-4d91-829e-05b4cb13a217"
/dev/sdc4: BLOCK_SIZE="512" UUID="889A65C59A65B07C" TYPE="ntfs" PARTUUID="c025000f-ab4e-4610-bc8a-7bd9c255ddc2"
/dev/sda2: UUID="9806649e-d728-42c1-8009-a64e9d656dbe" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="d80b9367-eef6-48ea-9b91-cf754c753daa"
/dev/sda3: UUID="850115ff-0950-4a72-a5a7-a0ba7c042e85" TYPE="swap" PARTUUID="08e0cf05-1142-443f-81b6-30cbfadfb1d5"
/dev/sda1: UUID="876E-CF5A" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="6a6d4634-2944-45ee-9547-092eac232d97"
/dev/md3: LABEL="big" UUID="9673f921-d668-410b-8245-bb08a59bb8a7" BLOCK_SIZE="4096" TYPE="ext4"
/dev/sdj1: UUID="211d09bb-5709-4b04-8d1b-79db9b60bc1c" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="01377cac-fb68-45f6-b97a-6ce6fa14246e"
/dev/sdh1: UUID="88a78a94-97c6-cdfc-6086-9f29ff5c0411" UUID_SUB="7d92e6b3-4065-543a-4b90-cb678ff1247a" LABEL="godard:md0" TYPE="linux_raid_member" PARTUUID="fe8a7032-c03b-459e-b5f5-bc3b2d88158d"
/dev/sdh2: UUID="94febb30-d976-ca57-ad39-b8ee53a43393" UUID_SUB="986a1817-d1ff-4739-9ecd-7783417e7687" LABEL="godard:md1" TYPE="linux_raid_member" PARTUUID="af0e754f-0c7b-4a77-bde4-3cbab92a94dc"
I tried to get md1 working again with
mdadm -A -s md1
but that didn't work.
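What I am considering next, based on my reading of the mdadm man page (so please tell me if this is wrong or dangerous), is to stop the inactive array and try an explicit, verbose assemble so I can at least see why the members are rejected:
mdadm --stop /dev/md1                                                       # release the (S) members of the inactive array
mdadm --assemble --verbose /dev/md1 /dev/sdf2 /dev/sdg2 /dev/sdh2 /dev/sdi2  # verbose output should say why assembly fails
# if the event counts differ only slightly, --force is apparently an option, but I have not dared to use it yet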