RAID1 installed, need to verify it

I have just finished installing Debian Wheezy for server use, with two hard drives in RAID1 + LVM.

The system boots with either drive connected; I tested this by plugging in each drive on its own.

I would like to verify that the installation really is a proper RAID1 setup and, above all, that it is working.

The command # parted -l reports [quote]Error: /dev/md1: unrecognised disk label[/quote]: my first cause for concern!

The full output of the command:

[quote]Model: ATA ST1000DM003-1CH1 (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number Start End Size Type File system Flags
1 1049kB 256MB 255MB primary fat32 boot, raid
2 257MB 1000GB 1000GB extended
5 257MB 1000GB 1000GB logical raid

Model: ATA ST1000DM003-1CH1 (scsi)
Disk /dev/sdb: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number Start End Size Type File system Flags
1 1049kB 256MB 255MB primary fat32 raid
2 257MB 1000GB 1000GB extended
5 257MB 1000GB 1000GB logical raid

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lvmgrp-lvmgrp_home: 500GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 500GB 500GB ext4

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lvmgrp-lvmgrp_var: 442GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 442GB 442GB ext4

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lvmgrp-lvmgrp_swap: 7999MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 7999MB 7999MB linux-swap(v1)

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lvmgrp-lvmgrp_root: 50,0GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 50,0GB 50,0GB ext4

Model: Linux Software RAID Array (md)
Disk /dev/md0: 255MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 255MB 255MB ext3

Error: /dev/md1: unrecognised disk label
[/quote]

The output of some other commands:

[quote]/dev/sda:
MBR Magic : aa55
Partition[0] : 497664 sectors at 2048 (type fd)
Partition[1] : 1953021954 sectors at 501758 (type 05)
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 9ff36b7e:01402f40:ff95c22e:29c06d0c
Name : matrix:0 (local to host matrix)
Creation Time : Mon Jun 16 22:23:50 2014
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 497408 (242.92 MiB 254.67 MB)
Array Size : 248640 (242.85 MiB 254.61 MB)
Used Dev Size : 497280 (242.85 MiB 254.61 MB)
Data Offset : 256 sectors
Super Offset : 8 sectors
State : clean
Device UUID : b03130ce:526f3d6d:0e559c47:3036009c

Update Time : Tue Jun 17 21:45:10 2014
   Checksum : bbdfae4 - correct
     Events : 27

Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing)
/dev/sda2:
MBR Magic : aa55
Partition[0] : 1953021952 sectors at 2 (type fd)
/dev/sda5:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : da0001eb:7144c010:08b0918a:31d63cea
Name : matrix:1 (local to host matrix)
Creation Time : Mon Jun 16 22:24:30 2014
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 1952759808 (931.15 GiB 999.81 GB)
Array Size : 976379712 (931.15 GiB 999.81 GB)
Used Dev Size : 1952759424 (931.15 GiB 999.81 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : a744287d:33c3d9f9:a3272172:cfeeec01

Update Time : Tue Jun 17 22:22:02 2014
   Checksum : 38468973 - correct
     Events : 532

Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing)
/dev/sdb:
MBR Magic : aa55
Partition[0] : 497664 sectors at 2048 (type fd)
Partition[1] : 1953021954 sectors at 501758 (type 05)
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 9ff36b7e:01402f40:ff95c22e:29c06d0c
Name : matrix:0 (local to host matrix)
Creation Time : Mon Jun 16 22:23:50 2014
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 497408 (242.92 MiB 254.67 MB)
Array Size : 248640 (242.85 MiB 254.61 MB)
Used Dev Size : 497280 (242.85 MiB 254.61 MB)
Data Offset : 256 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 4f48d804:f1271cfa:27ae942c:852c632a

Update Time : Tue Jun 17 22:21:21 2014
   Checksum : 42a42222 - correct
     Events : 53

Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing)
/dev/sdb2:
MBR Magic : aa55
Partition[0] : 1953021952 sectors at 2 (type fd)
/dev/sdb5:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : da0001eb:7144c010:08b0918a:31d63cea
Name : matrix:1 (local to host matrix)
Creation Time : Mon Jun 16 22:24:30 2014
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 1952759808 (931.15 GiB 999.81 GB)
Array Size : 976379712 (931.15 GiB 999.81 GB)
Used Dev Size : 1952759424 (931.15 GiB 999.81 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 7ee35839:a0f70ee8:1c8f7bd9:725adcb7

Update Time : Tue Jun 17 21:42:39 2014
   Checksum : fffa24bb - correct
     Events : 57

Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing)[/quote]

The contents of /proc/mdstat:

[quote] Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[1]
248640 blocks super 1.2 [2/1] [_U]

unused devices: [/quote]

And the output of the mount command:

[quote]sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=450916,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=362044k,mode=755)
/dev/mapper/lvmgrp-lvmgrp_root on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=724080k)
/dev/md0 on /boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
/dev/mapper/lvmgrp-lvmgrp_home on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)
/dev/mapper/lvmgrp-lvmgrp_var on /var type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
[/quote]

That is indeed a working RAID1 setup… running on one leg.

[code]
Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[1]
248640 blocks super 1.2 [2/1] [_U]

unused devices: [/code]
/proc/mdstat is telling you that a disk is missing.

[mono][2/1][/mono] means one disk out of two is present; the missing disk is shown by [mono][U_][/mono]. (Without a failure you would see [mono][2/2] [UU][/mono], two disks out of two.)

Check your mail ($ mail). You should find several messages reporting the degraded RAID arrays.
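
As a side note from me (not part of the original exchange): on Debian those alert mails come from mdadm's monitor, and the destination address is set in /etc/mdadm/mdadm.conf, by default:

[code]
# /etc/mdadm/mdadm.conf (Debian default)
# MAILADDR tells "mdadm --monitor" where to send DegradedArray alerts
MAILADDR root
[/code]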

Could you please post the output of these commands:

mdadm --detail /dev/md1
mdadm --detail /dev/md0

The mails I received match /proc/mdstat:

[quote] From root@matrix.leblais.net Tue Jun 17 21:44:15 2014
Return-path: root@matrix.leblais.net
Envelope-to: root@matrix.leblais.net
Delivery-date: Tue, 17 Jun 2014 21:44:15 +0200
Received: from root by matrix.leblais.net with local (Exim 4.80)
(envelope-from root@matrix.leblais.net)
id 1WwzIg-0000aT-Uf
for root@matrix.leblais.net; Tue, 17 Jun 2014 21:44:11 +0200
From: mdadm monitoring root@matrix.leblais.net
To: root@matrix.leblais.net
Subject: DegradedArray event on /dev/md/1:matrix
Message-Id: E1WwzIg-0000aT-Uf@matrix.leblais.net
Date: Tue, 17 Jun 2014 21:44:10 +0200
Status: RO

This is an automatically generated mail message from mdadm
running on matrix

A DegradedArray event had been detected on md device /dev/md/1.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0]
248640 blocks super 1.2 [2/1] [U_]

unused devices:

From root@matrix.leblais.net Tue Jun 17 21:50:06 2014
Return-path: root@matrix.leblais.net
Envelope-to: root@matrix.leblais.net
Delivery-date: Tue, 17 Jun 2014 21:50:06 +0200
Received: from root by matrix.leblais.net with local (Exim 4.80)
(envelope-from root@matrix.leblais.net)
id 1WwzOP-0000Yr-27
for root@matrix.leblais.net; Tue, 17 Jun 2014 21:50:05 +0200
From: mdadm monitoring root@matrix.leblais.net
To: root@matrix.leblais.net
Subject: DegradedArray event on /dev/md/1:matrix
Message-Id: E1WwzOP-0000Yr-27@matrix.leblais.net
Date: Tue, 17 Jun 2014 21:50:05 +0200
Status: RO

This is an automatically generated mail message from mdadm
running on matrix

A DegradedArray event had been detected on md device /dev/md/1.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[1]
248640 blocks super 1.2 [2/1] [_U]

unused devices:

From root@matrix.leblais.net Tue Jun 17 21:50:09 2014
Return-path: root@matrix.leblais.net
Envelope-to: root@matrix.leblais.net
Delivery-date: Tue, 17 Jun 2014 21:50:09 +0200
Received: from root by matrix.leblais.net with local (Exim 4.80)
(envelope-from root@matrix.leblais.net)
id 1WwzOP-0000ee-Rw
for root@matrix.leblais.net; Tue, 17 Jun 2014 21:50:06 +0200
From: mdadm monitoring root@matrix.leblais.net
To: root@matrix.leblais.net
Subject: DegradedArray event on /dev/md/0:matrix
Message-Id: E1WwzOP-0000ee-Rw@matrix.leblais.net
Date: Tue, 17 Jun 2014 21:50:05 +0200
Status: RO

This is an automatically generated mail message from mdadm
running on matrix

A DegradedArray event had been detected on md device /dev/md/0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[1]
248640 blocks super 1.2 [2/1] [_U]

unused devices:

From root@matrix.leblais.net Tue Jun 17 21:56:13 2014
Return-path: root@matrix.leblais.net
Envelope-to: root@matrix.leblais.net
Delivery-date: Tue, 17 Jun 2014 21:56:13 +0200
Received: from root by matrix.leblais.net with local (Exim 4.80)
(envelope-from root@matrix.leblais.net)
id 1WwzUK-0000Z4-N6
for root@matrix.leblais.net; Tue, 17 Jun 2014 21:56:13 +0200
From: mdadm monitoring root@matrix.leblais.net
To: root@matrix.leblais.net
Subject: DegradedArray event on /dev/md/1:matrix
Message-Id: E1WwzUK-0000Z4-N6@matrix.leblais.net
Date: Tue, 17 Jun 2014 21:56:12 +0200
Status: RO

This is an automatically generated mail message from mdadm
running on matrix

A DegradedArray event had been detected on md device /dev/md/1.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[1]
248640 blocks super 1.2 [2/1] [_U]

unused devices:

From root@matrix.leblais.net Tue Jun 17 21:56:13 2014
Return-path: root@matrix.leblais.net
Envelope-to: root@matrix.leblais.net
Delivery-date: Tue, 17 Jun 2014 21:56:13 +0200
Received: from root by matrix.leblais.net with local (Exim 4.80)
(envelope-from root@matrix.leblais.net)
id 1WwzUL-0000dm-Bd
for root@matrix.leblais.net; Tue, 17 Jun 2014 21:56:13 +0200
From: mdadm monitoring root@matrix.leblais.net
To: root@matrix.leblais.net
Subject: DegradedArray event on /dev/md/0:matrix
Message-Id: E1WwzUL-0000dm-Bd@matrix.leblais.net
Date: Tue, 17 Jun 2014 21:56:13 +0200
Status: RO

This is an automatically generated mail message from mdadm
running on matrix

A DegradedArray event had been detected on md device /dev/md/0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[1]
248640 blocks super 1.2 [2/1] [_U]

unused devices:

From root@matrix.leblais.net Wed Jun 18 06:25:03 2014
Return-path: root@matrix.leblais.net
Envelope-to: root@matrix.leblais.net
Delivery-date: Wed, 18 Jun 2014 06:25:03 +0200
Received: from root by matrix.leblais.net with local (Exim 4.80)
(envelope-from root@matrix.leblais.net)
id 1Wx7Ql-00017g-5t
for root@matrix.leblais.net; Wed, 18 Jun 2014 06:25:03 +0200
From: mdadm monitoring root@matrix.leblais.net
To: root@matrix.leblais.net
Subject: DegradedArray event on /dev/md/1:matrix
Message-Id: E1Wx7Ql-00017g-5t@matrix.leblais.net
Date: Wed, 18 Jun 2014 06:25:03 +0200
Status: RO

This is an automatically generated mail message from mdadm
running on matrix

A DegradedArray event had been detected on md device /dev/md/1.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[1]
248640 blocks super 1.2 [2/1] [_U]

unused devices:

From root@matrix.leblais.net Wed Jun 18 06:25:03 2014
Return-path: root@matrix.leblais.net
Envelope-to: root@matrix.leblais.net
Delivery-date: Wed, 18 Jun 2014 06:25:03 +0200
Received: from root by matrix.leblais.net with local (Exim 4.80)
(envelope-from root@matrix.leblais.net)
id 1Wx7Ql-00017k-C9
for root@matrix.leblais.net; Wed, 18 Jun 2014 06:25:03 +0200
From: mdadm monitoring root@matrix.leblais.net
To: root@matrix.leblais.net
Subject: DegradedArray event on /dev/md/0:matrix
Message-Id: E1Wx7Ql-00017k-C9@matrix.leblais.net
Date: Wed, 18 Jun 2014 06:25:03 +0200
Status: RO

This is an automatically generated mail message from mdadm
running on matrix

A DegradedArray event had been detected on md device /dev/md/0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[1]
248640 blocks super 1.2 [2/1] [_U]

unused devices:

From root@matrix.leblais.net Tue Jun 17 21:44:12 2014
Return-path: root@matrix.leblais.net
Envelope-to: root@matrix.leblais.net
Delivery-date: Tue, 17 Jun 2014 21:44:12 +0200
Received: from root by matrix.leblais.net with local (Exim 4.80)
(envelope-from root@matrix.leblais.net)
id 1WwzIh-0000et-Nh
for root@matrix.leblais.net; Tue, 17 Jun 2014 21:44:11 +0200
From: mdadm monitoring root@matrix.leblais.net
To: root@matrix.leblais.net
Subject: DegradedArray event on /dev/md/0:matrix
Message-Id: E1WwzIh-0000et-Nh@matrix.leblais.net
Date: Tue, 17 Jun 2014 21:44:11 +0200
Status: RO

This is an automatically generated mail message from mdadm
running on matrix

A DegradedArray event had been detected on md device /dev/md/0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md1 : active raid1 sda5[0]
976379712 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0]
248640 blocks super 1.2 [2/1] [U_]

unused devices:

[/quote]

I checked the connections of both drives: everything is OK.

mdadm --detail /dev/md1:

[quote]/dev/md1:
Version : 1.2
Creation Time : Mon Jun 16 22:24:30 2014
Raid Level : raid1
Array Size : 976379712 (931.15 GiB 999.81 GB)
Used Dev Size : 976379712 (931.15 GiB 999.81 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Wed Jun 18 21:18:27 2014
      State : clean, degraded 

Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

       Name : matrix:1  (local to host matrix)
       UUID : da0001eb:7144c010:08b0918a:31d63cea
     Events : 1335

Number   Major   Minor   RaidDevice State
   0       8        5        0      active sync   /dev/sda5
   1       0        0        1      removed

[/quote]

mdadm --detail /dev/md0:

[quote]/dev/md0:
Version : 1.2
Creation Time : Mon Jun 16 22:23:50 2014
Raid Level : raid1
Array Size : 248640 (242.85 MiB 254.61 MB)
Used Dev Size : 248640 (242.85 MiB 254.61 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Wed Jun 18 20:52:28 2014
      State : clean, degraded 

Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

       Name : matrix:0  (local to host matrix)
       UUID : 9ff36b7e:01402f40:ff95c22e:29c06d0c
     Events : 63

Number   Major   Minor   RaidDevice State
   0       0        0        0      removed
   1       8       17        1      active sync   /dev/sdb1

[/quote]

Which partition do you want to end up in the md0 array?
sda1?
And in the md1 array?
sdb5?

If so, try adding them to the arrays and see what happens. But be careful: your partitions must be sized correctly.

mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sdb5
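
A hedged aside (my own sketch, not from the thread): before or after running the --add commands, you can double-check that each pair of mirrored partitions has matching sizes, for example:

[code]
# Sizes in 512-byte sectors; the partition being added must be at least
# as large as the existing array member
blockdev --getsz /dev/sda1
blockdev --getsz /dev/sdb1
blockdev --getsz /dev/sda5
blockdev --getsz /dev/sdb5

# Or compare the two MBR partition tables directly
# (the start/size columns should match between the two disks)
sfdisk -d /dev/sda > /tmp/sda.dump
sfdisk -d /dev/sdb > /tmp/sdb.dump
diff /tmp/sda.dump /tmp/sdb.dump
[/code]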

What does that give you?

If they were added successfully, you can follow the resync status of your arrays by looking at /proc/mdstat again.
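
For what it's worth (my addition, not part of the original reply), the resync can also be followed live like this:

[code]
# Refresh /proc/mdstat every 5 seconds while the mirror rebuilds
watch -n 5 cat /proc/mdstat

# Or query one array; during a rebuild mdadm shows a "Rebuild Status" line
mdadm --detail /dev/md1
[/code]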

The commands ran successfully!!

Recovery is in progress on md1: active raid1 sdb5 sda5

About 115 minutes to go…

md0: [2/2] [UU]

The rebuild is finished!!

cat /proc/mdstat :

[quote]Personalities : [raid1]

md1 : active raid1 sdb5[2] sda5[0]
976379712 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[1]
248640 blocks super 1.2 [2/2] [UU]

unused devices:
[/quote]

However, parted -l still shows:

[quote]Model: ATA ST1000DM003-1CH1 (scsi)

Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number Start End Size Type File system Flags
1 1049kB 256MB 255MB primary fat32 boot, raid
2 257MB 1000GB 1000GB extended
5 257MB 1000GB 1000GB logical raid

Model: ATA ST1000DM003-1CH1 (scsi)
Disk /dev/sdb: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number Start End Size Type File system Flags
1 1049kB 256MB 255MB primary fat32 raid
2 257MB 1000GB 1000GB extended
5 257MB 1000GB 1000GB logical raid

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lvmgrp-lvmgrp_var: 442GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 442GB 442GB ext4

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lvmgrp-lvmgrp_home: 500GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 500GB 500GB ext4

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lvmgrp-lvmgrp_swap: 7999MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 7999MB 7999MB linux-swap(v1)

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lvmgrp-lvmgrp_root: 50,0GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 50,0GB 50,0GB ext4

Model: Linux Software RAID Array (md)
Disk /dev/md0: 255MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop

Number Start End Size File system Flags
1 0,00B 255MB 255MB ext3

Error: /dev/md1: unrecognised disk label

[/quote]

What bothers you about the fact that an LVM physical volume does not contain a partition table?

Compare how parted handles LVM volumes with the behaviour of fdisk, cfdisk and sfdisk. And look at how complex your RAID+LVM layout is even though you only have two RAID disks (brace your eyes, colours ahead):

The RAID arrays are re-established. The LVM volumes show no detection or operational problems. Everything is fine. Ignore parted's error message.

Use the appropriate tools, [mono]pvs[/mono] (pv: physical volume), [mono]lvs[/mono] (lv: logical volume) and [mono]vgs[/mono] (vg: volume group), to see what your logical volumes are made of.
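
To illustrate (my own sketch; the volume names below are the ones visible in your mount output):

[code]
# Physical volumes: /dev/md1 should appear as the only PV, in VG "lvmgrp"
pvs

# Volume group: total and free space in lvmgrp
vgs

# Logical volumes: lvmgrp_root, lvmgrp_var, lvmgrp_home, lvmgrp_swap
lvs

# More detailed, per-device view
pvdisplay /dev/md1
[/code]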

Once that is understood, there is nothing left to worry about!!

/dev/md1 is not a disk and therefore has no partition table.

So there is not necessarily any need to use parted on it.

@etxeberrizahar: thanks for that diagram, it matches exactly what I had sketched out on a scrap of paper!

This is a first for me, and I still have a lot to learn… starting with the tools you suggest.

A software RAID volume can perfectly well be partitioned (even if that is rare). Conversely, a hard drive can be left unpartitioned (even if that is rare too). Both are partitionable, so parted goes looking for a partition table ("disklabel") on each of them. I find parted a bit dumb here: it does not let you specify which device to examine with the -l option. I also find that its virtual "loop" partition tables add to the confusion, since they actually mean that the volume being examined is not partitioned.
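
To round this off (my addition, assuming the layout described above): you can ask blkid or file what actually sits on each md device. md0 should report an ext3 filesystem and md1 an LVM2 physical volume, which is exactly why parted finds no disklabel on md1:

[code]
# Identify the contents of the RAID devices without parted
blkid /dev/md0 /dev/md1

# Alternative: read the signatures straight from the block devices
file -s /dev/md0 /dev/md1
[/code]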