New disk viewed by mdadm as a spare

mmmax

New Member
Hi,

I have a software RAID 10 on my server. One disk failed, so I replaced it and resynced it using the mdadm command.

At the beginning:

Code:
:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 md1[0] md2[1]
      11720778752 blocks super 1.2 512k chunks

md2 : active raid1 sdd1[2](S) sdc1[0]
      5860389696 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sda1[0] sdb1[1]
      5860389696 blocks super 1.2 [2/2] [UU]

Then I added "sdd1":
Code:
sudo mdadm --add /dev/md2 /dev/sdd1
The disk began to sync:
Code:
md2 : active raid1 sdd1[2] sdc1[0]
      5860389696 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  4.7% (276815616/5860389696) finish=512.4min speed=181595K/sec
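A handy way to follow the rebuild while it runs (read-only, and assuming the watch utility is installed):
Code:
watch -n 10 cat /proc/mdstat
sudo mdadm --detail /dev/md2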
But there is a problem: after the sync, the newly synced disk is still viewed as a spare:
Code:
Number   Major   Minor   RaidDevice State      
       0       8       33       0      active sync   /dev/sdc1
       1       0        0        1      removed


       2       8       49        -      spare   /dev/sdd1
How can I change the disk from "spare" to "active"?

I've read a lot of old topics about this, but found no real answers.
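For reference, a read-only way to check how md classifies the member and how many slots the array currently expects (none of this writes anything to the disks):
Code:
sudo mdadm --detail /dev/md2
sudo mdadm --examine /dev/sdd1
cat /sys/block/md2/md/raid_disks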

Thanks
Max
 


mmmax

New Member
I tried setting --raid-devices to 2, then to 3.

With 2:
Code:
mdadm --grow --raid-devices=2 /dev/md2
raid_disks for /dev/md2 set to 2
Code:
 Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       0        0        1      removed

       2       8       49        -      spare   /dev/sdd1
With 3:
Code:
mdadm --grow --raid-devices=3 /dev/md2
raid_disks for /dev/md2 set to 3
Code:
Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       0        0        1      removed
       2       0        0        2      removed

       2       8       49        -      spare   /dev/sdd1
Is there any risk in setting --raid-devices=1 --force and then immediately setting it back to --raid-devices=2?
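For reference, a sketch of the remove / zero-superblock / re-add cycle that is sometimes tried before resorting to --force (this assumes /dev/sdd1 holds nothing that still needs to be kept, because clearing the superblock discards its md metadata):
Code:
sudo mdadm /dev/md2 --remove /dev/sdd1
sudo mdadm --zero-superblock /dev/sdd1
sudo mdadm /dev/md2 --add /dev/sdd1
cat /proc/mdstat    # the rebuild should start again from 0%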
 

Lord Boltar

Active Member
I personally would not use --force; it may cause issues. You may need to just create a new array - see below. I think you need 4 disks for mdadm to work correctly on RAID 10, but I might be wrong about that.
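As a point of comparison only (the syntax below is destructive and is not meant to be run on the existing array): a native mdadm RAID 10 built from four members in a single array would look like this sketch, whereas the layout shown above is a RAID 0 (md0) layered over two RAID 1 mirrors (md1 and md2):
Code:
# example only - creating a native 4-disk mdadm RAID 10 would wipe the members
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1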

 