[SELF-SOLVED] Loading initramdisk became slower after adding new HDD

rado84

As of today I have 3x HDD and 2x SSD, with the 500GB HDD being reattached today ("reattached" because I removed it 2 years ago and stashed it away; now I found it and put it back into the computer). Until I attached this HDD, booting the OS always showed something called "loading initramdisk", which so far had always taken about half a second before proceeding to loading Linux. But after attaching said HDD, this initramdisk loading step became 5-6 seconds longer. I suspect it has something to do with this HDD, but I'm not an expert and I have no idea why it's happening, so if you do, please share.
I ran a HDD test and the disk is in perfect health - not a single physical or logical bad sector. Considering this HDD was only used for about 6 months before being stashed, that's not surprising, and even before I ran the test I suspected it would be OK. Which leaves me confused as to why the loading of initramdisk became slower.
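For anyone who wants to run a similar health check, something like this with smartmontools would do it (the device name /dev/sdb is only an example, adjust for the actual disk):
Code:
# quick overall health verdict from SMART
sudo smartctl -H /dev/sdb
# full SMART attributes; Reallocated_Sector_Ct and Current_Pending_Sector are the interesting ones
sudo smartctl -a /dev/sdb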

I did some googling for this problem but found nothing conclusive. In one topic on Reddit people asked the OP what partitions and filesystems he had, so I'm answering that question here in case you need that info:
1x ext4 where Linux is.
5 more NTFS partitions, one of which is Windows 7. The rest are just for storage.
 


I would speculate that when you added the extra disks to your system, the processes called by your initramdisk are simply taking longer to run because of the extra demand on your system. There are processes for premounting, mounting, setting filesystem info and lots more, which are done for each disk. In my case, on Debian, it's possible to see what's in the initramfs (as distinct from your initramdisk) with the command:
Code:
 lsinitramfs -l /boot/<RamDiskImage>
There are a multitude of processes that are associated with settings for the extra disks in there on my system.
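On a typical Debian setup the image name follows the running kernel, so (assuming the standard /boot/initrd.img-* naming) the call would look something like this:
Code:
# list the contents of the initramfs built for the currently running kernel
lsinitramfs -l /boot/initrd.img-$(uname -r)
# narrow it down to the storage-related bits, e.g. ata/ahci modules and mount scripts
lsinitramfs /boot/initrd.img-$(uname -r) | grep -Ei 'ata|ahci|mount'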

You could also consider running:
Code:
systemd-analyze
to check the actual times processes are taking to boot up since there may be something in there that is informative.
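If the one-line summary isn't enough, the blame and critical-chain subcommands break the time down per unit, which should make it clear whether the extra seconds are spent in the initrd stage or later:
Code:
# total boot time split into firmware / loader / kernel / initrd / userspace
systemd-analyze
# per-unit startup times, slowest first
systemd-analyze blame
# the chain of units the boot actually waited on
systemd-analyze critical-chain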
 
Last night I asked a friend about this problem and he said the 4TB and 500GB hard disks are fighting for the CPU's attention and that I have to change the IRQ timers. I found some IRQ-related software in the repos, but since I have no idea which one I'd need, I left that for another time and did the only thing I could do - changed some settings in UEFI: disabled the serial port and changed the IRQ setting to this:

[Attached screenshot: IRQ-timers-UEFI.jpg - the IRQ timer settings in UEFI]


After this change the loading of initramdisk got a lot closer to what it used to be.

Now, if anyone can tell me which of these programs offers manual changing of IRQ timers, or at least sets them automatically so they don't conflict, that would be great:

Code:
[rado@arch]: ~>$ search irq
extra/irqbalance 1.8.0-1
    IRQ balancing daemon for SMP systems
community/rtirq 20210329-1 [realtime]
    Realtime IRQ thread system tuning.
community/tuna 1:0.14.1-3 [realtime]
    Thread and IRQ affinity setting GUI and cmd line tool
aur/irq-tools 0.1-1 [1+] [0.00%] [20 Nov 2015]
    irq-tune for set smp affinity and irqstat for better watch /proc/interrupts, designet for NUMA systems
aur/irqbalance-git 1.8.0.r0.g99ae256-1 [2+] [0.00%] [15 Apr 2021]
    A daemon to help balance the CPU load generated by interrupts across all of a systems CPUs
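For what it's worth, IRQs can also be inspected and pinned by hand through /proc without any of those packages - the IRQ number and CPU mask below are made-up examples for illustration:
Code:
# see which interrupts the SATA/AHCI controller generates and on which CPUs they land
grep -i ahci /proc/interrupts
# pin IRQ 27 (example number) to CPU 0 by writing a CPU bitmask;
# this is roughly what irqbalance automates for all IRQs
echo 1 | sudo tee /proc/irq/27/smp_affinity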
 
I fixed this and you wouldn't believe what the cause was! Apparently, in UEFI, for unknown reasons, it has been set to load a HDD driver for that SSD instead of an SSD driver. And now, with the last HDD added (all 6 SATA ports are now occupied), that has had a visible impact on performance. So I carefully checked that each device had the proper driver loaded for it, depending on its type, and now initramdisk has returned to its normal loading times.
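A quick way to double-check from the Linux side whether each disk is treated as rotational (HDD) or non-rotational (SSD) - device names here are just examples:
Code:
# ROTA = 1 means the kernel treats the disk as rotational (HDD), 0 as an SSD
lsblk -d -o NAME,ROTA,MODEL
# the same flag per device, e.g. for sda
cat /sys/block/sda/queue/rotational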
 
I fixed this and you wouldn't believe what the cause was! Apparently, in UEFI, for unknown reasons, it has been set to load a HDD driver for that SSD instead of an SSD driver.
Can you share exactly how you did that? I have never had to do that for an SSD or HDD; it would be interesting to hear, both to satisfy my curiosity and for others who may come across this topic.
 
rado84 wrote:
Apparently, in UEFI, for unknown reasons, it has been set to load a HDD driver for that SSD instead of an SSD driver.
Congratulations on solving this problem. I am also very interested in how it was done.
 
I thought it was clear since I mentioned UEFI, but OK - here's how.



Apparently, for unknown reasons, this SSD had "HDD" chosen as its type, which would make the motherboard load the wrong driver for it. I didn't choose that myself, that's for sure, because if I had, I wouldn't be having this problem. It's possible the motherboard detected this SSD as a HDD. While the number of drives was 3 or 4, there was no obvious impact on performance. But recently I bought a 4TB HDD, and soon after that I found the old 500GB HDD where I had stashed it ("old" as in "the previous one", not old in age), and increasing the number of drives had a visible impact on performance. I don't know much about initramdisk, but from what I've read, initramdisk loads the appropriate drivers based on what drivers the motherboard has loaded. At least I don't see another explanation, since the performance issue was fixed once I chose the correct type for all 5 storage devices (the 6th SATA port is the BR writer).
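For the record, the contents of the initramdisk on Arch can be listed with lsinitcpio; assuming the default /boot/initramfs-linux.img image, something like this shows which storage modules ended up in it:
Code:
# analyze the generated image: included modules, hooks and size
lsinitcpio -a /boot/initramfs-linux.img
# or just grep the file list for storage-related modules
lsinitcpio /boot/initramfs-linux.img | grep -Ei 'ata|ahci|nvme'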
 
I thought it was clear since I mentioned UEFI, but OK - here's how.
When I hear "drivers" I think of drivers in the OS; I'd never heard of being able to change drivers in UEFI before, which is why I was wondering what exactly you changed in your UEFI. Thanks for sharing!
 
rado84 wrote:
Apparently, for unknown reasons, this SSD had "HDD" chosen as its type, which would make the motherboard load the wrong driver for it.
Thanks for the info. As I understand it, the UEFI BIOS hands over to the Linux kernel, and then it's the kernel that does its own probing and enables the appropriate drivers. It doesn't really rely on the UEFI BIOS for drivers, or for much at all, as I understand it. The relevant ata drivers for the SSD are in the initramfs/initramdisk. I suspect and speculate that what exists on the motherboard is firmware rather than drivers. I have no idea how that could confuse a kernel, but I guess it's imaginable. Above my pay grade.
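One way to see what the kernel actually picked, independent of anything the UEFI setup screen claims - just a sketch, the output details vary by machine:
Code:
# show each SATA controller and the kernel driver bound to it (usually ahci)
lspci -k | grep -iA3 sata
# confirm the ahci module is loaded (libata itself is often built into the kernel)
lsmod | grep -Ei 'ahci|libata'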
 
What I said about drivers from the motherboard was merely a guess, because I don't know what else to call it. It's above my pay grade too. :)
 

