10th NVMe drive not showing up in O/S installation

Bashed · New Member · Joined Nov 2, 2020
Trying to install AlmaLinux 8 on a Dell R640 equipped with 10 x 7.68TB NVMe U.2 drives. All 10 show up in the iDRAC storage inventory, so they're recognized, but only 9 show up in the AlmaLinux installer (via IPMI). I have replicated this issue with AlmaLinux 9, Ubuntu 20, and CentOS Stream 8 as well. I have also power-cycled a few times. Something is off.

Curious why and if there's a fix?
 


Cross-posting is not how to get your question answered.
 
No, it's double-posting and entirely unnecessary on a site this small. Pretty much everyone will have read your post. If they knew the answer, they'd have replied.

However, it's easy enough to deal with. I locked your previous post, leaving you with this newer post. The link in the 1st reply will enable people to view that content as well. So, it's nothing major.

Next time, after a reasonable amount of time has passed, you can bump the thread. And, where it makes sense, you can also add a reply to your own thread with new information.
 
Hi... NVMe drives run on the PCI-e bus, which is one of the reasons they can achieve such high speeds; however, there are a finite number of lanes that can be routed. Since you did not divulge exactly which CPU you have, I would speculate that you do not have enough PCI-e lanes to service all the NVMe drives (for optimal performance, each drive wants x4, so 10 drives would require the system to support 40 lanes).
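A quick way to check this from the machine itself (assuming you can reach a shell from the installer, usually Ctrl+Alt+F2, or boot a live USB) is to compare what the PCI-e bus sees against what the NVMe driver actually registered:

    # NVMe controllers visible on the PCI-e bus
    lspci | grep -i 'non-volatile'

    # Block devices the NVMe driver actually created
    ls /dev/nvme*

    # Kernel messages about a controller that failed to come up
    dmesg | grep -i nvme

    # Negotiated link width per controller (x4 expected, x2 if lanes were halved)
    for d in /sys/class/nvme/nvme*/device; do
        echo "$d  width=$(cat $d/current_link_width)  speed=$(cat $d/current_link_speed)"
    done

If lspci shows 10 controllers but /dev only has 9, the kernel sees the device and the driver couldn't initialize it, and dmesg should say why. If lspci itself only shows 9, the drive never made it onto the bus, which points at BIOS/backplane/lane allocation rather than the O/S.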

With fewer lanes than that, the chipset can compensate, but it leads to performance loss and communication issues between your software and hardware. Also, a BIOS that is outdated to the point where it no longer fully supports your CPU may leave the system unable to tap into all the PCI-e lanes, even if you do have a sufficient number of them.

Here are a few things you can do that might help:
  • Check your BIOS version and cross-reference it with the data on the manufacturer's website to see if it fully supports your CPU.
  • Check the S.M.A.R.T. data from each individual NVMe to investigate drive health (some commands for this and the BIOS check are sketched after this list).
  • Check if your system has sufficient PCI-e lanes available for all your devices.
  • If you use a PCI-e extension card that lets the CPU tap directly into the drives on that card without intermediate chipsets, then you must enable PCI-e bifurcation in the BIOS menu to give each drive its own individual pathway to the CPU.
  • Ultimately, if you do not have sufficient PCI-e lanes, you can reduce the available lanes per device from x4 to x2 and regrettably take the performance hit, if storage space is that important to you. To my knowledge, there is no in-between setting.
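To put commands to the first two bullets (a sketch, assuming a rescue shell or live environment where the dmidecode, nvme-cli and smartmontools packages are available):

    # BIOS/firmware version, to cross-reference against Dell's support page for the R640
    dmidecode -s bios-version

    # List all namespaces the NVMe driver knows about
    nvme list

    # Health/S.M.A.R.T.-style log for one controller (repeat per drive)
    nvme smart-log /dev/nvme0

    # Or the same via smartmontools
    smartctl -a /dev/nvme0

If the missing drive never appears in nvme list, the problem is upstream of the drive itself (lanes, BIOS, backplane) rather than drive health.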
Please note that this is a bit of a supposition, based on the available data you've given me; I would really need to get my hands on the machine. On the other hand, I have a bit of a reputation for being a cruel and merciless overlord, so if you were to ship it to me, the first thing it would do after you powered it on would be to apologize and promise never to break down again, just so you wouldn't ever have to send it back to me! :D

But in all seriousness, I only do local work now, since I've got my hands full with my budding welding career.
 
