16+ USB-to-SATA drives, low power management, UUID & file caching/freezing, + GPIO???

1branchonthevine

I am a bit of a noob when it comes to Linux development, and I am sensing a bigger project than I intended... but here goes. I am trying to set up an ultra-low-power system with 16-24+ SATA drives connected via USB. Think of it as a drive-archival NAS, where I would not be pulling out a drive by hand to connect it; instead there would be a stack of addressable drives, kind of like one of those old Sony 400-disc DVD/BD changers, but in this case with no mechanical/robotic features, only electrical switching.

Anyway, at least in the beginning, I only expect 1-2 drives (of many) to ever be accessed or powered on at a time, to keep overall power requirements down. The drives would be set up as JBOD (no RAID sets in mind just yet), and all the other drives must go into a low-power or, better yet, powered-off state.
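
For drives whose USB bridges do pass ATA power commands through, I could at least try spinning the idle ones down before cutting power entirely. A rough sketch of what I mean, in Python (the device names are just examples, and as I understand it hdparm -y is the "standby now" command):

# sketch: try to spin down idle drives with hdparm (-y = standby immediately);
# many USB-SATA bridges will not pass this ATA command through, so check the result
import subprocess

IDLE_DRIVES = ["/dev/sdb", "/dev/sdc"]  # example device nodes, not real config

for dev in IDLE_DRIVES:
    result = subprocess.run(["hdparm", "-y", dev], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"{dev}: bridge refused standby, would fall back to GPIO power-off")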

I realize USB-to-SATA bridges often have limitations on SMART and drive power management, so I may have to use GPIO addressing to switch off the physical power to each unused drive, and possibly to certain USB ports/hubs.
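
For the GPIO side, I was picturing something as simple as a relay or load switch per drive bay, driven from a GPIO line. A minimal sketch using the sysfs GPIO interface (the pin number is made up, and I know sysfs GPIO is deprecated in favor of libgpiod, but it shows the idea):

# sketch: power a drive bay on/off via a relay wired to a GPIO line (sysfs interface, needs root)
import os, time

GPIO_BASE = "/sys/class/gpio"

def set_gpio(pin: int, value: int) -> None:
    pin_dir = f"{GPIO_BASE}/gpio{pin}"
    if not os.path.isdir(pin_dir):
        with open(f"{GPIO_BASE}/export", "w") as f:
            f.write(str(pin))
        time.sleep(0.1)  # give the kernel a moment to create the pin node
    with open(f"{pin_dir}/direction", "w") as f:
        f.write("out")
    with open(f"{pin_dir}/value", "w") as f:
        f.write(str(value))

set_gpio(17, 1)   # pin 17 is a made-up example; it would drive the relay for bay 1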

I am shooting for roughly 10 watts in standby, with essentially indefinite drive scalability. Since only 1-2 of the many drives will ever be in use at a time, simply using 16-24 port USB hubs, or designing my own physical USB signal addressing/routing backplane connected to two or more of the mainboard's USB ports, should be more than sufficient. If I have to design a new USB-to-SATA PCB interface I will, but maybe I don't have to?
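
On the hub side, if I end up with hubs that actually support per-port power switching, something like uhubctl could cut power to individual ports instead of (or on top of) the relays. A sketch of what I imagine, with placeholder hub location and port numbers:

# sketch: toggle power on a specific hub port with uhubctl
# (only works on hubs that implement per-port power switching)
import subprocess

def set_port_power(hub_location: str, port: int, on: bool) -> None:
    action = "on" if on else "off"
    subprocess.run(["uhubctl", "-l", hub_location, "-p", str(port), "-a", action], check=True)

set_port_power("1-1", 2, False)   # "1-1" and port 2 are placeholder values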

PCI would be nice, but SATA controllers are too power hungry, and I am not sure my system could handle hot-plugging or powering PCI host controller devices off/on on the fly.

The part I am probably overthinking, and hope for some direction on, is how to get the OS to cache the UUIDs, drive parameters, and file names, so that when a drive or hub is electrically disconnected the OS doesn't lose the drive and have to re-poll it each time it reconnects. Ideally the OS would always be able to show every drive and its file/directory listing, regardless of whether the drive is connected or not. Hopefully this cache would also persist across reboots, but I could compromise on that.
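
What I was imagining for the caching is something like snapshotting each drive's UUID and file list into a small catalog while it is mounted, keyed by UUID so the entry survives power-offs and reboots. A rough sketch (the paths are just examples):

# sketch: snapshot a mounted drive's UUID and file list into a JSON catalog
# so the listing survives the drive being powered off (and reboots)
import json, os, subprocess

def catalog_drive(device: str, mountpoint: str, catalog_path: str) -> None:
    uuid = subprocess.run(["blkid", "-s", "UUID", "-o", "value", device],
                          capture_output=True, text=True, check=True).stdout.strip()
    files = []
    for root, _dirs, names in os.walk(mountpoint):
        for name in names:
            files.append(os.path.relpath(os.path.join(root, name), mountpoint))
    entry = {"device": device, "uuid": uuid, "files": files}
    os.makedirs(catalog_path, exist_ok=True)
    # one JSON file per drive, keyed by UUID so reconnects map back to the same entry
    with open(os.path.join(catalog_path, f"{uuid}.json"), "w") as f:
        json.dump(entry, f, indent=2)

catalog_drive("/dev/sdb1", "/mnt/archive01", "/var/lib/drive-catalog")  # example paths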

To do the caching, I have considered making compressed dd images (squashfs maybe) of the drives, but replacing the folders and files in the image with symlinks/placeholders, then mounting that "false" image as the representative drive. I haven't quite resolved how I would tie an access on the false drive to the GPIO that powers up the real drive/USB port/hub, and I presume the added latency of waiting for the system to poll and remount the drive would also cause timeout errors.
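
As an alternative to full dd images, I also wondered about just mirroring each drive into a tree of zero-length placeholder files and packing that with mksquashfs, so the "false" image stays tiny. Roughly like this (paths again are examples):

# sketch: build a tree of zero-length placeholder files mirroring a real drive,
# then pack it with mksquashfs so the "false" drive image stays tiny
import os, subprocess

def build_placeholder_image(mountpoint: str, workdir: str, image_path: str) -> None:
    for root, _dirs, names in os.walk(mountpoint):
        target_dir = os.path.join(workdir, os.path.relpath(root, mountpoint))
        os.makedirs(target_dir, exist_ok=True)
        for name in names:
            # create an empty stand-in; the real content stays on the powered-off drive
            open(os.path.join(target_dir, name), "w").close()
    subprocess.run(["mksquashfs", workdir, image_path, "-noappend"], check=True)

build_placeholder_image("/mnt/archive01", "/tmp/archive01-shadow", "/var/lib/shadows/archive01.sqsh")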

Again, maybe I am overthinking all of this and there is an easier way already built into the kernel/OS, possibly a way to freeze the drive state, like an invisible unmount/disconnect. Secondly, the developer-noob comment comes into play more on this GPIO-to-drive-power/USB interface: can this all simply be scripted, or is it more along the lines of a driver or kernel module?
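
If it can all be done with userspace scripting, the on-demand flow I picture is: flip the GPIO, wait for the kernel to enumerate the disk and the UUID symlink to appear, then mount it. A sketch, reusing the hypothetical set_gpio() helper from the GPIO example above:

# sketch of the on-demand flow: power the bay, wait for the kernel to enumerate
# the disk, then mount it by UUID (all plain userspace, run as root)
import os, subprocess, time

def mount_on_demand(gpio_pin: int, uuid: str, mountpoint: str, timeout: int = 30) -> bool:
    set_gpio(gpio_pin, 1)                      # hypothetical helper from the GPIO sketch above
    dev_link = f"/dev/disk/by-uuid/{uuid}"
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(dev_link):
            os.makedirs(mountpoint, exist_ok=True)
            subprocess.run(["mount", dev_link, mountpoint], check=True)
            return True
        time.sleep(1)                          # USB enumeration + spin-up takes a few seconds
    return False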

This drive-state "freezing" approach would be more ideal than mounting false drive images, since I do not want "mirrored" duplicate drives popping up each time a drive is turned back on (mirrored being the false mounted image plus the real mounted drive). To put it another way, if I have 24 drives, I only ever want to see 24 drives listed in the OS, even when only 2 of the 24 are physically connected; not 24 + 2.
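
Building on the catalog idea above, the listing I want could then come from the catalog rather than from whatever is currently mounted, with a flag for whether each drive is physically present right now. Something like:

# sketch: list every cataloged drive exactly once, flagging which ones are
# physically present right now (so 24 entries total, never 24 + 2)
import glob, json, os

def list_catalog(catalog_path: str) -> None:
    for path in sorted(glob.glob(os.path.join(catalog_path, "*.json"))):
        with open(path) as f:
            entry = json.load(f)
        present = os.path.exists(f"/dev/disk/by-uuid/{entry['uuid']}")
        state = "connected" if present else "powered off"
        print(f"{entry['uuid']}  {state}  ({len(entry['files'])} files cataloged)")

list_catalog("/var/lib/drive-catalog")   # same hypothetical catalog dir as the earlier sketch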

My ultimate goal is to repurpose a stockpile of older 120 GB to 2 TB drives, but I absolutely cannot justify leaving them all connected and running 24/7 in multiple 1000-watt drive servers. Eventually, FreeNAS/TrueNAS integration would be another goal.

Thanks a ton!
 