Strange case with sdb

zapeador

Hello, let's see if you can help me with this strange case; I will try to explain it as well as possible.

I have a main machine called "machine 1" that, among other functions, acts as a virtual machine server. It has 2 disks: an sda, which is where the virtual machines and other things live, and whose 3rd partition (sda3) is / and is 100G; and an sdb1, which on "machine1" is mounted at /mnt/bak and is 20 TB. That same disk is mounted on "machine 2" at /mnt/data.

The problem is that when I create a file on "machine1" under /mnt/bak, SOMETIMES it is created but takes several minutes to appear, and other times it is not created at all.

I looked at the UUID of the disk and it's the same... this issue drives me crazy.
 


What filesystem are you using on /dev/sdb1 that is mounted on machine1 and machine2? Normal filesystems (such as ext4, xfs, btrfs, etc.) don't support being mounted on multiple systems at once; you would need to use a filesystem that supports that, or you would need to use a network share.
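
For reference, a quick way to check on each host (assuming the device node is /dev/sdb1 on both) is:

# show the filesystem type, UUID and mount point of the partition
lsblk -f /dev/sdb1
# or
blkid /dev/sdb1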
 
This still means that the disk with its filesystem can only be mounted on one system/location. If you want to share the data of system1 (/mnt/bak) with system2 in your current setup, you are going to have to create an NFS server that shares the location (/mnt/bak) of system1 with system2; then you can mount the shared location as a network share on system2.
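
A minimal sketch of that NFS setup, assuming system2 is reachable at 10.70.9.61 (a made-up address; substitute your real host or subnet):

On system1 (the NFS server), with nfs-kernel-server installed:

# /etc/exports -- export /mnt/bak read/write to system2
/mnt/bak 10.70.9.61(rw,sync,no_subtree_check)

# reload the export table
exportfs -ra

On system2 (the client):

# one-off mount; "system1" must resolve to the server, otherwise use its IP
mount -t nfs system1:/mnt/bak /mnt/data

# or make it permanent via /etc/fstab:
# system1:/mnt/bak  /mnt/data  nfs  defaults  0  0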
 
Here are the df -h outputs from the 2 machines; as far as I can see, even the usage percentages don't match. Below each one I've put the UUID of the disk, and you can see they are exactly the same.


machine 1

Filesystem          Size  Used Avail Use% Mounted on
/dev/sda2            94G   14G   76G  16% /
/dev/sdb1            21T   12T  8,8T  56% /mnt/back
10.70.9.60:/backups 1,8T  1,4T  331G  81% /maquina_bakups

blkid
/dev/sdb1: UUID="17799b02-1134-4102-9c86-1ea11adde8f3" TYPE="ext4" PARTUUID="a7017e92-63d4-f448-9c3c-b5b8a88475b7"


machine 2

Filesystem Size  Used Avail Use% Mounted on
/dev/sda3  490G  140G  325G  31% /
/dev/sdb1   21T  7,7T   13T  39% /mnt/datos
/dev/sda2  473M   86M  363M  20% /boot
/dev/sda1  511M  3,3M  508M   1% /boot/efi


blkid
/dev/sdb1: UUID="17799b02-1134-4102-9c86-1ea11adde8f3" TYPE="ext4" PARTUUID="a7017e92-63d4-f448-9c3c-b5b8a88475b7"


This issue really drives me crazy.
 
Following that guide from the Proxmox environment, I mounted the same FS on the 2 machines, and both are read/write.

Proxmox uses fencing. If you don't use something like a clustered filesystem, or fencing, you can get read/write conflicts. It might work, but it's pretty risky.
 
Following that guide from the Proxmox environment, I mounted the same FS on the 2 machines, and both are read/write.
It's still not supported by the filesystem you are using; all you did was pass a physical disk through and mount it on two VMs. For normal filesystems like xfs and ext4 that isn't supported: the filesystem contains a journal, and it can become corrupt if two systems write to it at once, or other strange things can happen. Maybe that's why it appears with different usage on each VM. The point is that what you are doing is not supported by the filesystem you are using; you would need to use a distributed filesystem such as GlusterFS, and I think ZFS can handle that too.
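
For illustration, a minimal GlusterFS sketch; the hostnames gluster1/gluster2 and the brick path /data/brick1/gv0 are made-up, and a real deployment needs more planning:

# from gluster1, with glusterd running on both hosts
gluster peer probe gluster2

# create a 2-way replicated volume with one brick per host
# (gluster warns that replica 2 is prone to split-brain; replica 3 or an arbiter is safer)
gluster volume create gv0 replica 2 gluster1:/data/brick1/gv0 gluster2:/data/brick1/gv0
gluster volume start gv0

# mount the volume on each machine; the volume, not the raw disk, is what both hosts share
mount -t glusterfs gluster1:/gv0 /mnt/bak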
 
Proxmox uses fencing. If you don't use something like a clustered filesystem, or fencing, you can get read/write conflicts. It might work, but it's pretty risky.
Only if you have set up multiple Proxmox nodes and want to migrate VMs across hosts; and VMs with passed-through disks can't be migrated to another host.
 
. . . and I think ZFS can handle that too.
ZFS could handle a significant portion of the issues I see raised in this and other forums. It eliminates the need for Timeshift, for one thing, and solves many Linux RAID concerns as well. If you're using a 20.04 'buntu-derived distro, why not?
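
On the Timeshift point, a short sketch of what ZFS snapshots give you; the dataset name rpool/home is hypothetical:

# take a cheap, instant snapshot before making changes
zfs snapshot rpool/home@before-upgrade

# list existing snapshots
zfs list -t snapshot

# roll the dataset back if something goes wrong
zfs rollback rpool/home@before-upgrade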
 
