Block Device Issue / Resizing: SOLVED

BigBullShinner

New Member
Joined
Jun 13, 2022
Messages
13
Reaction score
8
Credits
129
Hello! Hopefully I haven't missed this discussion in other posts, but after searching a bit I haven't found any about my specific issue.

My problem is that I have a Block Device that is apparently the home of my filesystem and is 128 GB in size with about 40 GB free. I recently upgraded my internal SSD to a 500 GB one and then restored it from a clone of the 128 GB SSD that was previously installed.

My hope was obviously that there would be more space on the restored SSD, which is why I upgraded. But the Block Device remains at 128 GB and I am unable to use the remaining disk space on my Linux installation.
Is there a way to resize the Block Device to match the SSD size without erasing the installation and reinstalling from scratch?

I am using LVM and LUKS encryption on the internal 500GB SSD.
 


@BigBullShinner welcome to linux.org

I likely won't be the one to assist you further, as I do not use LUKS or LVM.

However, you could tell us some more, namely:

1. What Linux distro and version are you using?
2. What software was used for the cloning and restoring?

Cheers

Chris Turner
wizardfromoz
 
1. What Linux distro and version are you using?
2. What software was used for the cloning and restoring?
Thanks for the welcome!
More info on my setup:
1. I'm on ZorinOS 15.3.
2. I used Clonezilla to create the clone and to restore it. I used Disks to format the SSD before restoring.
3. I used GParted after restoring to grow the partition of the restored image, but the Block Device didn't grow along with it.
 
Can you share the output of the following?
Code:
sudo fdisk -l /dev/sdX
lsblk
Replace the X with your SSD device. It's possible to grow LVM volumes that are on top of a LUKS device; shrinking, however, is not as easy.
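For what it's worth, the usual grow sequence for LVM on top of LUKS, once the underlying partition has been enlarged with GParted or similar, is roughly the following. The mapper name, volume group, and LV name here are placeholders, so substitute the ones from your own `lsblk` output before running anything:

```shell
# Grow the LUKS mapping to fill the enlarged partition
# (sda5_crypt is a placeholder mapper name)
sudo cryptsetup resize sda5_crypt

# Tell LVM that the physical volume got bigger
sudo pvresize /dev/mapper/sda5_crypt

# Grow the logical volume into all remaining free space
# and resize the filesystem on it in one go (-r)
sudo lvextend -l +100%FREE -r /dev/vgname/root
```

Each step only grows things, which is the safe direction; shrinking would have to happen in the reverse order and is much riskier.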
 
Welcome to the forums
I, too, do not use LVM or LUKS, but it appears resizing LUKS partitions is not recommended
I see that it is not recommended, but I'm still considering giving it a try because it may let me avoid doing a fresh install.
 
I can see a fresh install in your immediate future.
 
Can you share the output of the following?
Code:
sudo fdisk -l /dev/sdX
lsblk
Replace the X with your SSD device. It's possible to grow LVM volumes that are on top of a LUKS device; shrinking, however, is not as easy.
Here's the output. I *'d out the disk identifier because I don't know if that is a security risk; let me know if it's needed:

Disk /dev/nvme0n1: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: **********

Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 1499135 1497088 731M 83 Linux
/dev/nvme0n1p2 1501182 949465087 947963906 452G 5 Extended
/dev/nvme0n1p3 949465088 964210687 14745600 7G 82 Linux swap / Solaris
/dev/nvme0n1p5 1501184 949465087 947963904 452G 83 Linux

Partition table entries are not in disk order.

...sorry I forgot the other command. Will post soon.
 
Here is the full output from Terminal:

Disk /dev/nvme0n1: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: **********

Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 1499135 1497088 731M 83 Linux
/dev/nvme0n1p2 1501182 949465087 947963906 452G 5 Extended
/dev/nvme0n1p3 949465088 964210687 14745600 7G 82 Linux swap / Solaris
/dev/nvme0n1p5 1501184 949465087 947963904 452G 83 Linux

Partition table entries are not in disk order.


$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 65.2M 1 loop /snap/gtk-common-themes/1519
loop1 7:1 0 55.5M 1 loop /snap/core18/2344
loop2 7:2 0 254.1M 1 loop /snap/gnome-3-38-2004/106
loop3 7:3 0 255.8M 1 loop /snap/brave/164
loop4 7:4 0 55.5M 1 loop /snap/core18/2409
loop5 7:5 0 81.3M 1 loop /snap/gtk-common-themes/1534
loop6 7:6 0 255.8M 1 loop /snap/brave/163
loop7 7:7 0 162.2M 1 loop /snap/firefox/1406
loop8 7:8 0 113.9M 1 loop /snap/core/13308
loop9 7:9 0 132M 1 loop /snap/chromium/2011
loop10 7:10 0 248.8M 1 loop /snap/gnome-3-38-2004/99
loop11 7:11 0 530M 1 loop /snap/datagrip/140
loop12 7:12 0 61.9M 1 loop /snap/core20/1494
loop13 7:13 0 131.9M 1 loop /snap/chromium/2000
loop14 7:14 0 4K 1 loop /snap/bare/5
loop15 7:15 0 111.7M 1 loop /snap/core/13250
loop16 7:16 0 20.6M 1 loop /snap/keepassxc/1563
loop17 7:17 0 162.3M 1 loop /snap/firefox/1443
loop18 7:18 0 530.7M 1 loop /snap/datagrip/144
loop19 7:19 0 424.2M 1 loop /snap/kde-frameworks-5-91-qt-5-15-3-co
loop20 7:20 0 61.9M 1 loop /snap/core20/1518
loop21 7:21 0 160.8M 1 loop /snap/midori/550
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 384M 0 part /media/******/Swap
└─sda2 8:2 0 462.9G 0 part
└─luks-********-****-****-****-************
253:3 0 462.9G 0 crypt /media/******/*****
nvme0n1 259:0 0 465.8G 0 disk
├─nvme0n1p1 259:1 0 731M 0 part /boot
├─nvme0n1p3 259:3 0 7G 0 part
└─nvme0n1p5 259:4 0 452G 0 part
└─sda5_crypt 253:0 0 452G 0 crypt
├─zorin--vg-root
│ 253:1 0 117.6G 0 lvm /
└─zorin--vg-swap_1
253:2 0 980M 0 lvm [SWAP]
 
I'm assuming this is the device you are talking about?
Code:
Disk /dev/nvme0n1: 465.8 GiB, 500107862016 bytes, 976773168 sectors

nvme0n1 259:0 0 465.8G 0 disk 
├─nvme0n1p1 259:1 0 731M 0 part /boot
├─nvme0n1p3 259:3 0 7G 0 part 
└─nvme0n1p5 259:4 0 452G 0 part 
└─sda5_crypt 253:0 0 452G 0 crypt 
├─zorin--vg-root
│ 253:1 0 117.6G 0 lvm /
└─zorin--vg-swap_1
253:2 0 980M 0 lvm [SWAP]
The system sees your 500G SSD, and currently your logical root volume is 117.6G; this is also where your /home resides. It looks like all of the disk is partitioned, so I suspect there is unallocated space left in your volume group that can be assigned to your logical root volume. Can you share the output of the following?
Code:
vgs zorin
lvs zorin
 
I'm assuming this is the device you are talking about?
Code:
Disk /dev/nvme0n1: 465.8 GiB, 500107862016 bytes, 976773168 sectors

nvme0n1 259:0 0 465.8G 0 disk
├─nvme0n1p1 259:1 0 731M 0 part /boot
├─nvme0n1p3 259:3 0 7G 0 part
└─nvme0n1p5 259:4 0 452G 0 part
└─sda5_crypt 253:0 0 452G 0 crypt
├─zorin--vg-root
│ 253:1 0 117.6G 0 lvm /
└─zorin--vg-swap_1
253:2 0 980M 0 lvm [SWAP]
The system sees your 500G SSD, and currently your logical root volume is 117.6G; this is also where your /home resides. It looks like all of the disk is partitioned, so I suspect there is unallocated space left in your volume group that can be assigned to your logical root volume. Can you share the output of the following?
Code:
vgs zorin
lvs zorin
Yes, that is the device. I only know there is not enough space because when I try to import more virtual machines into VirtualBox it says there is not enough space. Also, in Disks I can select the Block Device and it shows the free space available there.

After running those commands both came out with the same result:

WARNING: Running as a non-root user. Functionality may be unavailable.
/run/lvm/lvmetad.socket: access failed: Permission denied
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
/dev/mapper/control: open failed: Permission denied
Failure to communicate with kernel device-mapper driver.
Incompatible libdevmapper 1.02.145 (2017-11-03) and kernel driver (unknown version).
/run/lock/lvm/V_zorin:aux: open failed: Permission denied
Can't get lock for zorin
Cannot process volume group zorin


It seems like this is getting more and more complicated as we go. I'm leaning pretty heavily towards a fresh install at this point unless someone has an obvious fix. Tinkering around much more might be a poor return on invested time vs. the process of a fresh install.

Thanks for the responses and effort though. Cheers!!
 
My bad, I forgot to add that you need to run them with elevated privileges.
Code:
sudo vgs 
sudo lvs
 
+1 to just rebuilding your OS from scratch. But ... this would be a good way to get more familiar with the various tools at your disposal. And if you screw up, you can just reclone it again.

I never understood the resistance to reinstalling, as it often means the user may not have a good backup plan. A fresh install is often the fastest way to get back up and running, and tends to reduce clutter, aftereffects, and restoration errors.

Another option could be to just partition the remaining space, and mount it somewhere on your root filesystem as "extra" storage.
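That option could look something like the sketch below. The partition number and mount point are examples only (not taken from your system), so double-check them against your own layout before running anything:

```shell
# Make a filesystem on the newly created partition (example device name)
sudo mkfs.ext4 /dev/nvme0n1p6

# Mount it as extra storage
sudo mkdir -p /mnt/extra
sudo mount /dev/nvme0n1p6 /mnt/extra

# Persist it in /etc/fstab, using the UUID rather than the device name
echo "UUID=$(sudo blkid -s UUID -o value /dev/nvme0n1p6) /mnt/extra ext4 defaults 0 2" | sudo tee -a /etc/fstab
```

This sidesteps LUKS and LVM entirely, at the cost of the new space not being encrypted or part of the root filesystem.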
 
+1 to just rebuilding your OS from scratch. But ... this would be a good way to get more familiar with the various tools at your disposal. And if you screw up, you can just reclone it again.

I never understood the resistance to reinstalling, as it often means the user may not have a good backup plan. A fresh install is often the fastest way to get back up and running, and tends to reduce clutter, aftereffects, and restoration errors.

Another option could be to just partition the remaining space, and mount it somewhere on your root filesystem as "extra" storage.
+10 for pragmatism.
OP, check out the phrase "diminishing returns". You've expended all this time and energy trying to solve the problem, posting on this forum, waiting for responses, trying more stuff... You could have been finished and polishing by now. Also, as Slow Coder mentioned, a fresh install will be like housecleaning. What to do (IMO):
1) Ensure you have your original 128 drive and your new 512 drive connected.
2) Using GParted or fdisk from any boot media, partition however much you want to use for /boot, /, /tmp, /var, /home, and swap (if you believe in tradition). I recommend: 512MB /boot, 40-50GB /, 5GB /tmp, 10GB /var, and 8GB swap (the RAM x 2 thing is BS). If you want a rescue partition, use 10GB. If you like round numbers, make /boot 1024MB (1GB).
3) With the liveCD, install your system on the new drive using the partitions you made.
4) After install, don't reboot. Just copy your configs and data across after mounting your 128GB.
5) If you have any extra stuff to install, either chroot into your fresh install and do so from the liveCD (can have caveats) or reboot and press "e" at GRUB and add "single" and "text" to your kernel parameters (text should be implicit in single user mode, but still) so as to boot in non-graphical mode as root. Then you can install all the remaining software without an xsession messing up your configs.
6) Now reboot and there should be about 10 minutes of "housekeeping" for the minor things (like maybe a path to a resource change here and there).
Note that you'll have to adjust if you change your username, so beware that. Rather keep your original name and ren
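Step 4 above (copying configs and data across) can be sketched roughly like this. Every device, mapper, volume group, and path name below is a placeholder, and the old drive is assumed to be LUKS+LVM like the one in this thread, so adjust to what `lsblk` actually shows:

```shell
# Open the old encrypted drive (device and mapper names are placeholders)
sudo cryptsetup open /dev/sdb2 old_crypt

# Activate any LVM volumes found inside it
sudo vgchange -ay

# Mount the old root read-only so nothing on it can be damaged
sudo mkdir -p /mnt/old
sudo mount -o ro /dev/mapper/oldvg-root /mnt/old

# Copy a user's data and dotfiles onto the freshly installed system
# (-aAX preserves permissions, ACLs, and extended attributes)
sudo rsync -aAXv /mnt/old/home/youruser/ /home/youruser/
```

Mounting the old root read-only means a botched copy can simply be re-run; the clone remains untouched.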
 
+1 to just rebuilding your OS from scratch. But ... this would be a good way to get more familiar with the various tools at your disposal. And if you screw up, you can just reclone it again.

I never understood the resistance to reinstalling, as it often means the user may not have a good backup plan. A fresh install is often the fastest way to get back up and running, and tends to reduce clutter, aftereffects, and restoration errors.

Another option could be to just partition the remaining space, and mount it somewhere on your root filesystem as "extra" storage.
I am not really resistant to rebuilding, but am trying to delay it while maximizing what I can do with my current setup (so yeah, probably a bit resistant ;). Getting my setup dialed in has actually taken me quite a long time, and I haven't come close to diminishing returns by checking out my options here.

I will definitely do a fresh install eventually but in the meantime would like to see if I can pull this off. Like you said, it will be a good learning process and at worst I will simply need to do a fresh install.
 
+10 for pragmatism.
OP, check out the phrase "diminishing returns". You've expended all this time and energy trying to solve the problem, posting on this forum, waiting for responses, trying more stuff... You could have been finished and polishing by now. Also, as Slow Coder mentioned, a fresh install will be like housecleaning. What to do (IMO):
1) Ensure you have your original 128 drive and your new 512 drive connected.
2) Using GParted or fdisk from any boot media, partition however much you want to use for /boot, /, /tmp, /var, /home, and swap (if you believe in tradition). I recommend: 512MB /boot, 40-50GB /, 5GB /tmp, 10GB /var, and 8GB swap (the RAM x 2 thing is BS). If you want a rescue partition, use 10GB. If you like round numbers, make /boot 1024MB (1GB).
3) With the liveCD, install your system on the new drive using the partitions you made.
4) After install, don't reboot. Just copy your configs and data across after mounting your 128GB.
5) If you have any extra stuff to install, either chroot into your fresh install and do so from the liveCD (can have caveats) or reboot and press "e" at GRUB and add "single" and "text" to your kernel parameters (text should be implicit in single user mode, but still) so as to boot in non-graphical mode as root. Then you can install all the remaining software without an xsession messing up your configs.
6) Now reboot and there should be about 10 minutes of "housekeeping" for the minor things (like maybe a path to a resource change here and there).
Note that you'll have to adjust if you change your username, so beware that. Rather keep your original name and ren
Wow, thanks for the rundown. I really appreciate the detailed process outline. I am still gun-shy about getting into the custom installation/partitioning process. I tend to get pretty bogged down with new techniques, which custom installs and partitioning still are for me. Even though I have done a bit, I have just been sticking with the basic install option from the liveCD to avoid issues (theoretically). This whole restoring expedition was obviously a bit outside of my wheelhouse.

Thanks again, I will certainly be coming back to this thread when I'm ready to tackle that one!
 
My bad, I forgot to add that you need to run them with elevated privileges.
Code:
sudo vgs
sudo lvs
Here's what I got:
$ sudo vgs
[sudo] password for ******:
VG #PV #LV #SN Attr VSize VFree
zorin-vg 1 2 0 wz--n- <452.02g <333.50g

$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root zorin-vg -wi-ao---- <117.57g
swap_1 zorin-vg -wi-ao---- 980.00m
 
$ sudo vgs
[sudo] password for ******:
VG #PV #LV #SN Attr VSize VFree
zorin-vg 1 2 0 wz--n- <452.02g <333.50g

$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root zorin-vg -wi-ao---- <117.57g
swap_1 zorin-vg -wi-ao---- 980.00m
Looks like your root volume currently has a size of 117.57G, which is close to the size of your old disk, and your volume group has 333.50G free. 117.57 + 333.50 = 451.07, which is close to the size of your new disk. If you want to expand your logical root volume you could do the following.
Code:
sudo lvextend -l +100%FREE -r /dev/zorin-vg/root
What this will do is grow the root LV by all the free space remaining in the volume group and then resize the filesystem on it. (Note: vgs reports VFree as "<333.50g", i.e. slightly under 333.50G, so asking for exactly +333.50G would fail; +100%FREE avoids that.)
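After the extend finishes, a quick sanity check (assuming the zorin-vg naming from the earlier output, and that / is the filesystem that was grown) would be:

```shell
# Confirm the logical volume now spans the extra space
sudo lvs zorin-vg

# Confirm the root filesystem itself actually grew
df -h /
```

If lvs shows the larger LSize but df still reports the old size, the filesystem resize step was skipped and can be re-run separately.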
 
