Help with GParted -- no swap file after standard installation.

The hard disk is, I think, a SanDisk, but I can't tell the model number. It's a 2.5-inch SATA drive, if that makes any difference.

I ran the lsblk command and this is what I got:

Code:
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda               8:0    0 931.5G  0 disk
├─sda1            8:1    0   1.2G  0 part  /boot
└─sda2            8:2    0 930.3G  0 part
  ├─data-root   252:0    0 926.5G  0 lvm   /
  └─data-swap   252:1    0   3.8G  0 lvm
    └─cryptswap 252:2    0   3.8G  0 crypt [SWAP]
sdb               8:16   1  29.8G  0 disk
└─sdb1            8:17   1  29.8G  0 part  /media/christopherlaurenzano/SANDISK
sr0              11:0    1  1024M  0 rom


I don't know if this helps, but maybe it does. I got the computer from my mother, who didn't need it anymore; she had a new SSD installed after the old one gave out. I'm not sure if the place that installed it would have a record, but I can try to find out and get back to you.
You'll have to power the system down and remove your 9-year-old SSD so you can read the label or printing; then we'll know its make and model. :-)
 


The hard disk is, I think, a SanDisk, but I can't tell the model number. It's a 2.5-inch SATA drive, if that makes any difference.

<snip>
To get the disk model and manufacturer from software you can run the following command as root:
Code:
fdisk -l
In the output there's a line labelled "Disk model". Sometimes one can identify the exact disk from this alone, but if not, enter the alphanumeric string into an online search engine and it should identify the exact model. There's usually no need to rummage around inside the box or laptop.

An alternative for finding the model and manufacturer without running as root is to look at the entries for the disk in the directory /dev/disk/by-id and extract the alphanumeric string associated with the disk, which again can be entered into a search engine to find the exact make and model. This method also shows the serial number, which one may wish to keep private.
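For example, something like the following lists those entries; the names on your machine will differ, and the grep is only there to hide the per-partition links:
Code:
ls -l /dev/disk/by-id/ | grep -v part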

On the original issue, the output of the lsblk command in post #20 shows a SWAP partition of 3.8G.
 
 
NAME       TYPE      SIZE USED PRIO
/dev/dm-2  partition 3.8G   0B   -2

This means you're using LVM (LVM2).
You don't have a physical swap partition; you have a logical volume for swap.
Programs like GParted are fine for physical disks and physical partitions, but they don't know anything about logical volumes.
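In case it helps, here's a minimal sketch of the LVM command-line tools you'd use instead of GParted. The volume group name "data" is inferred from the lsblk output above, and the lvextend line is only an illustration of what a resize would look like, not something to run as-is:
Code:
# Show the LVM stack that GParted can't see
sudo pvs    # physical volumes backing the volume group
sudo vgs    # volume groups and their free space
sudo lvs    # logical volumes, including data/swap
# A swap LV would be resized with the LVM tools, e.g. (illustrative only):
# sudo lvextend -L +2G /dev/data/swap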

What is the output of..
Code:
df -h

Code:
sudo lvs

Code:
sudo swapon --show

Code:
sudo lvdisplay

The output from these commands should show your swap partition (if you are using one).
 
Most Linux distros now create a swap file of around 2GB when you install them, and you don't normally need to touch it.

Back in the old days of 32-bit systems, you were limited to a small amount of RAM, which meant you needed a swap partition the same size as the RAM installed.

Those days are long gone... today swap is rarely needed because you can add heaps of RAM...
[Screenshot: system monitor showing 16GB of RAM and a 2GB swap file]


As you can see, I have 16GB of RAM and a 2GB swap file that's not used and will never be used, because I can add another 16GB of RAM if I like. The more you multitask, the more RAM you need; more is always better.
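If you prefer to check the same thing from a terminal instead of the system monitor, something like this shows RAM and swap usage (output will obviously differ per machine):
Code:
free -h          # total, used and free RAM plus swap
swapon --show    # which swap devices/files are active and how full they are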
 
Here's some details that explain the issue of swap usage on older SSDs.

The Core Problem: Limited Write Endurance

  • How SSDs Work (Simplified):
    • SSDs store data in flash memory cells.
    • Writing data to these cells causes wear and tear.
    • Each cell has a limited number of write cycles (P/E cycles) before it becomes unreliable.
  • Older SSDs and Write Endurance:
    • Early SSDs had significantly lower write endurance compared to modern drives.
    • They also lacked sophisticated wear-leveling algorithms.
    • Wear leveling is a crucial technology that distributes write operations evenly across all memory cells, preventing any single cell from wearing out too quickly.
  • Swap's Heavy Write Load:
    • Swap space (whether a partition or a file) is used by the operating system to move less frequently used data from RAM to the storage drive.
    • This process involves frequent read and write operations, especially when RAM is limited.
    • On older SSDs, this constant writing can rapidly consume the available write cycles, leading to premature failure.
Why Swap is Particularly Damaging to Older SSDs:
  • Constant Activity:
    • Swap is not just used occasionally. In systems with limited RAM, it can be accessed very frequently.
    • This makes it a source of constant writes.
  • Random Writes:
Swap files and partitions are written to and read from in a very random pattern, which is less ideal for older SSDs than sequential writes.
  • Lack of Wear Leveling:
    • Older SSDs lacked the sophisticated wear-leveling algorithms found in modern drives. This means that the swap space could repeatedly write to the same memory blocks, quickly wearing them out.
Explained simply:

Imagine an old notebook where you can only erase and rewrite each page a limited number of times. Swap space is like constantly using the same few pages of that notebook over and over again. On older SSDs, which had a much lower limit on how many times each 'page' could be rewritten, this constant use could wear them out very quickly. Modern SSDs are much better at spreading out the wear, like using all the pages of the notebook evenly.

In summary ...
  • Older SSDs had significantly lower write endurance.
  • Swap space involves frequent write operations.
  • The lack of effective wear leveling in older SSDs made them vulnerable to premature failure from heavy swap usage.
  • Modern SSDs are much more robust, and this issue is much less of a concern.
I just couldn't find this info before. Sorry.
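If you're curious how worn a given SSD actually is, a rough check is something like the following (assuming the smartmontools package is installed and your drive is /dev/sda; the attribute names vary by vendor, and NVMe drives report a "Percentage Used" value instead):
Code:
sudo smartctl -a /dev/sda | grep -iE 'wear|percent|written'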
 
Now, what year did wear leveling become common?

TL;DR
From what I found, it was considerably longer than 9 years ago!

You don't have to be concerned. Go ahead and use a swap partition or swap file if you want, according to the other answers here.

By the early to mid 2010s, wear leveling was a very common feature of SSDs.

It would still be nice to know the make and model. :)
 
Speaking of SSDs... when you install a distro, one of the very first things to do is optimize the SSD.

By doing this your SSD will run more efficiently and last a long time. I have TRIM set to run daily, but I can run the trim command manually any time.
https://easylinuxtipsproject.blogspot.com/p/ssd.html
https://www.baeldung.com/linux/solid-state-drive-optimization
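On most systemd-based distros, periodic TRIM is just a timer unit. A quick way to check whether it's already on, and to enable it if not (this is the generic systemd way, not specific to either link above):
Code:
systemctl status fstrim.timer              # is periodic TRIM already scheduled?
sudo systemctl enable --now fstrim.timer   # turn it on if it isn't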

Thank you. I've never heard of F2FS.

A cron job (?) here runs fstrim -av, but I must confess that I occasionally run sudo fstrim -av ;)
 
One thing missing from the second link is that we should enable AHCI mode for optimal SSD performance.
 
You can also run this command...
Code:
sudo fstrim -v /

I also have two portable SSDs and I run the above now and then.
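For portable drives, fstrim has to be pointed at their mount points (the path below is just an example; use wherever yours are mounted), or you can trim everything that supports it in one go:
Code:
sudo fstrim -v /media/yourname/PortableSSD   # one specific mounted drive (example path)
sudo fstrim -av                              # trim all mounted filesystems that support it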
 
To give another perspective, I haven't manually run the trim command for at least a decade and likely longer (other than to run it out of curiosity or for the sake of writing an answer or article). Modern SSDs are pretty awesome and Linux automatically does what it needs to do.

I have SSDs that are at least that old.

I was an early adopter where SSDs are concerned.

You DEFINITELY needed to pay attention to trim back then. The early drives were not very good. They weren't even all that reliable. Back then, I kept things backed up on traditional hard drives. Today, my NAS is stuffed with SSDs, with an NVMe kept for caching.

So, personally, I'd not worry too much about using an SSD or even paying attention to it. They've come a long way and the MTBF is much longer than I'm likely to use a drive - by a matter of years.
 
If you want the reasonably easy way to set the whole thing up, I followed the instructions from :


quite some time ago. It works. How do I know it works?....by copy/pasting in this: journalctl | grep fstrim.service

which produces:
brian@brian-desktop:~$ journalctl | grep fstrim.service
Feb 17 00:04:42 brian-desktop systemd[1]: Starting fstrim.service - Discard unused blocks on filesystems from /etc/fstab...
Feb 17 00:05:10 brian-desktop systemd[1]: fstrim.service: Deactivated successfully.
Feb 17 00:05:10 brian-desktop systemd[1]: Finished fstrim.service - Discard unused blocks on filesystems from /etc/fstab.
Feb 24 04:54:50 brian-desktop systemd[1]: Starting fstrim.service - Discard unused blocks on filesystems from /etc/fstab...
Feb 24 04:55:20 brian-desktop systemd[1]: fstrim.service: Deactivated successfully.
Feb 24 04:55:20 brian-desktop systemd[1]: Finished fstrim.service - Discard unused blocks on filesystems from /etc/fstab.
brian@brian-desktop:~$

I initially had weekly trimming in place, but decided later that weekly was not enough....mainly due to the amount of use my pc gets.....so I switched it to Daily

You can use the following blurb anyway...regardless if you have set it up before or not.

You can switch your system to automatic daily trimming as follows:

a. Copy/paste the following command line into the terminal, in order to create a new folder:

sudo mkdir -v /etc/systemd/system/fstrim.timer.d

Press Enter. Type your password when prompted. In Ubuntu this remains entirely invisible, not even dots will show when you type it, that's normal. In Mint this has changed: you'll see asterisks when you type. Press Enter again.

b. Copy/paste the following command line into the terminal, in order to create a new file in that new folder:

sudo touch /etc/systemd/system/fstrim.timer.d/override.conf

Press Enter.

c. Copy/paste the following command line into the terminal, in order to edit the new file:

xed admin:///etc/systemd/system/fstrim.timer.d/override.conf

(Note: The three consecutive slashes aren't a typo, but intentional! For Ubuntu: type gedit instead of xed.)

Press Enter.

d. Now copy/paste the following text into that empty text document:

[Timer]
OnCalendar=
OnCalendar=daily

Note: The double entry for OnCalendar is no mistake but intentional!

Save the modified file and close it.

e. Reboot your computer.

f. Confirm that you've successfully edited trim's configuration by executing this terminal command:

systemctl cat fstrim.timer

Your output should look approximately like this:

# /lib/systemd/system/fstrim.timer
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=daily

Let's take a closer look at this output. The first part shows the default setting (weekly), the second part shows the overriding setting that you've applied. That overriding setting contains two elements: first the existing OnCalendar setting (weekly) is being deleted by specifying nothing after the = sign, and then a new OnCalendar setting is being applied (daily).
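As an aside, steps a-c can also be done in one go with systemd's built-in drop-in editor, which creates the override directory and file for you; a daemon-reload afterwards avoids having to reboot just for this change:
Code:
sudo systemctl edit fstrim.timer   # opens an editor on the override.conf drop-in
sudo systemctl daemon-reload       # reload units instead of rebooting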

If you ever want to check whether fstrim has actually happened, and when it happened, you can use this terminal command:

journalctl | grep fstrim.service
 
One thing missing from the second link is that we should enable AHCI mode for optimal SSD performance.

I missed this.

Sadly, AHCI can mess with certain distros. While it's not too common, we sometimes have to have people toggle AHCI in their BIOS (technically UEFI, I suppose) settings. If you are curious, use the search for 'AHCI' and skim a few topics.

No, I do not know why. I only know that we sometimes have to have them toggle it - usually turning it off.
 
Interesting. How does AHCI mess with certain distros?

No, I do not know why.

It won't boot to the distro after installation. That's the most common thing. I believe in some instances it has also failed during the installation process when writing to disk.

I only know that it sometimes needs to be toggled from whatever setting it was on and that it's usually turning it off that solves the problem. I say 'usually' because I dimly recall an instance where they actually had to enable it to work. That could be a faulty memory.

I do not know the mechanism behind it but have stumbled across it myself on an Intel system.

At the same time, I would not suggest it as a default. Try the installation with AHCI enabled and then toggle it if you have problems. It's the same view I take with secure boot. Try with it enabled. If it doesn't work, turn it off.
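For what it's worth, you can usually see from within Linux which mode the SATA controller is currently running in; the exact wording varies by chipset, so treat this as a rough check:
Code:
lspci | grep -i sata        # the controller description often mentions AHCI, IDE or RAID mode
sudo dmesg | grep -i ahci   # kernel messages from the AHCI driver, if it's in use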
 


