Solved How Often Do You Trim Your SSD ?

I run 2TB Samsung SSDs, like Samsung 980 Pro, or 970 Plus.
I create a 1.6TB partition and leave the rest unused. It must be creating its own overprovisioning like the article said, because it never gives you the full 2TB; I get 1.8TB, so there's about 200TB overprovisioned.

It's probably a waste because I plan to upgrade to a Samsung 990 Pro 4TB M.2 PCIe/NVMe, and with these I will probably leave at least 10% overprovisioned. Or more.
 


Thanks!

I only now understand why the output of lsblk says my SSD is 465.8G in size while the box says 500GB; therefore 500/465 = 1.07, so 7% is provisioned by the manufacturer.

I see this math is in line with the screenshot from the following answer on superuser:
 
@bob466
Very good thread! Without you I would probably have completely forgotten to trim my SSD!
The output below shows 345 GiB was trimmed but my SSD is 500GB lol

Bash:
sudo fstrim -av

/boot/efi: 505 MiB (529530880 bytes) trimmed on /dev/sdb1
/boot: 277,1 MiB (290580480 bytes) trimmed on /dev/sdb2
/: 345,2 GiB (370706583552 bytes) trimmed on /dev/mapper/msi--vg-root
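As a side note, the per-filesystem byte counts that fstrim -av prints in parentheses can be summed to get the total trimmed. A small sketch using the output above (the awk field-splitting on parentheses is one way to pull out the byte counts; this is illustrative, not part of fstrim itself):

```shell
# sum the "(N bytes)" figures from the fstrim -av output quoted above
output='/boot/efi: 505 MiB (529530880 bytes) trimmed on /dev/sdb1
/boot: 277,1 MiB (290580480 bytes) trimmed on /dev/sdb2
/: 345,2 GiB (370706583552 bytes) trimmed on /dev/mapper/msi--vg-root'

echo "$output" | awk -F'[()]' '
  { split($2, a, " "); sum += a[1] }   # a[1] is the raw byte count
  END { printf "%.1f GiB trimmed in total\n", sum / 1024^3 }'
```

The three byte counts add up to roughly 346 GiB, which matches the "345GB was trimmed" remark above to within rounding.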

@etcetera
Does anyone know how we can tell how much of an SSD's space is overprovisioned by the manufacturer, if any?

This command trims a single filesystem, such as an external SSD's mount point...
Code:
sudo fstrim / -v

 
Don't use a Swap Partition on an SSD either...do so at your own risk.

https://www.makeuseof.com/tag/optimize-linux-ssds/
Interesting, but I don't agree. I need swap for hibernation, so not having a swap partition is not an option for me.
And the reason I don't agree is that I don't see any significant use of the swap partition during normal computing; it's barely used.
I have 16GB RAM and a 24GB swap partition; swap is used very rarely from what I've seen in conky, maybe 2% max, or 1GB or so.

And SSDs are also no longer as expensive as they were when they first became a thing.
 
I only now understand why the output of lsblk says my SSD is 465.8G in size while the box says 500GB; therefore 500/465 = 1.07, so 7% is provisioned by the manufacturer.

No, that is because of the difference between the GB/TB the manufacturers advertise, and output in GiB/TiB.

Let me illustrate with the screenshot below:

[screenshot: terminal output showing the reported capacities of the three drives discussed below]


1. The top entry is for my 256 GB Micron M.2 SSD, shown in inxi output as

vendor: Micron model: 1100 SATA 256GB size: 238.47 GiB

2. The 2nd entry is for @CaffeineAddict, so it has a capacity of 465.8 GiB, or 500GB as on the box.

3. The 3rd is for @etcetera, whose figures indicate that his example Samsung is not overprovisioned at all, and in fact may fall about 20 GB short of the manufacturer's advertised capacity.
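The arithmetic behind these GB-versus-GiB figures is simple to check in the shell. A quick sketch (the 500GB byte count is the common 500107862016 figure seen later in this thread; the 256GB byte count is an illustrative typical value, not read from the Micron itself):

```shell
# manufacturers advertise decimal GB (10^9 bytes); tools report binary GiB (2^30 bytes)
to_gib() { awk "BEGIN { printf \"%.2f\", $1 / 1024 / 1024 / 1024 }"; }

echo "500 GB drive: $(to_gib 500107862016) GiB"
echo "256 GB drive: $(to_gib 256060514304) GiB"
```

The first line comes out near 465.8 GiB and the second near 238.47 GiB, matching the outputs quoted above, with no overprovisioning involved at all.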

BTW
...so there about 200TB overprovisioned.
should be 200 GB I think you'll find.

Note we are somewhat derailing @bob466 's thread subject here, so we can take this elsewhere if he prefers. :)

Wizard
 
In the discussion on trim and swap it's worth noting a few aspects which may be helpful in understanding the situation.

The kernel has supported trim since version 2.6.33, with trimming of swap partitions occurring each time the kernel boots normally on an SSD that supports trim, which modern SSDs generally do. It's the kernel's default behaviour. From Wikipedia:
Linux swap partitions are by default performing discard operations when the underlying drive supports TRIM
See here: https://en.wikipedia.org/wiki/Solid-state_drive

Apart from the kernel's default behaviour, there are at least two man pages explaining how a user can use the discard option to run trim on swap:
From man mount:
If set, causes discard/TRIM commands to be issued to the block device when blocks are freed. This is useful for SSD devices and sparse/thinly-provisioned LUNs.
To set the discard flag on the swap partition, add it to the mount options in the /etc/fstab file, e.g.:
UUID=1...3 none swap sw,discard 0 0

From man swapon:
Enable swap discards, if the swap backing device supports the discard or trim operation. This may improve performance on some Solid State Devices, but often it does not. The option allows one to select between two available swap discard policies:
--discard=once to perform a single-time discard operation for the whole swap area at swapon; or
--discard=pages to asynchronously discard freed swap pages before they are available for reuse.
If no policy is selected, the default behavior is to enable both discard types.
The statement that the option "often does not" improve SSD performance can be understood as meaning it improves things no more than the kernel already does in its default activity. Modern SSDs, however, are likely to respond with enhanced behaviour.
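To check whether a given fstab swap entry already carries the discard option, the fourth field (the mount options) can be inspected. A minimal sketch, using the sample line from above with its elided UUID left as-is:

```shell
# check whether a swap line's mount options (4th fstab field) include "discard"
line='UUID=1...3 none swap sw,discard 0 0'
opts=$(echo "$line" | awk '{ print $4 }')

case ",$opts," in
  *,discard*) echo "swap discard is set" ;;
  *)          echo "swap discard is not set" ;;
esac
```

On a real system the line would come from grepping /etc/fstab; the pattern also catches the --discard=once / --discard=pages policy variants written as discard=once or discard=pages.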

The advantage of swap on ssds is speed compared to spinning hdds which parallels the speed advantage ssds have over hdds generally.

Users who wish to use hibernation need a swap partition at least as large as RAM, so the current state of the system can be accommodated in swap. The speed of swap for such relatively large data transfers makes a swap partition on the SSD all the more effective, to say the least.
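That sizing rule can be put as a one-liner. A sketch with illustrative figures (16 GiB RAM as mentioned earlier in the thread; the headroom figure is an assumption to cover pages already swapped out when hibernation starts):

```shell
# hibernation writes the RAM image into swap, so swap must be at least RAM-sized;
# extra headroom covers pages that are already sitting in swap at that moment
ram_gib=16
headroom_gib=8

echo "minimum swap for hibernation: ${ram_gib} GiB"
echo "suggested with headroom:      $((ram_gib + headroom_gib)) GiB"
```

With these figures the suggestion lands at 24 GiB, which happens to match the 16GB RAM / 24GB swap setup described earlier in the thread.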

There is quite a bit of information online about how SSDs have improved over time, and about how concerns over their "wearing out" or similar vulnerabilities no longer apply, and haven't for some time.

In all cases with SSDs, one has to rely on the disk's internal controller to handle the writing into swap. The controlling technology is in the firmware of the SSD and helps even out the wear on the cells. SSDs can be thought of as having their own little OSs managing the storage in conjunction with the user's OS.

On the matter of spinning HDDs and trimming: of the two common types of HDD, one can be trimmed.
If the HDD uses SMR technology (Shingled Magnetic Recording), it can be trimmed, unlike the non-trim HDDs which use CMR (Conventional Magnetic Recording).
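Whether a particular drive (SSD or SMR HDD) accepts discard can be read from sysfs: a nonzero discard_granularity under /sys/block/<dev>/queue means the device advertises discard/TRIM support (lsblk -D shows the same information). A sketch of that check as a function; the helper name and the sample values are illustrative, and on a real system the value would be read from the sysfs file:

```shell
# returns "yes" when the given discard_granularity value is nonzero,
# i.e. the device advertises discard/TRIM support
supports_discard() {
  granularity=$1    # e.g. $(cat /sys/block/sda/queue/discard_granularity)
  if [ "$granularity" -gt 0 ] 2>/dev/null; then echo yes; else echo no; fi
}

supports_discard 512   # typical SSD value -> yes
supports_discard 0     # CMR spinning disk -> no
```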

I have no experience with SMR myself, nor am I informed of the nuances, but there is a web site where one can enter a model identity and check which type a unit is, which may be helpful:
 
I'm very surprised some people don't know...what you see on the box is not what you get...it's been that way for years.

A 500GB SSD...install Mint (2.9GB) and the space shown on the drive is now about 460GB, because formatting takes up space.
 

There's heaps of info out there...I've read much of it and it's very easy to become paranoid...at the end of the day people will do what they want regardless of what we say.
 
No, that is because of the difference between the GB/TB the manufacturers advertise, and output in GiB/TiB.
I'm very surprised some people don't know...what you see on the box is not what you get.
Uh, no, I did know. I mean, I know about the difference between GiBs and GBs, but I was under the impression lsblk shows output in GBs; it appears it doesn't. When I run lsblk -b it shows 500GB (in bytes).

So back to square one:

Does anyone know how we can tell how much SSD space is overprovisioned by the manufacturer, if any?
 
To run the discard flag on the swap partition, it can be set in the /etc/fstab file in the mount options, e.g.:
UUID=1...3 none swap sw,discard 0 0.
Why is this discard option not mentioned in the online man fstab?

There is also an errors= option which is likewise not mentioned. I wonder why, and which man page would show all the options for fstab?
 
In relation to the outputs of commands on the capacity of hard drives, one needs to be mindful of the difference between gigabytes (powers of 1000) and gibibytes (powers of 1024).

Here are some examples on a machine with a 500GB disk.
Code:
[root@mon ~]# fdisk -l
Disk /dev/nvme0n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
<snip>
Result is ~465.8 GiB, which is ~500GB (500107862016 bytes / 10^9 ≈ 500)

Code:
[fen@mon ~]$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sr0          11:0    1  1024M  0 rom
nvme0n1     259:0    0 465.8G  0 disk
├─nvme0n1p1 259:1    0   476M  0 part /boot/efi
├─nvme0n1p2 259:2    0  14.9G  0 part [SWAP]
└─nvme0n1p3 259:3    0 450.4G  0 part /
Result is ~465.8 GiB, the same as the fdisk output, since lsblk reports sizes in gibibytes; so it too is showing ~500GB!

Code:
[fen@mon ~/]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.7G     0  7.7G   0% /dev
tmpfs           1.6G  2.5M  1.6G   1% /run
/dev/nvme0n1p3  443G   19G  401G   5% /
tmpfs           7.7G     0  7.7G   0% /dev/shm
tmpfs           5.0M   12K  5.0M   1% /run/lock
efivarfs        192K   99K   89K  53% /sys/firmware/efi/efivars
tmpfs           7.7G  194M  7.5G   3% /tmp
/dev/nvme0n1p1  476M  4.4M  471M   1% /boot/efi
tmpfs           1.6G   64K  1.6G   1% /run/user/1000
Result for the root filesystem is 443 GiB, slightly below the 450.4 GiB partition because of filesystem overhead; again these figures are in GiB, not GB, and the swap partition is not listed by df at all.

This command shows full size of disk:
Code:
[root@mon ~]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme0n1          /dev/ng0n1            2331E865DC90         CT500P3PSSD8                             0x1        500.11  GB / 500.11  GB    512   B +  0 B   P9CR40A
Result is ~500GB.

This command also shows full size:
Code:
[root@mon ~]# smartctl -a /dev/nvme0n1 | grep -i capacity
Namespace 1 Size/Capacity:          500,107,862,016 [500 GB]
Result is 500GB.

This command also shows full size:
Code:
[fen@mon ~]$ lsblk -b
NAME        MAJ:MIN RM         SIZE RO TYPE MOUNTPOINTS
sr0          11:0    1   1073741312  0 rom
nvme0n1     259:0    0 500107862016  0 disk
├─nvme0n1p1 259:1    0    499122176  0 part /boot/efi
├─nvme0n1p2 259:2    0  16000221184  0 part [SWAP]
└─nvme0n1p3 259:3    0 483606396928  0 part /
Result is 500107862016 bytes / 10^9 ≈ 500GB

So, in all cases, the readings are accurate as to the size of the disk, and one needs to not be tripped up by the difference between GiB and GB.

There are no allowances in these measures for what trim may be doing to the disk drives.
 
Why is this discard option not mentioned in online man fstab?

There is also error= option which is also not mentioned, I wonder why and which man page would show all the options for fstab?
I cannot answer your question, but I do know there are many undocumented kernel options and command options that one finds when researching various problems online. Linux documentation such as the man pages is written at a certain point in time, while the software of the commands and programs it describes is constantly in development; inevitably some man pages end up a little dated. They are likely to catch up eventually, but they also need people to write them, who are not necessarily the developers.
 
Not a problem; I think I figured it out myself in the meantime. I'm not used to running man in the terminal, reading the manuals online instead, but I only now see that's not a wise idea.

I've just run man fstab in the terminal and found this (the relevant part):
The fourth field (fs_mntops).
This field describes the mount options associated with the filesystem.

It is formatted as a comma-separated list of options. It contains at least the type of mount (ro or rw), plus any additional options appropriate to the filesystem type (including performance-tuning options). For details, see mount(8) or swapon(8).
Therefore man fstab directs the user to man mount, which under FILESYSTEM-INDEPENDENT MOUNT OPTIONS lists far more options than man fstab alone!
 
I just let the kernel do its thing and have for quite a while. I haven't manually invoked trim since kernel 2.x and haven't had a problem. To top it off, I often go weeks without rebooting.

For example:

Code:
$  uptime
 11:15:17 up 50 days, 20:59,  3 users,  load average: 0.00, 0.01, 0.00

So far, so good.
 
If you're not running a server but a personal workstation, then what's the reason to keep your system up for so long?

Laziness, really. Ease of access, if otherwise asked. I have a server that has been up for years now.

Isn't that a waste of hardware lifetime?

Not so much these days. Computers are pretty good at going into a low-power mode with the monitor off.

I also give my hardware away fairly regularly as I upgrade my devices to newer devices.

When I get a device, I take the SSD out of it. I then put my own SSD (though I will sometimes clone an SSD to a more modern SSD that's faster) into the system. When I give the device away (or send it off for proper recycling) I pull the drive. If the computer is to be given away, I put the original drive back into the system.

However, as of late, I've felt less of an urge to upgrade. Modern hardware has reached the point where I'm satisfied with the performance and my devices are lasting longer. I haven't had to deal with a hardware failure in a while, but I did have an episode of hardware failing a couple of years ago. Those devices still had a reasonable lifespan, a span that's acceptable to me and my needs.
 

