This is impossible. It makes no logical sense.

Trenix25

Ok, I'm having this weird problem that defies mathematical logic. When I try to create an ext4 file system on a hard drive partition using 2495091104 512-byte logical sectors I get a file system with 1226836688 total 1k blocks. When I try to create a file system using 2495091103 512-byte logical sectors I get a file system with 1226798416 total 1k blocks. These total block values do not count the system overhead associated with the ext4 file systems themselves.

So just how many 512-byte logical sectors do I need to use to get 1226833920 total 1k blocks as displayed by /usr/bin/df, without including the system overhead? My hard drive has a physical sector size of 4096 bytes, so whatever I use must be an integer multiple of that.

It is worth pointing out that 2495091103 512-byte logical sectors do not match up with an integer number of physical sectors, so I would really only be using 2495091096 512-byte logical sectors. When I create a file system with 2495091096 512-byte logical sectors I get a total of 1226798416 1k blocks, the same as when I use 2495091103 512-byte logical sectors, because the extra logical sectors cannot be used by the file system. One might conclude that I have found myself on the threshold of two different block sizes of file system overhead, which is creating a real problem.

I need an ext4 file system that lands on an exact 5 GiB boundary in total 1k blocks, not counting the system overhead associated with the ext4 file system itself. I have just slightly over 1170 GiB of space left on my hard drive to be partitioned and used. This must be contained in one single partition. I need to use as much of that as I can, but it must be partitioned in 5 GiB increments, counting total 1k blocks without the file system overhead.
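For reference, here is the raw arithmetic I'm starting from, as a quick shell sketch. It deliberately ignores the ext4 overhead, which is exactly the part that refuses to line up:

Code:
# Target: 1170 GiB of usable space, counted in 1k blocks as df reports them.
# This is only the raw conversion; ext4 metadata overhead is not included.
target_gib=1170
blocks_1k=$(( target_gib * 1024 * 1024 ))   # 1226833920 1k blocks
sectors_512=$(( blocks_1k * 2 ))            # 2453667840 512-byte logical sectors
sectors_4k=$(( sectors_512 / 8 ))           # 306708480 4096-byte physical sectors
echo "$blocks_1k 1k blocks = $sectors_512 logical sectors = $sectors_4k physical sectors"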

Signed,

Matthew Campbell
 


I read this a few times, but I'm still not sure what you are trying to do here exactly.
What is the purpose of this?
 
I'm trying to create ext4 file systems with a specific number of total "1k" blocks, rather than just creating a partition of a specific size. I decided to use 1150 GiB instead of 1170 GiB because the larger one wouldn't work. Now I have an extra 20 GiB left blank and unused at the end of my new hard drive. I was creating a work of art when partitioning my new hard drive. I like the mathematical alignment.

Signed,

Matthew Campbell
 
fdisk /dev/sdb

Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.


Command (m for help): p

Disk /dev/sdb: 489.05 GiB, 525112713216 bytes, 1025610768 sectors
Disk model: Crucial_CT525MX3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 214CD305-8D57-49C3-A244-B232E66D2141

Device         Start        End    Sectors  Size Type
/dev/sdb1       2048     698367     696320  340M EFI System
/dev/sdb2     698368    1394687     696320  340M EFI System
/dev/sdb3    1394688    7686143    6291456    3G Linux extended boot
/dev/sdb4    7686144  102057983   94371840   45G Linux root (x86-64)
/dev/sdb5  102057984  175458303   73400320   35G Linux variable data
/dev/sdb6  175458304  200624127   25165824   12G Linux swap
/dev/sdb7  200624128 1008027647  807403520  385G Linux home

It's funny, when you buy a hard drive it says it's 512GB. But they never really are.
In this case mine is actually 489 GiB.
It has 1025610768 total sectors. Each sector in my case is 512 bytes.
The logical sectors don't have to match the physical sectors but it's usually more efficient for them to match.

I can determine how large each partition is by multiplying the number of sectors allocated to it by
the sector size. For example, partition /dev/sdb4 is 94371840 sectors. The end minus the start is actually 94371839,
but you add one because both the start sector and the end sector belong to the partition.

Number of sectors = end sector − start sector + 1
Sector size is 512 bytes.

So, 94371840 × 512 bytes = 48,318,382,080 bytes
48,318,382,080 ÷ (1024 x 1024 x 1024) = 45 GiB
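The same arithmetic as a quick shell check, using the /dev/sdb4 start and end sectors from the fdisk listing above:

Code:
# /dev/sdb4 from the fdisk output above
start=7686144
end=102057983
sectors=$(( end - start + 1 ))   # 94371840 (start and end are both inclusive)
bytes=$(( sectors * 512 ))       # 48318382080
echo "$sectors sectors = $bytes bytes = $(( bytes / 1024 / 1024 / 1024 )) GiB"   # 45 GiB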

When you use mkfs.ext4 you can specify the file system block size if you want to, but if you don't it just uses the default.

Code:
sudo mke2fs -t ext4 -b 4096 /dev/sdb4

or

Code:
sudo mke2fs -t ext4 -b 1024 /dev/sdb4

But if the block size doesn't line up with the physical sector size, it causes additional processing overhead and will slow your reads and writes down slightly.
So, it's usually a good idea for the file system block size to match the physical sector size (or be a multiple of it).
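If you want to see what the drive actually reports before choosing a block size, blockdev can show both sizes (the device path here is just an example):

Code:
sudo blockdev --getss /dev/sdb     # logical sector size, e.g. 512
sudo blockdev --getpbsz /dev/sdb   # physical sector size, e.g. 512 or 4096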

So, to figure out how many sectors we need to create a 20 GiB partition:

20 GiB = 20 × 1024 × 1024 × 1024 bytes = 21,474,836,480 bytes
21,474,836,480 ÷ 512 = 41,943,040 sectors (assuming 512-byte sectors)

But your disk partitions will rarely fall on exact boundaries, so you might have to use 19.9 GiB or 20.1 GiB.
It varies from disk to disk. But fdisk will show how many free sectors you have left, so you don't
really have to guess.
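The same 20 GiB calculation as a quick shell check, in case you want to try other sizes:

Code:
size_gib=20
sectors=$(( size_gib * 1024 * 1024 * 1024 / 512 ))
echo "$size_gib GiB = $sectors sectors"     # 41943040 sectors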
 
It's funny, when you buy a hard drive it says it's 512GB. But they never really are.
In this case mine is actually 489 GiB.
That's because manufacturers measure their disks in GB, while the OS reports GiB instead, so you see less space:
1 GiB = 1024 MiB = 1,073,741,824 bytes
1 GB = 1000 MB = 1,000,000,000 bytes

In other words, when they say 512 GB what they're actually selling is 512 × 0.9313226 ≈ 476.8 GiB
(because 1 GB = 0.9313226 GiB)
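You can check the conversion yourself, for example with bc if you have it installed:

Code:
# Advertised decimal GB to binary GiB, the way the OS counts it
echo "scale=2; 512 * 10^9 / 2^30" | bc      # 476.83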

Manufacturers are tricking consumers; they should report how many GiB their disks hold instead of putting GB on the label.
 
Manufacturers are tricking consumers; they should report how many GiB their disks hold instead of putting GB on the label.

I was looking into this, and you're right. But also, some newer drives have "spare" space.
It turns out this is used for bad sector mapping. You never see this "extra" space in fdisk or gparted.
It's hidden from you, and you can't use it directly. In the "old days", when a disk got a bad sector
you just lost a little space on your partition (we are talking bytes here), but if you got enough
bad sectors your partition would effectively shrink over time. To combat this, the manufacturers added
the "spare" space, and bad sectors get remapped there.

Code:
:~# smartctl -a /dev/nvme0n1
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.12.9-200.fc41.x86_64] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       WD_BLACK SN850X 2000GB
Serial Number:                      23515E805052
Firmware Version:                   620361WD
PCI Vendor/Subsystem ID:            0x15b7
IEEE OUI Identifier:                0x001b44
Total NVM Capacity:                 2,000,398,934,016 [2.00 TB]
Unallocated NVM Capacity:           0
Controller ID:                      8224
NVMe Version:                       1.4
Number of Namespaces:               1
Namespace 1 Size/Capacity:          2,000,398,934,016 [2.00 TB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            001b44 8b4cfff4c5
Local Time is:                      Sun Jan 19 06:04:39 2025 PST
Firmware Updates (0x14):            2 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x00df):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp Verify
Log Page Attributes (0x1e):         Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg Pers_Ev_Lg
Maximum Data Transfer Size:         128 Pages
Warning  Comp. Temp. Threshold:     90 Celsius
Critical Comp. Temp. Threshold:     94 Celsius
Namespace 1 Features (0x02):        NA_Fields

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     9.00W    9.00W       -    0  0  0  0        0       0
 1 +     6.00W    6.00W       -    0  0  0  0        0       0
 2 +     4.50W    4.50W       -    0  0  0  0        0       0
 3 -   0.0250W       -        -    3  3  3  3     5000   10000
 4 -   0.0050W       -        -    4  4  4  4     3900   45700

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         2
 1 -    4096       0         1

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        43 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    0%
Data Units Read:                    4,549,700 [2.32 TB]
Data Units Written:                 6,994,748 [3.58 TB]
Host Read Commands:                 26,039,681
Host Write Commands:                114,312,826
Controller Busy Time:               118
Power Cycles:                       119
Power On Hours:                     5,842
Unsafe Shutdowns:                   37
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0

Error Information (NVMe Log 0x01, 16 of 256 entries)
No Errors Logged

say "spare space" three times fast. :)
 
This is the deal with logical and physical sectors. A hard drive can only read and write data from/to one or more physical sectors at a time. You cannot send a command to the hardware drive controller to read from or write to part of a sector. As such whenever you allocate blocks in a file system you must use a block size that is an integer number of physical drive sectors. This is also why fdisk will complain if a partition does not start at the beginning of a physical sector. Unless the Linux kernel has some kind of code that compensates for this, you risk overwriting data by letting one physical sector hold data from two or more different "blocks" of a file system.
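If you want to verify that a partition actually starts on a proper physical-sector boundary, parted can check the alignment for you (the partition number here is just an example):

Code:
# Check whether partition 7 on /dev/sdb is optimally aligned
sudo parted /dev/sdb align-check optimal 7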

Signed,

Matthew Campbell
 
This is the deal with logical and physical sectors. A hard drive can only read and write data from/to one or more physical sectors at a time. You cannot send a command to the hardware drive controller to read from or write to part of a sector. As such whenever you allocate blocks in a file system you must use a block size that is an integer number of physical drive sectors.

Be careful mixing sectors and blocks; they are not the same thing.
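One quick way to see the difference on an existing ext4 file system (the device path is just an example):

Code:
sudo blockdev --getss /dev/sdb4                 # sector size comes from the drive, e.g. 512
sudo tune2fs -l /dev/sdb4 | grep 'Block size'   # block size comes from the file system, e.g. 4096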
 
