Solved - Having issues formatting external SSD

/dev/pie

Hello, I am having issues formatting and setting up a partition on my external SSD. When I try to format the disk as exFAT, I get this error message:
(screenshot of the formatting error message attached)


So I tried the repair option, which completes; however, when I go to mount the drive I get this error message:
(screenshot of the mount error message attached)


I am running Arch Linux with the current version of GNOME. Any help would be appreciated.
 


The output in post #1 shows:
the "Disk is OK",
the temperature of 40 degrees Celsius is within normal limits,
the disk has at least one partition which is /dev/sda1,
the disk has an ext4 filesystem on it.

The reason for the failure to mount the filesystem in the second instance was output as "probably corrupted filesystem". That suggests repairing the filesystem. One can run the fsck command with options either to just report problems or to fix them. If, however, you wish to change the filesystem from ext4 to exFAT, then you can delete the existing partitions and filesystem, then create new partitions and write the new filesystem.
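
For example, a minimal sketch, assuming the partition is /dev/sda1 as in your screenshots (check with lsblk first, and only run it on an unmounted partition):

Code:
sudo fsck -fn /dev/sda1   # -n: only report problems, change nothing
sudo fsck -fy /dev/sda1   # -y: attempt to fix the reported problems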

There are a few options for renewing the filesystem and the partitioning:

Using a partitioning tool, for example, fdisk on the command line, or Gparted for a GUI, delete all the current partitioning to create free space. Repartition the drive with partitions of choice. Write the new partitioning to disk. Then write a filesystem to the partitions with the relevant mkfs command, e.g. mkfs.exfat, if that's the filesystem you want.
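
As a rough command-line sketch, assuming the drive is /dev/sda and the new partition ends up as /dev/sda1 (adjust to your own device names, which lsblk will show):

Code:
sudo umount /dev/sda1      # make sure nothing on the drive is mounted
sudo fdisk /dev/sda        # g = new GPT table, n = new partition, w = write changes
sudo mkfs.exfat /dev/sda1  # write an exFAT filesystem to the new partition

On Arch, mkfs.exfat should come from the exfatprogs package.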

Another option is to "clear" the disk, which means getting rid of all the data on it, then repartitioning and adding the filesystem as above. Tools to "clear" include command-line commands like dd and blkdiscard, but since you are on Arch Linux, see here:
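
Purely as an illustration of those two commands, and assuming the whole drive is /dev/sda (double-check with lsblk first, since both destroy all data on the target):

Code:
sudo blkdiscard /dev/sda                                # discard every block on the SSD
sudo dd if=/dev/zero of=/dev/sda bs=4M status=progress  # or, more slowly, overwrite with zeros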

Another option for clearing disks may be available in the BIOS/UEFI. For example, with some ASRock motherboards, there is a "Secure Erase" option in the BIOS/UEFI.

Bear in mind that when partitioning and writing filesystems, the target disk needs to be unmounted.
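
For example, assuming the partition is /dev/sda1:

Code:
lsblk -f               # show devices, filesystems and mountpoints
sudo umount /dev/sda1  # unmount before repartitioning or writing a filesystem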

There may be some interesting ideas shown here:
 
@osprey Thanks for the reply. I went through the process with fdisk and was still unable to mount the drive; it gives that same error. However, when I went through the process with GParted, it gave me this error:

e2label: Superblock checksum does not match superblock while trying to open /dev/sda1
Couldn't find valid filesystem superblock.

tune2fs 1.47.1 (20-May-2024)

tune2fs: Superblock checksum does not match superblock while trying to open /dev/sda1
Couldn't find valid filesystem superblock.

Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: b15170f6-73b4-4d10-8062-75ac65c862e9
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index orphan_file filetype extent flex_bg metadata_csum_seed sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 61054976
Block count: 244190208
Reserved block count: 12209510
Overhead clusters: 4112461
Free blocks: 240077228
Free inodes: 61054964
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 965
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
RAID stripe width: 8191
Flex block group size: 16
Filesystem created: Mon Nov 25 05:34:02 2024
Last mount time: n/a
Last write time: Mon Nov 25 05:34:05 2024
Mount count: 0
Maximum mount count: -1
Last checked: Mon Nov 25 05:34:02 2024
Check interval: 0 (<none>)
Lifetime writes: 264 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 436542b4-2189-4150-80a1-3b38d069df83
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0xc9e76356
Checksum seed: 0x822f94cf
Orphan file inode: 12
Journal features: (none)
Total journal size: 1024M
Total journal blocks: 262144
Max transaction length: 262144
Fast commit length: 0
Journal sequence: 0x00000001
Journal start: 0

*** Run e2fsck now!

dumpe2fs 1.47.1 (20-May-2024)
dumpe2fs: Superblock checksum does not match superblock while trying to open /dev/sda1

Unable to read the contents of this file system!
Because of this some operations may be unavailable.
The cause might be a missing software package.
The following list of software packages is required for ext4 file system support: e2fsprogs v1.41+.


It mentions e2fsprogs. I have that installed already.
 
Try using Raspberry Pi Imager; it fixed most of my problems with SSDs, USB drives, and hard drives.
 
Superblock checksum does not match superblock while trying to open /dev/sda1
Couldn't find valid filesystem superblock

There is a proposed solution to the problem of the superblock checksum in the link below. It involves the use of the tune2fs command to disable the checksum. The command is provided in the answer with the green tick on this webpage:

It's worth a try, since the error which that command resolved for that user is basically the same error message as in the output in post #3. Personally I haven't had to do this; rather, I've cleared the whole disk to recover, but I think it's worth trying.
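
I won't reproduce the linked answer here, but as a hedged sketch of the idea, assuming the affected partition is /dev/sda1, it amounts to clearing the metadata checksum feature with tune2fs and then re-checking the filesystem:

Code:
sudo tune2fs -O ^metadata_csum /dev/sda1   # disable the metadata_csum feature
sudo e2fsck -f /dev/sda1                   # then force a filesystem check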
 
You can link directly to the answer, for future reference:


None of this worked for me. I almost gave up and figured the drive was toast. However, for some unknown reason, when I formatted the drive as ext3, everything worked.
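
For anyone finding this later, that amounts to something like the following on the command line, assuming the partition is /dev/sda1:

Code:
sudo mkfs.ext3 /dev/sda1   # write an ext3 filesystem to the partition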
 
Now that's interesting :)

This article might be worth a read for you.

https://www.pitsdatarecovery.com/blog/ext3-vs-ext4/

If you find that ext3 continues to work OK and satisfies your needs, you can mark this thread as solved.

To do so, go to your first post and proceed as follows:

Near the bottom left of the post, click Edit - (No Prefix) - Solved.


Only when you are sure.

Chris Turner
wizardfromoz
 
