[Solved] Please Help Me Troubleshoot Error: Read-only file system

oeaz

Hello,

I've been searching the net for similar questions and have found some answers. I'm not that well versed on partitions, so I was hoping someone here could provide guidance on which one I would need to check. I'm assuming I'm on the right path, but I wouldn't be surprised if it turns out I'm barking up the wrong tree. Anyway, here's a summary of my problem and a possible solution I've found.

Today my Ubuntu 18.04 remote server started acting up. I think I was trying to delete some old files when I got a file system error message. I thought it was just time for a reboot (it had been 29 days since the last one). Well, it took longer than normal to boot up after running sudo reboot, and when I logged in via SSH I did not get the normal login banner. All the other services that run on start-up were apparently not running either.
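
(Side note in case it helps anyone reading this later: I believe the command below lists which units failed to start, though I didn't think to capture that output at the time.)

Code:
systemctl --failed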

I started searching for other troubleshooting threads on similar problems and read that the file system can sometimes remount itself read-only in order to protect itself if an error is detected. Sure enough, I tried running a couple of commands and the "Read-only file system" error came up.
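
For anyone hitting the same thing, I think checking the mount options is the easiest way to confirm that the root file system really is mounted read-only, e.g.:

Code:
findmnt -no OPTIONS /
grep ' / ' /proc/mounts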

I ran dmesg | grep "error" as suggested in one of these threads and got the following output:

Code:
[  316.379960] EXT4-fs (md2): error count since last fsck: 4
[  316.379979] EXT4-fs (md2): initial error at time 1615414876: ext4_journal_check_start:61
[  316.379986] EXT4-fs (md2): last error at time 1615414876: ext4_journal_check_start:61

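If I understand it correctly, those counters are read out of the ext4 superblock, so something like this should show the same information directly (I'm assuming tune2fs can be pointed at the assembled /dev/md2 device):

Code:
sudo tune2fs -l /dev/md2 | grep -i error
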
Running lsblk gives me the following partition tree:

Code:
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
loop0     7:0    0  9.1M  1 loop  /snap/canonical-livepatch/95
loop1     7:1    0 99.2M  1 loop  /snap/core/10859
loop2     7:2    0 98.4M  1 loop  /snap/core/10823
sda       8:0    0  2.7T  0 disk
├─sda1    8:1    0    8G  0 part
│ └─md0   9:0    0   16G  0 raid0 [SWAP]
├─sda2    8:2    0  512M  0 part
│ └─md1   9:1    0  511M  0 raid1 /boot
├─sda3    8:3    0  2.7T  0 part
│ └─md2   9:2    0  5.5T  0 raid0 /
└─sda4    8:4    0    1M  0 part
sdb       8:16   0  2.7T  0 disk
├─sdb1    8:17   0    8G  0 part
│ └─md0   9:0    0   16G  0 raid0 [SWAP]
├─sdb2    8:18   0  512M  0 part
│ └─md1   9:1    0  511M  0 raid1 /boot
├─sdb3    8:19   0  2.7T  0 part
│ └─md2   9:2    0  5.5T  0 raid0 /
└─sdb4    8:20   0    1M  0 part

Looking at the other threads, the suggested action is to run sudo fsck.ext4 -f /current/filesystem/mount/point (other proposed solutions were unmounting and remounting in read-write mode, but some argued against this because, if file system corruption is the cause, simply remounting would not address it).
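
For completeness, the remount approach those threads mention looks something like the line below; I'm noting it only for reference, since the consensus seemed to be that it wouldn't fix actual corruption:

Code:
sudo mount -o remount,rw /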

So my question is: should I run this command on md2, or should I run it on sda3 and sdb3? Again, I'm hoping this is the right track and this final check will fix the file system, but I wouldn't be surprised if something else entirely turned out to be the cause.
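
In case it helps frame the question: I'm guessing something like this would show which of those nodes actually carries the ext4 file system and which are just raid members, but I'd appreciate confirmation:

Code:
lsblk -o NAME,FSTYPE,MOUNTPOINT
cat /proc/mdstat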

Thank you in advance for taking the time to read this; hopefully it's just a couple more steps from here to fix the issue.


Regards,

o
 


Update (just in case someone with a similar problem finds this):

Running sudo fsck.ext4 -f /dev/md2 was the correct option, and rebooting afterwards got everything back to normal. I first tried running it on sda3 and sdb3 but got this message:
Code:
e2fsck 1.44.1 (24-Mar-2018)
/dev/sdb3 is in use.
e2fsck: Cannot continue, aborting.
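
(For anyone else who lands here: as far as I can tell this happens because sda3 and sdb3 are active raid members, so the ext4 file system only exists on the assembled /dev/md2 device. Something like blkid should confirm what each node actually contains:)

Code:
sudo blkid /dev/sdb3 /dev/md2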
 
