Boot crash of Virtual Machine Eurolinux 9.2 (Vilnius) on VMware Workstation 17

KeYunLong

Hi all,
I use this VM to practice bash scripting from time to time - the last script was nothing special, just a "for" loop, which I saved in "/usr/local/bin/". That was about two weeks ago.
Yesterday I wanted to boot my VM and it got stuck on the splash screen:
Linux splash screen.jpg

I pressed "escape" to see what is going:
Linux errors 1.jpg


It seems it ended up in a loop of "Failed to start OpenSSH server daemon", "crond.service", and "EXEC spawning" errors.
Linux errors.jpg


I couldn't open rescue mode - neither by selecting it from the GRUB menu, nor by changing the parameters after pressing "e" in GRUB.
GRUB.jpg


These are the unchanged GRUB parameters:


grub e.jpg


Pressing ctrl-c for a command prompt works.

I updated VMware Workstation from build 17.02 to 17.05, but it didn't help. I also raised the allocated disk size from 20 GB to 50 GB in the VMware settings, just in case, but that didn't help either.
Any idea how I can fix this and avoid it in the future?
 


I did a few additional troubleshooting steps.

I booted into single-user mode by doing the following:

  1. Started my virtual machine.
  2. When the GRUB menu appeared, selected the entry for my Eurolinux 9.2 distribution.
  3. Pressed the e key to edit the boot parameters.
  4. Located the line that starts with linux.
  5. At the end of that line, added init=/bin/bash.
  6. Pressed Ctrl+X to boot with the modified parameters.

init bin bash.jpg
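For reference (in case the screenshot is hard to read), the edited kernel line looks roughly like this - the kernel version and root device below are placeholders, not the exact values from my VM:

linux ($root)/vmlinuz-&lt;kernel-version&gt; root=/dev/mapper/&lt;vg&gt;-root ro rhgb quiet init=/bin/bash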


Then I ran "journalctl -xe"

journalctl -xe 1.jpg


The only interesting details from today that I found were:
"Nov 14 07:26:41 localhost systemd[1] fuse: module verification failed: signature and/or required key modification failed: signature and/or required key missing - tainting kernel"
"Nov 14 07:26:43 localhost kernel: device-mapper: core: CONFIG_IMA-DISABLE_HTABLE is disabled. Duplicate IMA mesurements will not be recorded in the ate IMA measurement will not be recorded in the IMA log."
"Nov 14 07:26:43 localhost systemd[1]: sys-module-fuse.device: Failed to enqueue SYSTEMD_WANTS= job, ignoring: Unit sys-fs-fuse-connections.mount not ignoring: Unit sys-fs-fuse-connections.mount not found.

Based on the information I found, it appeared that there were some issues related to the FUSE module and device-mapper in my system. So I did the following:

bash-5.1# lsmod | grep fuse
bash: lsmod: command not found
bash-5.1# sudo yum install fuse
Config error: [Errno 30] Read-only file system: '/var/log/dnf.log' : '/var/log/dnf.log'
bash-5.1#

But it seems that the lsmod command is not available in my single-user environment, and yum is unable to install the fuse package because the file system is mounted read-only.

So I wanted to check the fuse status:

bash-5.1# ls /dev/fuse
/dev/fuse
bash-5.1# systemctl status fuse.service
Failed to connect to bus: No such file or directory
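(With init=/bin/bash there is no systemd running as PID 1, which is presumably why systemctl can't connect to the bus. As far as I know, the loaded-module list can still be read straight from the kernel even without lsmod, e.g.:)

grep fuse /proc/modules     # roughly what lsmod would show for the fuse module, if it is loaded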

Not sure if I'm going in the right direction with the troubleshooting. I will do more tests in the future.
 
Try init=/bin/sh rw, or init=/bin/bash rw. The rw makes the root filesystem come up read-write.
Then you should be able to run commands.
You can use plain command names like lsmod, or full pathnames like /usr/sbin/lsmod.
When you are finished, run the command sync to flush what you've done to disk, if anything needs it.
Run mount -o remount,ro / to keep the filesystem from being altered after you've finished.
Run sync again.
Run /sbin/reboot -f to reboot.
That should do it for you.
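Put together, a rough sketch of such a session (only touching the root filesystem) would be:

mount -o remount,rw /    # make / writable, in case it came up read-only
# ...run whatever commands or fixes are needed here...
sync                     # flush the changes to disk
mount -o remount,ro /    # back to read-only so nothing changes afterwards
sync
/sbin/reboot -f          # force the reboot, since there is no init to shut down cleanly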
 
Thanks for your feedback!

I added init=/bin/bash rw

init bash rw.jpg


Then I tried lsmod, both the plain command name and the full path:

lsmod.jpg


I tried to install fuse

fuse.jpg


grep fuse.jpg


sudo yum update.jpg


sync was fine... but then I hadn't actually done anything, so maybe that's why.

sync.jpg
 
What did that script of yours do? I see a bunch of "permission denied" errors in the screenshot you shared for sshd and crond, meaning they aren't allowed to be executed. Maybe your script had something to do with that and did the same to other binaries under /usr/sbin, breaking your boot process? Looking at that, it's probably best to restore a snapshot of a working system.
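A quick way to check just those two would be something like:

ls -l /usr/sbin/sshd /usr/sbin/crond    # the "x" bits in the output show whether they are executable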
 
The results in the output suggest it's the mounting and communication functions in the installation that are failing.

As f33dm3bits asks: what did that for loop (mentioned in post #1) do, and is it implicated?

In any case, VMs are generally so easy to reinstall without too much bother, which is a major virtue of the system. At the moment, it just looks like the installation is broken.

You could go in again in single-user mode and inspect the filesystem by navigating around the /usr/bin, /usr/sbin and /etc directories to see what's in place and what the permissions and configurations are, but clearly if the system can't run lsmod, it's likely to have irretrievable problems.
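If it helps, something along these lines should list any regular file in those directories with no execute bit at all (double-check the syntax, this is from memory):

find /usr/bin /usr/sbin -maxdepth 1 -type f ! -perm /111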
 
It was a very simple training script:

#!/bin/bash

echo "Look at our services"
for services in "Products" "Sales" "Support"
do
    echo "$services"
done


saved under /usr/local/bin.
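Running it just prints the three names, something like:

Look at our services
Products
Sales
Support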


Ehh... a reinstall is what I expected, honestly. I will look at a few more options and probably just reinstall the VM.
 
That script didn't do anything with permissions, but it looks like something removed execute permissions from binaries in /usr/sbin. Boot into single-user mode and check what the permissions of the files under /usr/sbin look like; that way you will know for sure. Maybe also share a screenshot here so we can see?
 

I ran "ls l /usr/bin | more" and "ls l /usr/sbin | more" the output is very long, so I give just two examples of each. In general all permissions are set to root.
First "ls l /usr/bin | more"
bin 1.jpg

bin 2.jpg

Here is "ls l /usr/sbin | more"

sbin 1.jpg

sbin 2.jpg
 
That looks normal. Try doing the following and see what happens:
1. Boot into single-user mode.
2. Run: touch /.autorelabel
3. Run exit (twice).
4. The system will reboot and relabel the SELinux contexts; then see if the system boots normally.
If that doesn't fix it, just restore a working snapshot.
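(If you want to see whether the labels actually look wrong before relabelling, something like this should print the current SELinux context of the failing binaries:)

ls -lZ /usr/sbin/sshd /usr/sbin/crond    # -Z adds the SELinux context column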
 
Everything goes well until the third step.

1700079612149.png


"touch /.autorelabel" is accepted without an issue, but after "exit" the VM freezes.
 
I would just restore a snapshot; something screwed up your system and caused the boot process to fail.
 
