Ansible - "Destination /etc/yum.repos.d not writable"

Ron_1984
Hi

I am new to Ansible. I'm trying to copy a repo file from the control node to a managed node, and I get the error below when executing my playbook.
Error:
fatal: [10.192.128.154]: FAILED! => {"changed": false, "checksum": "bceb47957aa7709b77108019f398d05b481ddca3", "msg": "Destination /etc/yum.repos.d not writable"}

I have used privilege_escalation and the playbook as below:
[privilege_escalation]
become = true
become_method = enable
become_user = root
become_ask_pass = true

playbook:
- name: Copy function
  hosts: test
  become: yes
  tasks:
    - name: Copy repo file
      copy:
        src: /etc/yum.repos.d/rhel76-kernel-test.repo
        dest: /etc/yum.repos.d/rhel76-kernel-test.repo

My user has the privilege to execute sudo commands on both nodes. Please help me fix the error.

Thank you
 


Use -v (verbose), -vvv (more verbose), or -vvvv (connection debugging) for verbose output when you run a playbook for debugging.
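For example (assuming your playbook file is named copy_repo.yml; adjust to your actual file name):
Code:
ansible-playbook copy_repo.yml -vvv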
 
I did a verbose run and got the output below, but I still can't see what I'm missing.

"changed": false,
"checksum": "bceb47957aa7709b77108019f398d05b481ddca3",
"diff": [],
"invocation": {
"module_args": {
"_original_basename": "rhel76-kernel-test.repo",
"attributes": null,
"backup": false,
"checksum": "bceb47957aa7709b77108019f398d05b481ddca3",
"content": null,
"delimiter": null,
"dest": "/etc/yum.repos.d/rhel76-kernel-test.repo",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/bc7409/.ansible/tmp/ansible-tmp-1612776338.39-57584-146355239843293/source",
"unsafe_writes": null,
"validate": null
}
},
"msg": "Destination /etc/yum.repos.d not writable"
 
If one verbosity level doesn't show enough, try the next level up.
 
Instead of running the copy task, run a shell command in your task to see which user is actually executing the task. Something like this:
YAML:
- name: Test executing Ansible user
  hosts: test
  become: yes
  tasks:
    - name: Check using id
      shell: id
    - name: Check using whoami
      shell: whoami
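Then run it the same way as your other playbook and look at the stdout of those two tasks; the file name below is just an example:
Code:
ansible-playbook check_user.yml
With your current ansible.cfg it will prompt for the SSH and become passwords, and the id/whoami output will show which user the tasks actually run as.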
 
Tried with the shell command id and got the error below (used -vvv): Shared connection to 10.192.128.154 closed. Any suggestions?

Below is the verbose output.

<10.192.128.154> (0, 'sftp> put /home/bc7409/.ansible/tmp/ansible-local-24241UhSs5v/tmp0wOUgO /home/bc7409/.ansible/tmp/ansible-tmp-1612874714.38-24503-67474575813107/AnsiballZ_setup.py\n', '')

<10.192.128.154> ESTABLISH SSH CONNECTION FOR USER: None

<10.192.128.154> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o ConnectTimeout=10 -o ControlPath=/home/bc7409/.ansible/cp/bd4d994a74 10.192.128.154 '/bin/sh -c '"'"'chmod u+x /home/bc7409/.ansible/tmp/ansible-tmp-1612874714.38-24503-67474575813107/ /home/bc7409/.ansible/tmp/ansible-tmp-1612874714.38-24503-67474575813107/AnsiballZ_setup.py && sleep 0'"'"''

<10.192.128.154> (0, '', '')

<10.192.128.154> ESTABLISH SSH CONNECTION FOR USER: None

<10.192.128.154> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o ConnectTimeout=10 -o ControlPath=/home/bc7409/.ansible/cp/bd4d994a74 -tt 10.192.128.154 '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=lvhuvnhwsjdybjqxjlmtdbbacmfmvhte] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-lvhuvnhwsjdybjqxjlmtdbbacmfmvhte ; /usr/bin/python /home/bc7409/.ansible/tmp/ansible-tmp-1612874714.38-24503-67474575813107/AnsiballZ_setup.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''

Escalation succeeded

<10.192.128.154> (1, '\r\n', 'Shared connection to 10.192.128.154 closed.\r\n')

<10.192.128.154> Failed to connect to the host via ssh: Shared connection to 10.192.128.154 closed.

<10.192.128.154> ESTABLISH SSH CONNECTION FOR USER: None

<10.192.128.154> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o ConnectTimeout=10 -o ControlPath=/home/bc7409/.ansible/cp/bd4d994a74 10.192.128.154 '/bin/sh -c '"'"'rm -f -r /home/bc7409/.ansible/tmp/ansible-tmp-1612874714.38-24503-67474575813107/ > /dev/null 2>&1 && sleep 0'"'"''

<10.192.128.154> (0, '', '')

fatal: [10.192.128.154]: FAILED! => {

"ansible_facts": {},

"changed": false,

"failed_modules": {

"setup": {

"ansible_facts": {

"discovered_interpreter_python": "/usr/bin/python"

},

"failed": true,

"module_stderr": "Shared connection to 10.192.128.154 closed.\r\n",

"module_stdout": "\r\n",

"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",

"rc": 1

}

},

"msg": "The following modules failed to execute: setup\n"

}



PLAY RECAP ***********************************************************************************************************************************************************************************

10.192.128.154 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
 
Run it without -vvv this time since you don't need the verbose output, but it looks like the SSH connection couldn't be established.
 
I ran the playbook again and it listed my LDAP user for both the id and whoami commands.
However, when I change become_method from enable to sudo in ansible.cfg, it gives the error that the shared connection to 10.192.128.154 is closed.

Below are the details
ansible.cfg:
[defaults]
inventory = ./inventory
ask_pass = true
host_key_checking = false

[privilege_escalation]
become = yes
become_method = enable
become_user= root
become_ask_pass = true

output:
..
TASK [Check executing user using whoami]
"stdout": "vagrant1",
"stdout_lines": [
"vagrant1"

but if I change to become_method = sudo in ansible.cfg, I get the error below:

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
fatal: [10.192.128.154]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"setup": {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "failed": true, "module_stderr": "Shared connection to 10.192.128.154 closed.\r\n", "module_stdout": "\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}}, "msg": "The following modules failed to execute: setup\n"}

However, my user has sudo access on 10.192.128.154 and it works fine when I log in to the server directly. I can't figure out what I'm missing here with privilege_escalation. Please advise.
 
What RHEL version is the destination host and what ansible version are you using?
 
I am using ansible 2.9.16 and destination host is Red Hat Enterprise Linux Server release 7.8 (Maipo)
 
What user are you running the playbook as on the control system, and what user do you connect as that has sudo privileges?
 

vagrant1 is the user on the control system, and I use the same user on the managed node to connect; it has sudo privileges.
 
In your playbook or in ansible.cfg, try setting "remote_user":
ansible.cfg -> remote_user = vagrant1
playbook -> remote_user: vagrant1
You will connect as that user, and from that user sudo will be used to escalate to root.
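At the play level it would look something like this (a sketch based on your earlier playbook):
YAML:
- name: Copy function
  hosts: test
  remote_user: vagrant1
  become: yes
  tasks:
    - name: Copy repo file
      copy:
        src: /etc/yum.repos.d/rhel76-kernel-test.repo
        dest: /etc/yum.repos.d/rhel76-kernel-test.repo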
 
Hi, I found the error. It's a local customization in our sudoers file that didn't allow root to be used via Ansible. It worked fine when I disabled the local customization.

Thank you very much for your help
 
If the remote_user setting hadn't worked, I was starting to think in that direction. It would be interesting to see what the local customization was that caused the problem, but glad you got it working.
 
Hi again

Now I'm stuck with a yum update via Ansible. I created a simple play to do a yum update, shown below; when the play is executed, the yum update hangs indefinitely. I could not find whether any local customization has been done for yum updates. I can simply log onto the machine, run sudo yum -y update, and it works fine.

Also, the same playbook runs fine on a newly built RHEL machine without any local customization.

Any suggestions to fix this issue, or where to look? Thank you

Playbook
tasks:
  - name: sudo update
    yum:
      name: '*'
      state: latest
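(For reference, the complete play around those tasks would look roughly like this; the play name is taken from the output below and the hosts value is assumed from the earlier plays:)
YAML:
- name: RHEL Package Update
  hosts: test        # assumed, same group as the earlier plays
  become: yes
  tasks:
    - name: sudo update
      yum:
        name: '*'
        state: latest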

cmd: ansible-playbook yum_update.yml -vvv

OUTPUT
TASK [sudo update] ***************************************************************************************************************************************************************************

task path: /ansible/yum_update.yml:5

<10.192.128.154> ESTABLISH SSH CONNECTION FOR USER: vagrant1

<10.192.128.154> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="vagrant1"' -o ConnectTimeout=60 -o ControlPath=/home/vagrant1/.ansible/cp/99c1f4db2a 10.192.128.154 '/bin/sh -c '"'"'echo ~vagrant1 && sleep 0'"'"''

<10.192.128.154> (0, '/home/vagrant1\n', '')

<10.192.128.154> ESTABLISH SSH CONNECTION FOR USER: vagrant1

<10.192.128.154> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="vagrant1"' -o ConnectTimeout=60 -o ControlPath=/home/vagrant1/.ansible/cp/99c1f4db2a 10.192.128.154 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/vagrant1/.ansible/tmp `"&& mkdir "` echo /home/vagrant1/.ansible/tmp/ansible-tmp-1613512445.76-10538-246167820485621 `" && echo ansible-tmp-1613512445.76-10538-246167820485621="` echo /home/vagrant1/.ansible/tmp/ansible-tmp-1613512445.76-10538-246167820485621 `" ) && sleep 0'"'"''

<10.192.128.154> (0, 'ansible-tmp-1613512445.76-10538-246167820485621=/home/vagrant1/.ansible/tmp/ansible-tmp-1613512445.76-10538-246167820485621\n', '')

Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py

<10.192.128.154> PUT /home/vagrant1/.ansible/tmp/ansible-local-10216ID5vBL/tmpd_1Rsb TO /home/vagrant1/.ansible/tmp/ansible-tmp-1613512445.76-10538-246167820485621/AnsiballZ_yum.py

<10.192.128.154> SSH: EXEC sshpass -d8 sftp -o BatchMode=no -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="vagrant1"' -o ConnectTimeout=60 -o ControlPath=/home/vagrant1/.ansible/cp/99c1f4db2a '[10.192.128.154]'

<10.192.128.154> (0, 'sftp> put /home/vagrant1/.ansible/tmp/ansible-local-10216ID5vBL/tmpd_1Rsb /home/vagrant1/.ansible/tmp/ansible-tmp-1613512445.76-10538-246167820485621/AnsiballZ_yum.py\n', '')

<10.192.128.154> ESTABLISH SSH CONNECTION FOR USER: vagrant1

<10.192.128.154> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="vagrant1"' -o ConnectTimeout=60 -o ControlPath=/home/vagrant1/.ansible/cp/99c1f4db2a 10.192.128.154 '/bin/sh -c '"'"'chmod u+x /home/vagrant1/.ansible/tmp/ansible-tmp-1613512445.76-10538-246167820485621/ /home/vagrant1/.ansible/tmp/ansible-tmp-1613512445.76-10538-246167820485621/AnsiballZ_yum.py && sleep 0'"'"''

<10.192.128.154> (0, '', '')

<10.192.128.154> ESTABLISH SSH CONNECTION FOR USER: vagrant1

<10.192.128.154> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="vagrant1"' -o ConnectTimeout=60 -o ControlPath=/home/vagrant1/.ansible/cp/99c1f4db2a -tt 10.192.128.154 '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=astfxsmrcsjypzwxwxyxcwhsgijmcxhg] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-astfxsmrcsjypzwxwxyxcwhsgijmcxhg ; /usr/bin/python /home/vagrant1/.ansible/tmp/ansible-tmp-1613512445.76-10538-246167820485621/AnsiballZ_yum.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''

Escalation succeeded
 
The playbook looks correct. What does the output look like when you run it without -vvv? Does the remote user need to enter a sudo password to become root? Earlier you had these escalation settings:
Code:
[privilege_escalation]
become = true
become_method = enable
become_user = root
become_ask_pass = true
Maybe change the last one to false and configure sudo to not require a password for the vagrant1 user?
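If you go the passwordless route, a typical sudoers entry for that (added with visudo on the managed node) would be something like the line below; this is just a sketch, adjust it to your own policy:
Code:
vagrant1 ALL=(ALL) NOPASSWD: ALL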
 
It hangs at TASK [sudo update]; after a few minutes I have to manually terminate the playbook.
[[email protected] ansible]$ ansible-playbook yum_update.yml
SSH password:
BECOME password[defaults to SSH password]:

PLAY [RHEL Package Update] *******************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [10.192.128.154]

TASK [sudo update] ***************************************************************************************************************************************************************************
^C [ERROR]: User interrupted execution
[[email protected] ansible]$


I am using become_method = sudo as below to escalate to the root user to execute the yum update.
[privilege_escalation]
become = yes
become_method = sudo
become_user= root
become_ask_pass = true

The shell id task returned my user (vagrant1) when I used become_method = enable in the previous playbook, and it returned root when I used become_method = sudo after removing the local customization from the sudoers file.
 
It's a local problem on your end. I copied your ansible.cfg and playbook with a few changes and it works for me.
ansible.cfg
Code:
become = yes
become_method = sudo
become_user= root
become_ask_pass = false
#become_ask_pass = true
remote_user = ansible
YAML:
---
- name: Copy function
  hosts: test
  tasks:
  - name: Copy repo file
    copy:
      src: hw.txt
      dest: /etc/yum.repos.d/hw.txt
Output:
(ansible) [ansible@ansible playbooks]$ ansible-playbook -i inventory -l rhel8 copy.yml

PLAY [Copy function] *********************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************
ok: [rhel8]

TASK [Copy repo file] ********************************************************************************************************
changed: [rhel8]

PLAY RECAP *******************************************************************************************************************
rhel8 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

(ansible) [ansible@ansible playbooks]$ ansible-playbook -i inventory -l rhel8 copy.yml

PLAY [Copy function] *********************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************
ok: [rhel8]

TASK [Copy repo file] ********************************************************************************************************
ok: [rhel8]

PLAY RECAP *******************************************************************************************************************
rhel8 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
 
