question | answer | tag | question_id | score
---|---|---|---|---
How do you create a directory www at /srv on a Debian-based system using an Ansible playbook?
| You want the ansible.builtin.file module. To create a directory, you need to specify the option state: directory:
- name: Creates directory
ansible.builtin.file:
path: /srv/www
state: directory
You can see other options at https://docs.ansible.com/ansible/latest/collections/ansible/builtin/file_module.html
| Ansible | 22,844,905 | 508 |
How do I specify a sudo password for Ansible in non-interactive way?
I'm running Ansible playbook like this:
$ ansible-playbook playbook.yml -i inventory.ini \
--user=username --ask-sudo-pass
But I want to run it like this:
$ ansible-playbook playbook.yml -i inventory.ini \
--user=username --sudo-pass=12345
Is there a way? I want to automate my project deployment as much as possible.
| The docs strongly recommend against setting the sudo password in plaintext:
As a reminder passwords should never be stored in plain text. For information on encrypting your passwords and other secrets with Ansible Vault, see Encrypting content with Ansible Vault.
Instead you should be using --ask-become-pass on the command line when running ansible-playbook.
Previous versions of Ansible have used --ask-sudo-pass and sudo instead of become.
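If the run must be fully non-interactive, a common pattern is to keep the become password in a vault-encrypted variables file instead of passing it on the command line. A sketch, with an illustrative file path and value:
# group_vars/all/vault.yml (encrypt it with: ansible-vault encrypt group_vars/all/vault.yml)
ansible_become_password: "12345"
The playbook can then be run with --vault-password-file pointing at a protected file, so no interactive prompt is needed.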
| Ansible | 21,870,083 | 297 |
Is it possible to run commands on the Ansible controller node?
My scenario is that I want to take a checkout from a git server that is hosted internally (and isn't accessible outside the company firewall). Then I want to upload the checkout (tarballed) to the production server (hosted externally).
At the moment, I'm looking at running a script that does the checkout, tarballs it, and then runs the deployment script - but if I could integrate this into Ansible that would be preferable.
| Yes, you can run commands on the Ansible host. You can specify that all tasks in a play run on the Ansible host, or you can mark individual tasks to run on the Ansible host.
If you want to run an entire play on the Ansible host, then specify hosts: 127.0.0.1 and connection:local in the play, for example:
- name: a play that runs entirely on the ansible host
hosts: 127.0.0.1
connection: local
tasks:
- name: check out a git repository
git: repo=git://foosball.example.org/path/to/repo.git dest=/local/path
See Local Playbooks in the Ansible documentation for more details.
If you just want to run a single task on your Ansible host, you can use local_action to specify that a task should be run locally. For example:
- name: an example playbook
hosts: webservers
tasks:
- ...
- name: check out a git repository
local_action: git repo=git://foosball.example.org/path/to/repo.git dest=/local/path
See "Controlling where tasks run: delegation and local actions" in the Ansible documentation for more details.
You can avoid having to type connection: local in your play by adding this to your inventory:
localhost ansible_connection=local
(Here you'd use localhost in your plays instead of 127.0.0.1.)
In newer versions of Ansible, you no longer need to add that line to your inventory; Ansible assumes an implicit localhost entry is already there.
| Ansible | 18,900,236 | 296 |
How can one pass variable to ansible playbook in the command line?
The following command didn't work:
$ ansible-playbook -i '10.0.0.1,' yada-yada.yml --tags 'loaddata' django_fixtures="tile_colors"
Where django_fixtures is my variable.
| Reading the docs I find the section Passing Variables On The Command Line, that gives this example:
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
Other examples in the docs demonstrate how to load variables from a JSON string (Ansible ≥ 1.2) or from a file (≥ 1.3):
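For reference, those two variants look like this (some_file.json is a placeholder name):
ansible-playbook release.yml --extra-vars '{"version":"1.23.45","other_variable":"foo"}'
ansible-playbook release.yml --extra-vars "@some_file.json"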
| Ansible | 30,662,069 | 295 |
I'm using Ansible for some simple user management tasks with a small group of computers. Currently, I have my playbooks set to hosts: all and my hosts file is just a single group with all machines listed:
# file: hosts
[office]
imac-1.local
imac-2.local
imac-3.local
I've found myself frequently having to target a single machine. The ansible-playbook command can limit plays like this:
ansible-playbook --limit imac-2.local user.yml
But that seems kind of fragile, especially for a potentially destructive playbook. Leaving out the limit flag means the playbook would be run everywhere. Since these tools only get used occasionally, it seems worth taking steps to foolproof the playbooks so we don't accidentally nuke something months from now.
Is there a best practice for limiting playbook runs to a single machine? Ideally the playbooks should be harmless if some important detail was left out.
| Turns out it is possible to enter a host name directly into the playbook, so running the playbook with hosts: imac-2.local will work fine. But it's kind of clunky.
A better solution might be defining the playbook's hosts using a variable, then passing in a specific host address via --extra-vars:
# file: user.yml (playbook)
---
- hosts: '{{ target }}'
user: ...
Running the playbook:
ansible-playbook user.yml --extra-vars "target=imac-2.local"
If {{ target }} isn't defined, the playbook does nothing. A group from the hosts file can also be passed through if need be. Overall, this seems like a much safer way to construct a potentially destructive playbook.
Playbook targeting a single host:
$ ansible-playbook user.yml --extra-vars "target=imac-2.local" --list-hosts
playbook: user.yml
play #1 (imac-2.local): host count=1
imac-2.local
Playbook with a group of hosts:
$ ansible-playbook user.yml --extra-vars "target=office" --list-hosts
playbook: user.yml
play #1 (office): host count=3
imac-1.local
imac-2.local
imac-3.local
Forgetting to define hosts is safe!
$ ansible-playbook user.yml --list-hosts
playbook: user.yml
play #1 ({{target}}): host count=0
| Ansible | 18,195,142 | 277 |
How is it possible to move/rename a file/directory using an Ansible module on a remote system? I don't want to use the command/shell tasks and I don't want to copy the file from the local system to the remote system.
| From version 2.0, in copy module you can use remote_src parameter.
If True it will go to the remote/target machine for the src.
- name: Copy files from foo to bar
copy: remote_src=True src=/path/to/foo dest=/path/to/bar
If you want to move the file, you need to delete the old file with the file module:
- name: Remove old files foo
file: path=/path/to/foo state=absent
Since version 2.8, the copy module's remote_src supports recursive copying.
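Putting it together in YAML syntax, a move is a copy followed by a delete; a sketch with placeholder paths:
- name: Copy the file from foo to bar on the remote machine
  copy:
    remote_src: true
    src: /path/to/foo
    dest: /path/to/bar
- name: Remove the old file foo
  file:
    path: /path/to/foo
    state: absent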
| Ansible | 24,162,996 | 266 |
Is there a way to only run one task in ansible playbook?
For example, in roles/hadoop_primary/tasks/hadoop_master.yml. I have "start hadoop job tracker services" task. Can I just run that one task?
hadoop_master.yml file:
# Playbook for Hadoop master servers
- name: Install the namenode and jobtracker packages
apt: name={{item}} force=yes state=latest
with_items:
- hadoop-0.20-mapreduce-jobtracker
- hadoop-hdfs-namenode
- hadoop-doc
- hue-plugins
- name: start hadoop jobtracker services
service: name=hadoop-0.20-mapreduce-jobtracker state=started
tags:
debug
| You should use tags: as documented in https://docs.ansible.com/ansible/latest/user_guide/playbooks_tags.html
If you have a large playbook it may become useful to be able to run a specific part of the configuration without running the whole playbook.
Both plays and tasks support a “tags:” attribute for this reason.
Example:
tasks:
- yum: name={{ item }} state=installed
with_items:
- httpd
- memcached
tags:
- packages
- template: src=templates/src.j2 dest=/etc/foo.conf
tags:
- configuration
If you wanted to just run the “configuration” and “packages” part of a very long playbook, you could do this:
ansible-playbook example.yml --tags "configuration,packages"
On the other hand, if you want to run a playbook without certain tasks, you could do this:
ansible-playbook example.yml --skip-tags "notification"
You may also apply tags to roles:
roles:
- { role: webserver, port: 5000, tags: [ 'web', 'foo' ] }
And you may also tag basic include statements:
- include: foo.yml tags=web,foo
Both of these have the function of tagging every single task inside the include statement.
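Coming back to the question: assuming hadoop_master.yml is included from a playbook named site.yml (a hypothetical name), the single tagged task can be run with:
ansible-playbook site.yml --tags debug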
| Ansible | 23,945,201 | 252 |
Is there a way to ignore the SSH authenticity checking made by Ansible? For example when I've just setup a new server I have to answer yes to this question:
GATHERING FACTS ***************************************************************
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
RSA key fingerprint is xx:yy:zz:....
Are you sure you want to continue connecting (yes/no)?
I know that this is generally a bad idea but I'm incorporating this in a script that first creates a new virtual server at my cloud provider and then automatically calls my ansible playbook to configure it. I want to avoid any human intervention in the middle of the script execution.
| Two options - the first, as you said in your own answer, is setting the environment variable ANSIBLE_HOST_KEY_CHECKING to False.
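For example:
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook playbook.yml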
The second way to set it is to put it in an ansible.cfg file, and that's a really useful option because you can either set it globally (at system or user level, in /etc/ansible/ansible.cfg or ~/.ansible.cfg), or in a config file in the same directory as the playbook you are running.
To do that, make an ansible.cfg file in one of those locations, and include this:
[defaults]
host_key_checking = False
You can also set a lot of other handy defaults there, like whether or not to gather facts at the start of a play, whether to merge hashes declared in multiple places or replace one with another, and so on. There's a whole big list of options here in the Ansible docs.
Edit: a note on security.
SSH host key validation is a meaningful security layer for persistent hosts - if you are connecting to the same machine many times, it's valuable to accept the host key locally.
For longer-lived EC2 instances, it would make sense to accept the host key with a task run only once on initial creation of the instance:
- name: Write the new ec2 instance host key to known hosts
connection: local
shell: "ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts"
There's no security value for checking host keys on instances that you stand up dynamically and remove right after playbook execution, but there is security value in checking host keys for persistent machines. So you should manage host key checking differently per logical environment.
Leave checking enabled by default (in ~/.ansible.cfg)
Disable host key checking in the working directory for playbooks you run against ephemeral instances (./ansible.cfg alongside the playbook for unit tests against vagrant VMs, automation for short-lived ec2 instances)
| Ansible | 32,297,456 | 250 |
When creating a new Ansible role, the template creates both a vars and a defaults directory with an empty main.yml file. When defining my role, I can place variable definitions in either of these, and they will be available in my tasks.
What's the difference between putting the definitions into defaults and vars? What should go into defaults, and what should go into vars? Does it make sense to use both for the same data?
I know that there's a difference in precedence/priority between the two, but I would like to understand what should go where.
Let's say that my role would create a list of directories on the target system. I would like to provide a list of default directories to be created, but would like to allow the user to override them when using the role.
Here's what this would look like:
---
directories:
  - foo
  - bar
  - baz
I could place this either into the defaults/main.yml or in the vars/main.yml, from an execution perspective, it wouldn't make any difference - but where should it go?
| The Ansible documentation on variable precedence summarizes this nicely:
If multiple variables of the same name are defined in different places, they win in a certain order, which is:
extra vars (-e in the command line) always win
then comes connection variables defined in inventory (ansible_ssh_user, etc)
then comes "most everything else" (command line switches, vars in play, included vars, role vars, etc)
then comes the rest of the variables defined in inventory
then comes facts discovered about a system
then "role defaults", which are the most "defaulty" and lose in priority to everything.
So suppose you have a "tomcat" role that you use to install Tomcat on a bunch of webhosts, but you need different versions of tomcat on a couple hosts, need it to run as different users in other cases, etc. The defaults/main.yml file might look something like this:
tomcat_version: 7.0.56
tomcat_user: tomcat
Since those are just default values it means they'll be used if those variables aren't defined anywhere else for the host in question. You could override these via extra-vars, via facts in your inventory file, etc. to specify different values for these variables.
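For example, overriding one of the defaults from the command line (assuming a playbook named site.yml):
ansible-playbook site.yml --extra-vars "tomcat_version=8.0.32"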
Edit: Note that the above list is for Ansible 1.x. In Ansible 2.x the list has been expanded on. As always, the Ansible Documentation provides a detailed description of variable precedence for 2.x.
| Ansible | 29,127,560 | 212 |
right now I am using a shell script in ansible that would be much more readable if it was on multiple lines
- name: iterate user groups
shell: groupmod -o -g {{ item['guid'] }} {{ item['username'] }} ....more stuff to do
with_items: "{{ users }}"
Just not sure how to allow multiline script in Ansible shell module
| Ansible uses YAML syntax in its playbooks. YAML has a number of block operators:
The > is a folding block operator. That is, it joins multiple lines together by spaces. The following syntax:
key: >
This text
has multiple
lines
Would assign the value This text has multiple lines\n to key.
The | character is a literal block operator. This is probably what you want for multi-line shell scripts. The following syntax:
key: |
This text
has multiple
lines
Would assign the value This text\nhas multiple\nlines\n to key.
You can use this for multiline shell scripts like this:
- name: iterate user groups
shell: |
groupmod -o -g {{ item['guid'] }} {{ item['username'] }}
do_some_stuff_here
and_some_other_stuff
with_items: "{{ users }}"
There is one caveat: Ansible does some janky manipulation of arguments to the shell command, so while the above will generally work as expected, the following won't:
- shell: |
cat <<EOF
This is a test.
EOF
Ansible will actually render that text with leading spaces, which means the shell will never find the string EOF at the beginning of a line. You can avoid Ansible's unhelpful heuristics by using the cmd parameter like this:
- shell:
cmd: |
cat <<EOF
This is a test.
EOF
| Ansible | 40,230,184 | 193 |
A recurring theme that's in my ansible playbooks is that I often must execute a command with sudo privileges (sudo: yes) because I'd like to do it for a certain user. Ideally I'd much rather use sudo to switch to that user and execute the commands normally. Because then I won't have to do my usual post commands clean up such as chowning directories. Here's a snippet from one of my playbooks:
- name: checkout repo
git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
sudo: yes
- name: change perms
file: dest={{ dst }} state=directory mode=0755 owner=some_user
sudo: yes
Ideally I could run commands or sets of commands as a different user even if it requires sudo to su to that user.
| With Ansible 1.9 or later
Ansible uses the become, become_user, and become_method directives to achieve privilege escalation. You can apply them to an entire play or playbook, set them in an included playbook, or set them for a particular task.
- name: checkout repo
git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
become: yes
become_user: some_user
You can use become_method to specify how the privilege escalation is achieved, the default being sudo.
The directive is in effect for the scope of the block in which it is used (examples).
See Hosts and Users for some additional examples and Become (Privilege Escalation) for more detailed documentation.
In addition to the task-scoped become and become_user directives, Ansible 1.9 added some new variables and command line options to set these values for the duration of a play in the absence of explicit directives:
Command line options for the equivalent become/become_user directives.
Connection specific variables which can be set per host or group.
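For illustration, a sketch of both (playbook.yml and myhost are placeholders):
ansible-playbook playbook.yml --become --become-user=some_user
and in the inventory, per host or group:
myhost ansible_become=yes ansible_become_user=some_user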
As of Ansible 2.0.2.0, the older sudo/sudo_user syntax described below still works, but the deprecation notice states, "This feature will be removed in a future release."
Previous syntax, deprecated as of Ansible 1.9 and scheduled for removal:
- name: checkout repo
git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
sudo: yes
sudo_user: some_user
| Ansible | 21,344,777 | 191 |
I see that Ansible provides some pre-defined variables that we can use in playbooks and template files. For example, the host IP address is ansible_eth0.ipv4.address. Googling and searching the docs, I couldn't find a list of all available variables.
Would someone list them for me?
| From the FAQ:
How do I see a list of all of the ansible_ variables?
Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the setup module as an ad hoc action:
ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe the output to a pager. This does NOT include inventory variables or internal ‘magic’ variables. See the next question if you need more than just ‘facts’.
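The setup module also accepts a filter argument if you only want a subset of the facts, e.g.:
ansible hostname -m setup -a 'filter=ansible_eth*'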
Here is the output for my vagrant virtual machine called scdev:
scdev | success >> {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"10.0.2.15",
"192.168.10.10"
],
"ansible_all_ipv6_addresses": [
"fe80::a00:27ff:fe12:9698",
"fe80::a00:27ff:fe74:1330"
],
"ansible_architecture": "i386",
"ansible_bios_date": "12/01/2006",
"ansible_bios_version": "VirtualBox",
"ansible_cmdline": {
"BOOT_IMAGE": "/vmlinuz-3.2.0-23-generic-pae",
"quiet": true,
"ro": true,
"root": "/dev/mapper/precise32-root"
},
"ansible_date_time": {
"date": "2013-09-17",
"day": "17",
"epoch": "1379378304",
"hour": "00",
"iso8601": "2013-09-17T00:38:24Z",
"iso8601_micro": "2013-09-17T00:38:24.425092Z",
"minute": "38",
"month": "09",
"second": "24",
"time": "00:38:24",
"tz": "UTC",
"year": "2013"
},
"ansible_default_ipv4": {
"address": "10.0.2.15",
"alias": "eth0",
"gateway": "10.0.2.2",
"interface": "eth0",
"macaddress": "08:00:27:12:96:98",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "10.0.2.0",
"type": "ether"
},
"ansible_default_ipv6": {},
"ansible_devices": {
"sda": {
"holders": [],
"host": "SATA controller: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 02)",
"model": "VBOX HARDDISK",
"partitions": {
"sda1": {
"sectors": "497664",
"sectorsize": 512,
"size": "243.00 MB",
"start": "2048"
},
"sda2": {
"sectors": "2",
"sectorsize": 512,
"size": "1.00 KB",
"start": "501758"
}
},
"removable": "0",
"rotational": "1",
"scheduler_mode": "cfq",
"sectors": "167772160",
"sectorsize": "512",
"size": "80.00 GB",
"support_discard": "0",
"vendor": "ATA"
},
"sr0": {
"holders": [],
"host": "IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)",
"model": "CD-ROM",
"partitions": {},
"removable": "1",
"rotational": "1",
"scheduler_mode": "cfq",
"sectors": "2097151",
"sectorsize": "512",
"size": "1024.00 MB",
"support_discard": "0",
"vendor": "VBOX"
},
"sr1": {
"holders": [],
"host": "IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)",
"model": "CD-ROM",
"partitions": {},
"removable": "1",
"rotational": "1",
"scheduler_mode": "cfq",
"sectors": "2097151",
"sectorsize": "512",
"size": "1024.00 MB",
"support_discard": "0",
"vendor": "VBOX"
}
},
"ansible_distribution": "Ubuntu",
"ansible_distribution_release": "precise",
"ansible_distribution_version": "12.04",
"ansible_domain": "",
"ansible_eth0": {
"active": true,
"device": "eth0",
"ipv4": {
"address": "10.0.2.15",
"netmask": "255.255.255.0",
"network": "10.0.2.0"
},
"ipv6": [
{
"address": "fe80::a00:27ff:fe12:9698",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "08:00:27:12:96:98",
"module": "e1000",
"mtu": 1500,
"type": "ether"
},
"ansible_eth1": {
"active": true,
"device": "eth1",
"ipv4": {
"address": "192.168.10.10",
"netmask": "255.255.255.0",
"network": "192.168.10.0"
},
"ipv6": [
{
"address": "fe80::a00:27ff:fe74:1330",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "08:00:27:74:13:30",
"module": "e1000",
"mtu": 1500,
"type": "ether"
},
"ansible_form_factor": "Other",
"ansible_fqdn": "scdev",
"ansible_hostname": "scdev",
"ansible_interfaces": [
"lo",
"eth1",
"eth0"
],
"ansible_kernel": "3.2.0-23-generic-pae",
"ansible_lo": {
"active": true,
"device": "lo",
"ipv4": {
"address": "127.0.0.1",
"netmask": "255.0.0.0",
"network": "127.0.0.0"
},
"ipv6": [
{
"address": "::1",
"prefix": "128",
"scope": "host"
}
],
"mtu": 16436,
"type": "loopback"
},
"ansible_lsb": {
"codename": "precise",
"description": "Ubuntu 12.04 LTS",
"id": "Ubuntu",
"major_release": "12",
"release": "12.04"
},
"ansible_machine": "i686",
"ansible_memfree_mb": 23,
"ansible_memtotal_mb": 369,
"ansible_mounts": [
{
"device": "/dev/mapper/precise32-root",
"fstype": "ext4",
"mount": "/",
"options": "rw,errors=remount-ro",
"size_available": 77685088256,
"size_total": 84696281088
},
{
"device": "/dev/sda1",
"fstype": "ext2",
"mount": "/boot",
"options": "rw",
"size_available": 201044992,
"size_total": 238787584
},
{
"device": "/vagrant",
"fstype": "vboxsf",
"mount": "/vagrant",
"options": "uid=1000,gid=1000,rw",
"size_available": 42013151232,
"size_total": 484145360896
}
],
"ansible_os_family": "Debian",
"ansible_pkg_mgr": "apt",
"ansible_processor": [
"Pentium(R) Dual-Core CPU E5300 @ 2.60GHz"
],
"ansible_processor_cores": "NA",
"ansible_processor_count": 1,
"ansible_product_name": "VirtualBox",
"ansible_product_serial": "NA",
"ansible_product_uuid": "NA",
"ansible_product_version": "1.2",
"ansible_python_version": "2.7.3",
"ansible_selinux": false,
"ansible_swapfree_mb": 766,
"ansible_swaptotal_mb": 767,
"ansible_system": "Linux",
"ansible_system_vendor": "innotek GmbH",
"ansible_user_id": "neves",
"ansible_userspace_architecture": "i386",
"ansible_userspace_bits": "32",
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "virtualbox"
},
"changed": false
}
The current documentation now has a complete chapter about Discovering variables: facts and magic variables.
| Ansible | 18,839,509 | 189 |
I'm setting up an Ansible playbook to set up a couple servers. There are a couple of tasks that I only want to run if the current host is my local dev host, named "local" in my hosts file. How can I do this? I can't find it anywhere in the documentation.
I've tried this when statement, but it fails because ansible_hostname resolves to the host name generated when the machine is created, not the one you define in your hosts file.
- name: Install this only for local dev machine
pip:
name: pyramid
when: ansible_hostname == "local"
| The necessary variable is inventory_hostname.
- name: Install this only for local dev machine
pip:
name: pyramid
when: inventory_hostname == "local"
It is somewhat hidden in the documentation at the bottom of this section.
| Ansible | 21,346,390 | 187 |
I'm customizing Linux user creation inside my role. I need to let users of my role customize home_directory, group_name, name, and password.
I was wondering if there's a more flexible way to cope with default values.
I know that the code below is possible:
- name: Create default
user:
name: "default_name"
when: my_variable is not defined
- name: Create custom
user:
name: "{{my_variable}}"
when: my_variable is defined
But as I mentioned, there are a lot of optional variables, and this creates a lot of possibilities.
Is there something like the code below?
user:
name: "default_name", "{{my_variable}}"
The code should set name="default_name" when my_variable isn't defined.
I could set all variables on defaults/main.yml and create the user like that:
- name: Create user
user:
name: "{{my_variable}}"
But those variables are inside a really big hash and there are some hashes inside that hash that can't be a default.
| You can use Jinja's default:
- name: Create user
user:
name: "{{ my_variable | default('default_value') }}"
| Ansible | 35,105,615 | 181 |
I am looking for a way to perform a task when an Ansible variable is not registered or is undefined.
E.g.:
- name: some task
command: sed -n '5p' "{{app.dirs.includes}}/BUILD.info" | awk '{print $2}'
when: (! deployed_revision) AND ( !deployed_revision.stdout )
register: deployed_revision
| From the ansible documentation:
If a required variable has not been set, you can skip or fail using Jinja2’s defined test. For example:
tasks:
- name: Run the command if "foo" is defined
ansible.builtin.shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
when: foo is defined
- name: Fail if "bar" is undefined
ansible.builtin.fail: msg="Bailing out. This play requires 'bar'"
when: bar is undefined
So, in your case, when: deployed_revision is not defined should work.
| Ansible | 30,119,973 | 177 |
Is there an Ansible variable that has the absolute path to the current playbook that is executing?
Some context: I'm running/creating an Ansible script against localhost to configure a MySQL Docker container and wanting to mount the data volume relative to the Ansible playbook.
For example, let's say I've checked out a repository to ~/branch1/ and then I run ansible-playbook dev.yml. I was thinking it should save the volume to ~/branch1/.docker_volume/. If I ran it from ~/branch2 then it should configure the volume to ~/branch2/.docker_volume/.
| You can use the playbook_dir variable.
See the documentation about special variables.
For example, given the file structure:
.
├── foo
│ └── bar.txt
└── playbook.yml
When running playbook.yml, the task:
- ansible.builtin.debug:
var: "(playbook_dir ~ '/foo/bar.txt') is file"
Would give:
TASK [ansible.builtin.debug] **************************************
ok: [localhost] =>
(playbook_dir ~ '/foo/bar.txt') is file: true
| Ansible | 30,787,273 | 165 |
How do you get the current host's IP address in a role?
I know you can get the list of groups the host is a member of and the hostname of the host but I am unable to find a solution to getting the IP address.
You can get the hostname by using {{inventory_hostname}} and the group by using {{group_names}}
I have tried things like {{ hostvars[{{ inventory_hostname }}]['ansible_ssh_host'] }}
and ip="{{ hostvars.{{ inventory_hostname }}.ansible_ssh_host }}"
| A list of all addresses is stored in a fact ansible_all_ipv4_addresses, a default address in ansible_default_ipv4.address.
---
- hosts: localhost
connection: local
tasks:
- debug: var=ansible_all_ipv4_addresses
- debug: var=ansible_default_ipv4.address
Then there are addresses assigned to each network interface... In such cases you can display all the facts and find the one that has the value you want to use.
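If you need the address from within a role or template via hostvars, use plain variable names inside the brackets, with no nested curly braces (which is what broke the attempts in the question):
{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}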
| Ansible | 39,819,378 | 165 |
I would like to use the ansible-playbook command instead of 'vagrant provision'. However, setting host_key_checking=false in the hosts file does not seem to work.
# hosts file
vagrant ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
ansible_ssh_user=vagrant ansible_ssh_port=2222 ansible_ssh_host=127.0.0.1
host_key_checking=false
Is there a configuration variable outside of Vagrantfile that can override this value?
Since I originally answered this in 2014, I have updated my answer to account for more recent versions of Ansible.
Yes, you can do it at the host/inventory level (Which became possible on newer ansible versions) or global level:
inventory:
Add the following.
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
host:
Add the following.
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
The hosts/inventory options will work with connection type ssh, not paramiko. Some people may strongly argue that setting this at the inventory or host level is more secure because the scope is more limited.
global:
Ansible User Guide - Host Key Checking
You can do it either in the /etc/ansible/ansible.cfg or ~/.ansible.cfg file:
[defaults]
host_key_checking = False
Or you can setup and env variable (this might not work on newer ansible versions):
export ANSIBLE_HOST_KEY_CHECKING=False
| Ansible | 23,074,412 | 164 |
When Ansible has problems running plays against a host, it will output the name of the host into a file (ending in '.retry') in the user's home directory. These are often not used and just cause clutter. Is there a way to turn them off or put them in a different directory?
| There are two options that you can add to the [defaults] section of the ansible.cfg file that will control whether or not .retry files are created and where they are created.
[defaults]
...
retry_files_enabled = True # Create them - the default
retry_files_enabled = False # Do not create them
retry_files_save_path = "~/" # The directory they will go into
# (home directory by default)
| Ansible | 31,318,881 | 161 |
Hi, I am trying to find out how to set an environment variable with Ansible, the equivalent of this simple shell command:
export LC_ALL=C
I tried it as a shell command and got an error. I tried using the environment module and nothing happened.
What am I missing?
There are multiple ways to do this, and from your question it's not clear what you need.
1. If you need environment variable to be defined PER TASK ONLY, you do this:
- hosts: dev
tasks:
- name: Echo my_env_var
shell: "echo $MY_ENV_VARIABLE"
environment:
MY_ENV_VARIABLE: whatever_value
- name: Echo my_env_var again
shell: "echo $MY_ENV_VARIABLE"
Note that MY_ENV_VARIABLE is available ONLY for the first task, environment does not set it permanently on your system.
TASK: [Echo my_env_var] *******************************************************
changed: [192.168.111.222] => {"changed": true, "cmd": "echo $MY_ENV_VARIABLE", ... "stdout": "whatever_value"}
TASK: [Echo my_env_var again] *************************************************
changed: [192.168.111.222] => {"changed": true, "cmd": "echo $MY_ENV_VARIABLE", ... "stdout": ""}
Hopefully soon using environment will also be possible on play level, not only task level as above.
There's currently a pull request open for this feature on Ansible's GitHub: https://github.com/ansible/ansible/pull/8651
UPDATE: It's now merged as of Jan 2, 2015.
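With that merged, environment can also be set at play level, applying to every task in the play; a minimal sketch:
- hosts: dev
  environment:
    MY_ENV_VARIABLE: whatever_value
  tasks:
    - name: Echo my_env_var
      shell: "echo $MY_ENV_VARIABLE"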
2. If you want permanent environment variable + system wide / only for certain user
You should look into how you do it in your Linux distribution / shell, there are multiple places for that. For example in Ubuntu you define that in files like for example:
~/.profile
/etc/environment
/etc/profile.d directory
...
You will find Ubuntu docs about it here: https://help.ubuntu.com/community/EnvironmentVariables
In the end, to set an environment variable in e.g. Ubuntu, you can just use the lineinfile module from Ansible and add the desired line to one of those files. Consult your OS docs to know where to add it to make it permanent.
| Ansible | 27,733,511 | 158 |
This is a fragment of a playbook that I'm using (server.yml):
- name: Determine Remote User
hosts: web
gather_facts: false
roles:
- { role: remote-user, tags: [remote-user, always] }
My hosts file has different groups of servers, e.g.
[web]
x.x.x.x
[droplets]
x.x.x.x
Now I want to execute ansible-playbook -i hosts/<env> server.yml and override hosts: web from server.yml to run this playbook for [droplets].
Can I just override as a one time off thing, without editing server.yml directly?
Thanks.
| I don't think Ansible provides this feature, which it should. Here's something that you can do:
hosts: "{{ variable_host | default('web') }}"
and you can pass variable_host from either command-line or from a vars file, e.g.:
ansible-playbook server.yml --extra-vars "variable_host=newtarget(s)"
| Ansible | 33,222,641 | 156 |
What is the easiest way to create an empty file using Ansible? I know I can save an empty file into the files directory and then copy it to the remote host, but I find that somewhat unsatisfactory.
Another way is to touch a file on the remote host:
- name: create fake 'nologin' shell
file: path=/etc/nologin state=touch owner=root group=sys mode=0555
But then the file gets touched every time, showing up as a yellow line in the log, which is also unsatisfactory...
Is there any better solution to this simple problem?
| The documentation of the file module says:
If state=file, the file will NOT be created if it does not exist, see the copy or template module if you want that behavior.
So we use the copy module, using force: false to create a new empty file only when the file does not yet exist (if the file exists, its content is preserved).
- name: ensure file exists
copy:
content: ""
dest: /etc/nologin
force: false
group: sys
owner: root
mode: 0555
This is a declarative and elegant solution.
| Ansible | 28,347,717 | 153 |
I am pulling JSON via the URI module and want to write the received content out to a file. I am able to get the content and output it to the debugger so I know the content has been received, but I do not know the best practice for writing files.
| An important comment from tmoschou:
As of Ansible 2.10, The documentation for ansible.builtin.copy says:
If you need variable interpolation in copied files, use the
ansible.builtin.template module. Using a variable in the content
field will result in unpredictable output.
For more details see this and an explanation
Original answer:
You could use the copy module, with the content parameter:
- copy: content="{{ your_json_feed }}" dest=/path/to/destination/file
The docs here: copy module
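For the JSON-over-URI case in the question, a sketch (URL and paths are placeholders; note the interpolation caveat quoted above):
- name: Fetch the JSON feed
  uri:
    url: https://example.com/feed.json
    return_content: true
  register: feed
- name: Write the received JSON to a file
  copy:
    content: "{{ feed.json | to_nice_json }}"
    dest: /path/to/destination/file.json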
| Ansible | 26,638,180 | 146 |
According to the Ansible docs, a Playbook is:
...the basis for a really simple configuration management and multi-machine deployment system, unlike any that already exist, and one that is very well suited to deploying complex applications.
And, again, according to those same docs, Roles are:
...ways of automatically loading certain vars_files, tasks, and handlers based on a known file structure. Grouping content by roles also allows easy sharing of roles with other users.
However the distinction between these and their different use cases is not immediately obvious to me. For instance, if I configure my /etc/ansible/hosts file to look like:
[databases]
mydb01.example.org
mydb02.example.org
[mail_servers]
mymail01.example.org
mymail_dr.example.org
...then what is this "[databases]" entry...a role? Or the name of a playbook YAML file somewhere? Or something else?!?
If someone could explain to me the differences on these, my understanding of Ansible would be greatly enhance!
Playbook vs Role vs [databases] and similar entries in /etc/ansible/hosts
If Playbooks are defined inside of YAML files, then where are Roles defined?
Aside from the ansible.cfg living on the Ansible server, how do I add/configure Ansible with available Playbooks/Roles? For instance, when I run ansible-playbook someplaybook.yaml, how does Ansible know where to find that playbook?
|
Playbook vs Role vs [databases] and similar entries in /etc/ansible/hosts
[databases] is a single name for a group of hosts. It allows you to reference multiple hosts by a single name.
Role is a set of tasks and additional files to configure host to serve for a certain role.
Playbook is a mapping between hosts and roles.
Example from documentation describes example project. It contains two things:
Playbooks. site.yml, webservers.yml, fooservers.yml are playbooks.
Roles: roles/common/ and roles/webservers/ contain definitions of common and webservers roles accordingly.
Inside playbook (webservers.yml) you have something like:
---
- hosts: webservers <- this group of hosts defined in /etc/ansible/hosts, databases and mail_servers in example from your question
roles: <- this is list of roles to assign to these hosts
- common
- webservers
If Playbooks are defined inside of YAML files, then where are Roles defined?
They are defined inside roles/* directories. Roles are defined mostly using YAML files, but can also contain resources of any types (files/, templates/). According to documentation role definition is structured this way:
If roles/x/tasks/main.yml exists, tasks listed therein will be added to the play
If roles/x/handlers/main.yml exists, handlers listed therein will be added to the play
If roles/x/vars/main.yml exists, variables listed therein will be added to the play
If roles/x/meta/main.yml exists, any role dependencies listed therein will be added to the list of roles (1.3 and later)
Any copy tasks can reference files in roles/x/files/ without having to path them relatively or absolutely
Any script tasks can reference scripts in roles/x/files/ without having to path them relatively or absolutely
Any template tasks can reference files in roles/x/templates/ without having to path them relatively or absolutely
Any include tasks can reference files in roles/x/tasks/ without having to path them relatively or absolutely
The most important file is roles/x/tasks/main.yml, here you define tasks, which will be executed, when role is executed.
Aside from the ansible.cfg living on the Ansible server, how do I add/configure Ansible with available Playbooks/Roles? For instance, when I run ansible-playbook someplaybook.yaml, how does Ansible know where to find that playbook?
$ ansible-playbook someplaybook.yaml
Will look for a playbook inside the current directory.
$ ansible-playbook somedir/somedir/someplaybook.yaml
Will look for a playbook inside somedir/somedir/ directory.
It's your responsibility to put your project with all playbooks and roles on server. Ansible has nothing to do with that.
| Ansible | 32,101,001 | 141 |
I have an Ansible task which creates a new user on Ubuntu 12.04:
- name: Add deployment user
action: user name=deployer password=mypassword
It completes as expected, but when I log in as that user and try to sudo with the password I set, it always says it's incorrect. What am I doing wrong?
| Recently I figured out that Jinja2 filters have the capability to handle the generation of encrypted passwords. In my main.yml I'm generating the encrypted password as:
- name: Creating user "{{ uusername }}" with admin access
user:
name: "{{ uusername }}"
password: "{{ upassword | password_hash('sha512') }}"
groups: admin
append: yes
when: assigned_role == "yes"
- name: Creating users "{{ uusername }}" without admin access
user:
name: "{{ uusername }}"
password: "{{ upassword | password_hash('sha512') }}"
when: assigned_role == "no"
- name: Expiring password for user "{{ uusername }}"
shell: chage -d 0 "{{ uusername }}"
"uusername" and "upassword" are passed as --extra-vars to the playbook and notice I have used Jinja2 filter here to encrypt the passed password.
I have added a tutorial related to this to my blog.
| Ansible | 19,292,899 | 134 |
I'm using the ec2 module with ansible-playbook. I want to set a variable to the contents of a file. Here's how I'm currently doing it:
Var with the filename
shell task to cat the file
use the result of the cat to pass to the ec2 module.
Example contents of my playbook.
vars:
amazon_linux_ami: "ami-fb8e9292"
user_data_file: "base-ami-userdata.sh"
tasks:
- name: user_data_contents
shell: 'cat {{ user_data_file }}'
register: user_data_action
- name: launch ec2-instance
local_action:
...
user_data: '{{ user_data_action.stdout }}'
I assume there's a much easier way to do this, but I couldn't find it while searching Ansible docs.
| You can use lookups in Ansible in order to get the contents of a file on local machine, e.g.
user_data: "{{ lookup('file', user_data_file) }}"
Caveat: This lookup will work with local files, not remote files.
Here's a complete example from the docs:
- hosts: all
vars:
contents: "{{ lookup('file', '/etc/foo.txt') }}"
tasks:
- debug: msg="the value of foo.txt is {{ contents }}"
| Ansible | 24,003,880 | 133 |
Inside my playbook I'd like to create a variable holding the output of an external command. Afterwards I want to make use of that variable in a couple of templates.
Here are the relevant parts of the playbook:
tasks:
- name: Create variable from command
command: "echo Hello"
register: command_output
- debug: msg="{{command_output.stdout}}"
- name: Copy test service
template: src=../templates/test.service.j2 dest=/tmp/test.service
- name: Enable test service
shell: systemctl enable /tmp/test.service
- name: Start test service
shell: systemctl start test.service
and let's say this is my template:
[Unit]
Description=MyApp
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill busybox1
ExecStartPre=-/usr/bin/docker rm busybox1
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name busybox1 busybox /bin/sh -c "while true; do echo {{ string_to_echo }}; sleep 1; done"
[Install]
WantedBy=multi-user.target
(Notice the {{ string_to_echo }})
So what I'm basically looking for is a way to store the contents of command_output.stdout (which is generated/retrieved during the first task) in a new variable string_to_echo.
That variable I'd like to use in multiple templates afterwards.
I guess I could just use {{command_output.stdout}} in my templates, but I want to get rid of that .stdout for readability.
| You have to store the content as a fact:
- set_fact:
string_to_echo: "{{ command_output.stdout }}"
| Ansible | 36,059,804 | 133 |
I'd like to be able to run an Ansible task only if the host of the current play does not belong to a certain group. In semi-pseudocode:
- name: my command
command: echo stuff
when: "if {{ ansible_hostname }} not in {{ ansible_current_groups }}"
How should I do this?
| Here's another way to do this:
- name: my command
command: echo stuff
when: "'groupname' not in group_names"
group_names is a magic variable as documented here:
List of groups the current host is part of, it always reflects the inventory_hostname and ignores delegation.
| Ansible | 21,008,083 | 131 |
I'm scripting a deployment process that takes the name of the user running the ansible script (e.g. tlau) and creates a deployment directory on the remote system based on that username and the current date/time (e.g. tlau-deploy-2014-10-15-16:52).
You would think this is available in ansible facts (e.g. LOGNAME or SUDO_USER), but those are all set to either "root" or the deployment id being used to ssh into the remote system. None of those contain the local user, the one who is currently running the ansible process.
How can I script getting the name of the user running the ansible process and use it in my playbook?
| If you gather_facts, which is enabled by default for playbooks, there is a built-in variable that is set called ansible_user_id that provides the user name that the tasks are being run as. You can then use this variable in other tasks or templates with {{ ansible_user_id }}. This would save you the step of running a task to register that variable.
See: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts
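Since lookup plugins always execute on the control machine, another option is to capture the local account name from the controller's environment (a sketch, assuming the controller sets the usual USER environment variable):
- set_fact:
    local_username: "{{ lookup('env', 'USER') }}"
You can then use {{ local_username }} when building the deployment directory name.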
| Ansible | 26,394,096 | 124 |
My use case is the following:
I have a template file, and I would like to create 2 different files from that template, with the variables being filled by a different set of variables for each file.
For example, lets say I want to template the file containing the line:
mkdir -p {{myTemplateVariable}}
I would like to find a proper way to get this variable filled with "File1" and "File2". Something like:
- name: template test 1
template:
src=myTemplateFile
dest=result1
- name: template test 2
template:
src=myTemplateFile
dest=result2
where I could specify for the first templating that the variable to use is a = "File1" and for the second, b = "File2".
| With Ansible 2.x you can use vars: with tasks.
Template test.j2:
mkdir -p {{myTemplateVariable}}
Playbook:
- template: src=test.j2 dest=/tmp/File1
vars:
myTemplateVariable: myDirName
- template: src=test.j2 dest=/tmp/File2
vars:
myTemplateVariable: myOtherDir
This will pass different myTemplateVariable values into test.j2.
| Ansible | 31,142,369 | 123 |
How to escape double curly braces in Ansible 1.9.2?
For instance, how can I escape double curly braces in the following shell command?
- name: Test
shell: "docker inspect --format '{{ .NetworkSettings.IPAddress }}' instance1"
| Whenever you have problems with conflicting characters in Ansible, a rule of thumb is to output them as a string in a Jinja expression.
So instead of {{ you would use {{ '{{' }}:
- debug: msg="docker inspect --format '{{ '{{' }} .NetworkSettings.IPAddress {{ '}}' }}' instance1"
Topic "Escaping" in the Jinja2 docs.
| Ansible | 32,279,519 | 120 |
Say I execute the following.
$ cat test.sh
#!/bin/bash
echo Hello World
exit 0
$ cat Hello.yml
---
- hosts: MyTestHost
tasks:
- name: Hello yourself
script: test.sh
$ ansible-playbook Hello.yml
PLAY [MyTestHost] ****************************************************************
GATHERING FACTS ***************************************************************
ok: [MyTestHost]
TASK: [Hello yourself] ********************************************************
ok: [MyTestHost]
PLAY RECAP ********************************************************************
MyTestHost : ok=2 changed=0 unreachable=0 failed=0
$
I know for sure that it was successful.
Where/how do I see the "Hello World" echo'ed/printed by my script on the remote host (MyTestHost)? Or the return/exit code of script?
My research shows it would be possible to write a plugin to intercept module execution callbacks, or something along those lines, and write a log file. I would prefer to not waste my time with that.
E.g. something like the stdout in below (note that I'm running ansible and not ansible-playbook):
$ ansible plabb54 -i /project/plab/svn/plab-maintenance/ansible/plab_hosts.txt -m script -a ./test.sh
plabb54 | success >> {
"rc": 0,
"stderr": "",
"stdout": "Hello World\n"
}
$
| If you pass the -v flag to ansible-playbook on the command line, you'll see the stdout and stderr for each task executed:
$ ansible-playbook -v playbook.yaml
Ansible also has built-in support for logging. Add the following lines to your ansible configuration file:
[defaults]
log_path=/path/to/logfile
Ansible will look in several places for the config file:
ansible.cfg in the current directory where you ran ansible-playbook
~/.ansible.cfg
/etc/ansible/ansible.cfg
| Ansible | 18,794,808 | 114 |
I have a copy task inside a role and I was expecting that the src location would be relative to the role itself, not the playbook that calls the roles.
How do I make this work and use the files from myrole/files in a task inside myrole/tasks? I don't want to include the role name as part of the path, as it does not make much sense; if I do, it will break if I duplicate the role.
| If you do not provide any path at all, just the filename, Ansible will pick it automatically from the files directory of the role.
- copy:
src: foo.conf
dest: /etc/foo.conf
Additionally, since Ansible 1.8, there is the variable role_path which you could use in your copy task.
- copy:
src: "{{ role_path }}/files/foo.conf"
dest: /etc/foo.conf
| Ansible | 35,487,756 | 114 |
I tried this:
- command: ./configure chdir=/src/package/
- command: /usr/bin/make chdir=/src/package/
- command: /usr/bin/make install chdir=/src/package/
which works, but I was hoping for something neater.
So I tried this (from https://stackoverflow.com/questions/24043561/multiple-commands-in-the-same-line-for-bruker-topspin), which gives me back "no such file or directory":
- command: ./configure;/usr/bin/make;/usr/bin/make install chdir=/src/package/
I tried this too: https://u.osu.edu/hasnan.1/2013/12/16/ansible-run-multiple-commands-using-command-module-and-with-items/
but I couldn't find the right syntax to put:
- command: "{{ item }}" chdir=/src/package/
with_items:
./configure
/usr/bin/make
/usr/bin/make install
That does not work, saying there is a quote issue.
| To run multiple shell commands with ansible you can use the shell module with a multi-line string (note the pipe after shell:), as shown in this example:
- name: Build nginx
shell: |
cd nginx-1.11.13
sudo ./configure
sudo make
sudo make install
| Ansible | 24,851,575 | 112 |
I have a variable named "network" registered in Ansible:
{
"addresses": {
"private_ext": [
{
"type": "fixed",
"addr": "172.16.2.100"
}
],
"private_man": [
{
"type": "fixed",
"addr": "172.16.1.100"
},
{
"type": "floating",
"addr": "10.90.80.10"
}
]
}
}
Is it possible to get the IP address ("addr") with type="floating" doing something like this?
- debug: var={{ network.addresses.private_man | filter type="fixed" | get "addr" }}
I know the syntax is wrong but you get the idea.
| To filter a list of dicts you can use the selectattr filter together with the equalto test:
network.addresses.private_man | selectattr("type", "equalto", "fixed")
The above requires Jinja2 v2.8 or later (regardless of Ansible version).
Ansible also has the tests match and search, which take regular expressions:
match will require a complete match in the string, while search will require a match inside of the string.
network.addresses.private_man | selectattr("type", "match", "^fixed$")
To reduce the list of dicts to a list of strings, so you only get a list of the addr fields, you can use the map filter:
... | map(attribute='addr') | list
Or if you want a comma separated string:
... | map(attribute='addr') | join(',')
Combined, it would look like this.
- debug: msg={{ network.addresses.private_man | selectattr("type", "equalto", "fixed") | map(attribute='addr') | join(',') }}
| Ansible | 31,895,602 | 112 |
I can do that in shell using a combination of getent and awk like this:
getent passwd $user | awk -F: '{ print $6 }'
For reference, in Puppet I can use a custom fact, like this:
require 'etc'
Etc.passwd { |user|
Facter.add("home_#{user.name}") do
setcode do
user.dir
end
end
}
which makes the user's home directory available as a home_<user name> fact.
How do I get the home directory of an arbitrary remote user?
| Ansible (from 1.4 onwards) already reveals environment variables for the user under the ansible_env variable.
- hosts: all
tasks:
- name: debug through ansible.env
debug: var=ansible_env.HOME
Unfortunately you can apparently only use this to get environment variables for the connected user as this playbook and output shows:
- hosts: all
tasks:
- name: debug specified user's home dir through ansible.env
debug: var=ansible_env.HOME
become: true
become_user: "{{ user }}"
- name: debug specified user's home dir through lookup on env
debug: var=lookup('env','HOME')
become: true
become_user: "{{ user }}"
OUTPUT:
vagrant@Test-01:~$ ansible-playbook -i "inventory/vagrant" env_vars.yml -e "user=testuser"
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [192.168.0.30]
TASK: [debug specified user's home dir through ansible.env] *******************
ok: [192.168.0.30] => {
"var": {
"/home/vagrant": "/home/vagrant"
}
}
TASK: [debug specified user's home dir through lookup on env] *****************
ok: [192.168.0.30] => {
"var": {
"/home/vagrant": "/home/vagrant"
}
}
PLAY RECAP ********************************************************************
192.168.0.30 : ok=3 changed=0 unreachable=0 failed=0
As with anything in Ansible, if you can't get a module to give you what you want then you are always free to shell out (although this should be used sparingly as it may be fragile and will be less descriptive) using something like this:
- hosts: all
tasks:
- name: get user home directory
shell: >
getent passwd {{ user }} | awk -F: '{ print $6 }'
changed_when: false
register: user_home
- name: debug output
debug:
var: user_home.stdout
There may well be a cleaner way of doing this and I'm a little surprised that using become_user to switch to the user specified doesn't seem to affect the env lookup but this should give you what you want.
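Ansible also ships a getent module (since 2.1) that wraps this lookup without shelling out. It stores the results in the getent_passwd fact, where the home directory should be at index 4 of the value list; a sketch:
- name: Get the user's passwd entry
  getent:
    database: passwd
    key: "{{ user }}"
- name: Show the home directory
  debug:
    var: getent_passwd[user][4]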
| Ansible | 33,343,215 | 104 |
I'm trying to use Ansible to run the following two commands:
sudo apt-get update && sudo apt-get upgrade -y
I know with ansible you can use:
ansible all -m shell -u user -K -a "uptime"
Would running the following command do it? Or do I have to use some sort of raw command?
ansible all -m shell -u user -K -a "sudo apt-get update && sudo apt-get upgrade -y"
| I wouldn't recommend using shell for this, as Ansible has the apt module designed for just this purpose. I've detailed using apt below.
In a playbook, you can update and upgrade like so:
- name: Update and upgrade apt packages
become: true
apt:
upgrade: yes
update_cache: yes
cache_valid_time: 86400 #One day
The cache_valid_time value can be omitted. Its purpose from the docs:
Update the apt cache if its older than the cache_valid_time. This
option is set in seconds.
So it's good to include if you don't want to update the cache when it has only recently been updated.
To do this as an ad-hoc command you can run:
$ ansible all -m apt -a "upgrade=yes update_cache=yes cache_valid_time=86400" --become
ad-hoc commands are described in detail here
Note that I am using --become and become: true. This is an example of typical privilege escalation through Ansible. You use -u user and -K (ask for privilege escalation password). Use whichever works for you, this is just to show you the most common form.
| Ansible | 41,535,838 | 102 |
- name: Go to the folder
command: chdir=/opt/tools/temp
When I run my playbook, I get:
TASK: [Go to the folder] *****************************
failed: [host] => {"failed": true, "rc": 256}
msg: no command given
Any help is much appreciated.
| There's no concept of current directory in Ansible. You can specify current directory for specific task, like you did in your playbook. The only missing part was the actual command to execute. Try this:
- name: Go to the folder and execute command
command: chdir=/opt/tools/temp ls
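In YAML dictionary syntax, the same task reads:
- name: Go to the folder and execute command
  command: ls
  args:
    chdir: /opt/tools/temp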
| Ansible | 19,369,931 | 97 |
I have been waiting for Ansible 2.3 as it was going to introduce the encrypt_string feature.
Unfortunately, I'm not sure how I can read the encrypted string back.
I did try decrypt_string, decrypt (the file), view (the file) and nothing works.
cat test.yml
---
test: !vault |
$ANSIBLE_VAULT;1.1;AES256
37366638363362303836383335623066343562666662386233306537333232396637346463376430
3664323265333036663736383837326263376637616466610a383430623562633235616531303861
66313432303063343230613665323930386138613334303839626131373033656463303736366166
6635346135636437360a313031376566303238303835353364313434363163343066363932346165
6136
The error I'm getting is ERROR! input is not vault encrypted data for test.yml
How can I decrypt the string so I know what it's value without the need to run the play?
| You can also do with plain ansible command for respective host/group/inventory combination, e.g.:
$ ansible my_server -m debug -a 'var=my_secret'
my_server | SUCCESS => {
"my_secret": "373861663362363036363361663037373661353137303762"
}
| Ansible | 43,467,180 | 97 |
In response to a change, I have multiple related tasks that should run.
How do I write an Ansible handler with multiple tasks?
For example, I would like a handler that restarts a service only if already started:
- name: Restart conditionally
shell: check_is_started.sh
register: result
- name: Restart conditionally step 2
service: name=service state=restarted
when: result
There is a proper solution to this problem as of Ansible 2.2.
handlers can also “listen” to generic topics, and tasks can notify those topics as follows:
handlers:
- name: restart memcached
service: name=memcached state=restarted
listen: "restart web services"
- name: restart apache
service: name=apache state=restarted
listen: "restart web services"
tasks:
- name: restart everything
command: echo "this task will restart the web services"
notify: "restart web services"
This use makes it much easier to trigger multiple handlers. It also decouples handlers from their names, making it easier to share handlers among playbooks and roles
Specifically to the question, this should work:
- name: Check if restarted
shell: check_is_started.sh
register: result
listen: Restart processes
- name: Restart conditionally step 2
service: name=service state=restarted
when: result
listen: Restart processes
and in the task, notify handlers via 'Restart processes'
https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html#naming-handlers
| Ansible | 31,618,967 | 94 |
In Ansible (1.9.4) or 2.0.0
I ran the following action:
- debug: msg="line1 \n {{ var2 }} \n line3 with var3 = {{ var3 }}"
$ cat roles/setup_jenkins_slave/tasks/main.yml
- debug: msg="Installing swarm slave = {{ slave_name }} at {{ slaves_dir }}/{{ slave_name }}"
tags:
- koba
- debug: msg="1 == Slave properties = fsroot[ {{ slave_fsroot }} ], master[ {{ slave_master }} ], connectingToMasterAs[ {{ slave_user }} ], description[ {{ slave_desc }} ], No.Of.Executors[ {{ slave_execs }} ], LABELs[ {{ slave_labels }} ], mode[ {{ slave_mode }} ]"
tags:
- koba
- debug: msg="print(2 == Slave properties = \n\nfsroot[ {{ slave_fsroot }} ],\n master[ {{ slave_master }} ],\n connectingToMasterAs[ {{ slave_user }} ],\n description[ {{ slave_desc }} ],\n No.Of.Executors[ {{ slave_execs }} ],\n LABELs[ {{ slave_labels }} ],\n mode[ {{ slave_mode }} ])"
tags:
- koba
But this is not printing the variable with new lines (for the 3rd debug action)?
The debug module supports arrays, so you can do it like this:
debug:
msg:
- "First line"
- "Second line"
The output:
ok: [node1] => {
"msg": [
"First line",
"Second line"
]
}
Or you can use the method from this answer:
In YAML, how do I break a string over multiple lines?
| Ansible | 34,188,167 | 92 |
I'm designing a kind of playbook lib with individual tasks
so in the usual roles repo, I have something like:
roles
├── common
│ └── tasks
│ ├── A.yml
│ ├── B.yml
│ ├── C.yml
│ ├── D.yml
│ ├── login.yml
│ ├── logout.yml
│ └── save.yml
├── custom_stuff_workflow
│ └── tasks
│ └── main.yml
└── other_stuff_workflow
└── tasks
└── main.yml
my main.yml in custom_stuff_workflow then contain something like:
---
- include: login.yml
- include: A.yml
- include: C.yml
- include: save.yml
- include: logout.yml
and this one in the other workflow:
---
- include: login.yml
- include: B.yml
- include: A.yml
- include: D.yml
- include: save.yml
- include: logout.yml
I can't find a way to do it in a natural way:
one way that worked was having all tasks in a single role and tagging the relevant tasks while including a custom_stuff_workflow
The problem I have with that is that tags cannot be set in the calling playbook: it's only to be set at command line
As I'm distributing this Ansible repo to many people in the company, I can't rely on command-line invocations (it would be nice to have a #! header in the yml to be processed by the ansible-playbook command)
I could also copy the relevant tasks (inside common in the above tree) in each workflow, but I don't want to repeat them around
Can someone see a solution to achieve what I'd like without repeating the tasks over different roles?
I guess the corner stone of my problem is that I define tasks as individual and it looks not natural in ansible...
Thanks a lot
PS: note that the tasks in the workflow have to be done in specific order and the only natural steps to abstract would be the login and save/logout
PPS: I've seen this question How do I call a role from within another role in Ansible? but it does not solve my problem as it's invoking a full role and not a subset of the tasks in a role
| Just in case someone else bumps into this, version 2.2 of Ansible now has include_role. You can now do something like this:
---
- name: do something
include_role:
name: common
tasks_from: login
Check out the documentation here.
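Applied to the layout in the question, custom_stuff_workflow/tasks/main.yml could then become (a sketch, assuming the shared task files move into roles/common/tasks/):
- include_role:
    name: common
    tasks_from: login
- include_role:
    name: common
    tasks_from: A
- include_role:
    name: common
    tasks_from: C
- include_role:
    name: common
    tasks_from: save
- include_role:
    name: common
    tasks_from: logout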
| Ansible | 30,192,490 | 91 |
In Ansible I've used register to save the results of a task in the variable people. Omitting the stuff I don't need, it has this structure:
{
"results": [
{
"item": {
"name": "Bob"
},
"stdout": "male"
},
{
"item": {
"name": "Thelma"
},
"stdout": "female"
}
]
}
I'd like to use a subsequent set_fact task to generate a new variable with a dictionary like this:
{
"Bob": "male",
"Thelma": "female"
}
I guess this might be possible but I'm going round in circles with no luck so far.
| I think I got there in the end.
The task is like this:
- name: Populate genders
set_fact:
genders: "{{ genders|default({}) | combine( {item.item.name: item.stdout} ) }}"
with_items: "{{ people.results }}"
It loops through each of the dicts (item) in the people.results array, each time creating a new dict like {Bob: "male"}, and combine()s that new dict into the genders dictionary, which ends up like:
{
"Bob": "male",
"Thelma": "female"
}
It assumes the keys (the name in this case) will be unique.
I then realised I actually wanted a list of dictionaries, as it seems much easier to loop through using with_items:
- name: Populate genders
set_fact:
genders: "{{ genders|default([]) + [ {'name': item.item.name, 'gender': item.stdout} ] }}"
with_items: "{{ people.results }}"
This keeps combining the existing list with a list containing a single dict. We end up with a genders array like this:
[
{'name': 'Bob', 'gender': 'male'},
{'name': 'Thelma', 'gender': 'female'}
]
| Ansible | 35,605,603 | 87 |
I am planning to execute a shell script on a remote server using Ansible playbook.
blank test.sh file:
touch test.sh
Playbook:
---
- name: Transfer and execute a script.
hosts: server
user: test_user
sudo: yes
tasks:
- name: Transfer the script
copy: src=test.sh dest=/home/test_user mode=0777
- name: Execute the script
local_action: command sudo sh /home/test_user/test.sh
When I run the playbook, the transfer successfully occurs but the script is not executed.
You can use the script module, which transfers a local script to the remote node and runs it there. (Note that local_action in the question's playbook runs the command on the control machine, not the remote host, which is why the script never executed remotely.)
Example
- name: Transfer and execute a script.
hosts: all
tasks:
- name: Copy and Execute the script
script: /home/user/userScript.sh
| Ansible | 21,160,776 | 86 |
I'm using Ansible to copy a directory (900 files, 136MBytes) from one host to another:
---
- name: copy a directory
copy: src={{some_directory}} dest={{remote_directory}}
This operation takes an incredible 17 minutes, while a simple scp -r <src> <dest> takes a mere 7 seconds.
I have tried Accelerated mode, but to no avail, even though according to the Ansible docs it
can be anywhere from 2-6x faster than SSH with ControlPersist enabled, and 10x faster than paramiko.
| TLDR: use synchronize instead of copy.
Here's the copy command I'm using:
- copy: src=testdata dest=/tmp/testdata/
As a guess, I assume the sync operations are slow. The files module documentation implies this too:
The "copy" module recursively copy facility does not scale to lots (>hundreds) of files. For alternative, see synchronize module, which is a wrapper around rsync.
Digging into the source shows each file is processed with SHA1. That's implemented using hashlib.sha1. A local test implies that only takes 10 seconds for 900 files (that happen to take 400mb of space).
So, the next avenue. The copy is handled with module_utils/basic.py's atomic_move method. I'm not sure if accelerated mode helps (it's a mostly-deprecated feature), but I tried pipelining, putting this in a local ansible.cfg:
[ssh_connection]
pipelining=True
It didn't appear to help; my sample took 24 minutes to run. There's obviously a loop that checks a file, uploads it, fixes permissions, then starts on the next file. That's a lot of commands, even if the ssh connection is left open. Reading between the lines it makes a little bit of sense: the "file transfer" can't be done in pipelining, I think.
So, following the hint to use the synchronize command:
- synchronize: src=testdata dest=/tmp/testdata/
That took 18 seconds, even with pipeline=False. Clearly, the synchronize command is the way to go in this case.
Keep in mind synchronize uses rsync, which defaults to mod-time and file size. If you want or need checksumming, add checksum=True to the command. Even with checksumming enabled the time didn't really change- still 15-18 seconds. I verified the checksum option was on by running ansible-playbook with -vvvv, that can be seen here:
ok: [testhost] => {"changed": false, "cmd": "rsync --delay-updates -FF --compress --checksum --archive --rsh 'ssh -o StrictHostKeyChecking=no' --out-format='<<CHANGED>>%i %n%L' \"testdata\" \"user@testhost:/tmp/testdata/\"", "msg": "", "rc": 0, "stdout_lines": []}
| Ansible | 27,985,334 | 85 |
The settings
Consider an Ansible inventory file similar to the following example:
[san_diego]
host1
host2
[san_francisco]
host3
host4
[west_coast]
san_diego
san_francisco
[west_coast:vars]
db_server=foo.example.com
db_host=5432
db_password=top secret password
The problem
I would like to store some of the vars (like db_password) in an Ansible vault, but not the entire file.
How can a vault-encrypted ansible file be imported into an unencrypted inventory file?
What I've tried
I have created an encrypted vars file and tried importing it with:
include: secrets
To which ansible-playbook responded with:
ERROR: variables assigned to group must be in key=value form
Probably because it tried to parse the include statement as a variable.
| Since Ansible 2.3 you can encrypt a Single Encrypted Variable.
IMO, a walkthrough is needed as the doco's seem pretty terse.
Given an example of: mysql_password: password123 (within main.yml)
Run a command such as:
ansible-vault encrypt_string password123 --ask-vault-pass
This will produce:
!vault |
$ANSIBLE_VAULT;1.1;AES256
66386439653236336462626566653063336164663966303231363934653561363964363833
3136626431626536303530376336343832656537303632313433360a626438346336353331
Encryption successful
paste this into your main.yml:
mysql_password: !vault |
$ANSIBLE_VAULT;1.1;AES256
66386439653236336462626566653063336164663966303231363934653561363964363833
3136626431626536303530376336343832656537303632313433360a626438346336353331
run playbook:
i.e., ansible-playbook -i hosts main.yml --ask-vault-pass
Verify via debug:
- debug:
msg: "mysql Pwd: {{ mysql_password }}"
| Ansible | 30,209,062 | 85 |
I am using Ansible to deploy my project and I am trying to check whether a specified package is installed, but I have a problem with the task. Here it is:
- name: Check if python-apt is installed
command: dpkg -l | grep python-apt
register: python_apt_installed
ignore_errors: True
And here is the problem:
$ ansible-playbook -i hosts idempotent.yml
PLAY [lxc-host] ***************************************************************
GATHERING FACTS ***************************************************************
ok: [10.0.3.240]
TASK: [idempotent | Check if python-apt is installed] *************************
failed: [10.0.3.240] => {"changed": true, "cmd": ["dpkg", "-l", "|", "grep", "python-apt"], "delta": "0:00:00.015524", "end": "2014-07-10 14:41:35.207971", "rc": 2, "start": "2014-07-10 14:41:35.192447"}
stderr: dpkg-query: error: package name in specifier '|' is illegal: must start with an alphanumeric character
...ignoring
PLAY RECAP ********************************************************************
10.0.3.240 : ok=2 changed=1 unreachable=0 failed=0
Why is this character '|' illegal?
| From the doc:
command - Executes a command on a remote node
The command module takes the command name followed by a list of
space-delimited arguments. The given command will be executed on all
selected nodes. It will not be processed through the shell, so
variables like $HOME and operations like "<", ">", "|", and "&" will
not work (use the shell module if you need these features).
shell - Executes a commands in nodes
The shell module takes the command name followed by a list of space-delimited arguments.
It is almost exactly like the command module but runs the command
through a shell (/bin/sh) on the remote node.
Therefore you have to use shell: dpkg -l | grep python-apt.
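So the task from the question becomes (ignore_errors is kept because grep exits non-zero when nothing matches, which would otherwise fail the play):
- name: Check if python-apt is installed
  shell: dpkg -l | grep python-apt
  register: python_apt_installed
  ignore_errors: true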
| Ansible | 24,679,591 | 83 |
I am trying to copy the content of dist directory to nginx directory.
- name: copy html file
copy: src=/home/vagrant/dist/ dest=/usr/share/nginx/html/
But when I execute the playbook it throws an error:
TASK [NGINX : copy html file] **************************************************
fatal: [172.16.8.200]: FAILED! => {"changed": false, "failed": true, "msg": "attempted to take checksum of directory:/home/vagrant/dist/"}
How can I copy a directory that has another directory and a file inside?
To copy a directory's content to another directory you CAN use Ansible's copy module:
- name: Copy content of directory 'files'
copy:
src: files/ # note the '/' <-- !!!
dest: /tmp/files/
From the docs about the src parameter:
If (src!) path is a directory, it is copied recursively...
... if path ends with "/", only inside contents of that directory are copied to destination.
... if it does not end with "/", the directory itself with all contents is copied.
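If the source directory already lives on the managed host itself (as /home/vagrant/dist in the question suggests), add remote_src; note that recursive copying with remote_src is only supported from Ansible 2.8 on:
- name: Copy content of a directory that exists on the remote host
  copy:
    src: /home/vagrant/dist/     # trailing '/' again: contents only
    dest: /usr/share/nginx/html/
    remote_src: true             # recursive copy needs Ansible 2.8+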
| Ansible | 35,488,433 | 82 |
I wonder if there is a way for Ansible to access local environment variables.
The documentation references accessing variables on the target machine:
{{ lookup('env', 'SOMEVAR') }}
Is there a way to access environment variables on the source machine?
| I have a Linux vm running on osx, and for me:
lookup('env', 'HOME') returns "/Users/Gonzalo" (the HOME variable from osx), while ansible_env.HOME returns "/root" (the HOME variable from the vm).
Worth mentioning: ansible_env.VAR fails if the variable does not exist, while lookup('env', 'VAR') does not fail.
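A quick task to see the difference side by side on a given host:
- debug:
    msg: "lookup sees {{ lookup('env', 'HOME') }}, facts see {{ ansible_env.HOME }}"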
| Ansible | 21,422,158 | 81 |
In Ansible, I need to check whether a particular line is present in a file or not. Basically, I need to convert the following command to an Ansible task. My goal is to only check.
grep -Fxq "127.0.0.1" /tmp/my.conf
| Use check_mode, register and failed_when in concert. This fails the task if the lineinfile module would make any changes to the file being checked. Check_mode ensures nothing will change even if it otherwise would.
- name: "Ensure /tmp/my.conf contains '127.0.0.1'"
lineinfile:
name: /tmp/my.conf
line: "127.0.0.1"
state: present
check_mode: yes
register: conf
failed_when: (conf is changed) or (conf is failed)
| Ansible | 30,786,263 | 81 |
I would like to quickly monitor some hosts with commands like ps, dstat, etc. using ansible-playbook.
ansible -m shell -a "ps -eo pcpu,user,args | sort -r -k1 | head -n5"
and it nicely prints all std output for every host like this:
localhost | success | rc=0 >>
0.0 root /sbin/init
0.0 root [kthreadd]
0.0 root [ksoftirqd/0]
0.0 root [migration/0]
otherhost | success | rc=0 >>
0.0 root /sbin/init
0.0 root [kthreadd]
0.0 root [ksoftirqd/0]
0.0 root [migration/0]
However this requires me to keep a bunch of shell scripts around for every task which is not very 'ansible' so I put this in a playbook:
---
-
hosts: all
gather_facts: no
tasks:
- shell: ps -eo pcpu,user,args | sort -r -k1 | head -n5
and run it with -vv, but the output basically shows the dictionary content, and newlines are not printed as such, so this results in an unreadable mess like this:
changed: [localhost] => {"changed": true, "cmd": "ps -eo pcpu,user,args | sort -r -k1
head -n5 ", "delta": "0:00:00.015337", "end": "2013-12-13 10:57:25.680708", "rc": 0,
"start": "2013-12-13 10:57:25.665371", "stderr": "", "stdout": "47.3 xxx Xvnc4 :24
-desktop xxx:24 (xxx) -auth /home/xxx/.Xauthority -geometry 1920x1200\n
....
I also tried adding register: var and then a 'debug' task to show {{ var.stdout }}, but the result is of course the same.
Is there a way to get nicely formatted output from a command's stdout/stderr when run via a playbook? I can think of a number of possible ways (format output using sed? redirect output to file on the host then get that file back and echo it to the screen?), but with my limited knowledge of the shell/ansible it would take me a day to just try it out.
| The debug module could really use some love, but at the moment the best you can do is use this:
- hosts: all
gather_facts: no
tasks:
- shell: ps -eo pcpu,user,args | sort -r -k1 | head -n5
register: ps
- debug: var=ps.stdout_lines
It gives an output like this:
ok: [host1] => {
"ps.stdout_lines": [
"%CPU USER COMMAND",
" 1.0 root /usr/bin/python",
" 0.6 root sshd: root@notty ",
" 0.2 root java",
" 0.0 root sort -r -k1"
]
}
ok: [host2] => {
"ps.stdout_lines": [
"%CPU USER COMMAND",
" 4.0 root /usr/bin/python",
" 0.6 root sshd: root@notty ",
" 0.1 root java",
" 0.0 root sort -r -k1"
]
}
| Ansible | 20,563,639 | 80 |
I'd like to allow anyone to list and read all files in my directory tree, but I don't want to make the files executable:
dir
\subdir1
file1
\subdir2
file2
...
\subdirX
fileX
The following task makes my directories and files readable, but it makes all the files executable as well:
- name: Make my directory tree readable
file:
path: dir
mode: 0755
recurse: yes
On the other hand, if I choose mode 0644, then all my files are not executable, but I'm not able to list my directories.
Is it possible to set mode 755 for all directories and 644 for all files in a directory tree?
| Since version 1.8, Ansible supports symbolic modes. Thus, the following would perform the task you want:
- name: Make my directory tree readable
file:
path: dir
mode: u=rwX,g=rX,o=rX
recurse: yes
Because X (instead of x) only applies to directories or files with at least one x bit set.
| Ansible | 28,778,738 | 80 |
How can I test that stderr is non-empty:
- name: Check script
shell: . {{ venv_name }}/bin/activate && myscritp.py
args:
chdir: "{{ home }}"
sudo_user: "{{ user }}"
register: test_myscript
- debug: msg='myscritp is Ok'
when: not test_myscript.stderr
So if there is no error I should read:
TASK: [deploy | debug msg='critp is Ok] *******
ok: [vagrant] => {
"msg": "myscritp is Ok"
}
In case the stderr is not empty, a FATAL error occurs.
| (ansible 2.9.6 ansible-lint 5.3.2)
See ansible-lint rules. The condition below results in error: 'empty-string-compare: Don't compare to empty string'
when: test_myscript.stderr != ""
Correct syntax is
when: test_myscript.stderr | length > 0
Quoting from source code
Use when: var|length > 0 rather than when: var != "" (or conversely when: var|length == 0 rather than when: var == "")
Notes
Test empty bare variable e.g.
- debug:
msg: "Empty string '{{ var }}' evaluates to False"
when: not var
vars:
var: ''
- debug:
msg: "Empty list {{ var }} evaluates to False"
when: not var
vars:
var: []
give
msg: Empty string '' evaluates to False
msg: Empty list [] evaluates to False
But testing a non-empty bare string variable depends on CONDITIONAL_BARE_VARS. With ANSIBLE_CONDITIONAL_BARE_VARS=false the condition below works fine, but with ANSIBLE_CONDITIONAL_BARE_VARS=true it will fail
- debug:
msg: "String '{{ var }}' evaluates to True"
when: var
vars:
var: 'abc'
gives
fatal: [localhost]: FAILED! =>
msg: |-
The conditional check 'var' failed. The error was: error while
evaluating conditional (var): 'abc' is undefined
An explicit cast to Boolean prevents the error but evaluates to False, i.e. the task will always be skipped (unless var='True'). When the bool filter is used, the options ANSIBLE_CONDITIONAL_BARE_VARS=true and ANSIBLE_CONDITIONAL_BARE_VARS=false have no effect
- debug:
msg: "String '{{ var }}' evaluates to True"
when: var|bool
vars:
var: 'abc'
gives
skipping: [localhost]
Quoting from Porting guide 2.8 Bare variables in conditionals
- include_tasks: teardown.yml
when: teardown
- include_tasks: provision.yml
when: not teardown
" based on a variable you define as a string (with quotation marks around it):"
In Ansible 2.7 and earlier, the two conditions above are evaluated as True and False respectively if teardown: 'true'
In Ansible 2.7 and earlier, both conditions were evaluated as False if teardown: 'false'
In Ansible 2.8 and later, you have the option of disabling conditional bare variables, so when: teardown always evaluates as True, and when: not teardown always evaluates as False when teardown is a non-empty string (including 'true' or 'false')
Quoting from CONDITIONAL_BARE_VARS
Expect that this setting eventually will be deprecated after 2.12
| Ansible | 36,912,726 | 80 |
Recently I started digging into Ansible and writing my own playbooks. However, I have trouble understanding the difference between become and become_user.
As I understand it become_user is something similar to su <username>, and become means something like sudo su or "perform all commands as a sudo user". But sometimes these two directives are mixed.
Could you explain the correct meaning of them?
| become_user defines the user which is being used for privilege escalation.
become simply is a flag to either activate or deactivate the same.
Here are three examples which should make it clear:
This task will be executed as root, because root is the default user for privilege escalation:
- do: something
become: true
This task will be executed as user someone, because the user is explicitly set:
- do: something
become: true
become_user: someone
This task will not do anything with become_user, because become is not set and defaults to false/no:
- do: something
become_user: someone
...unless become was set to true on a higher level, e.g. a block, include, the playbook or globally via group or host-vars.
Here is an example with a block:
- become: true
block:
- do: something
become_user: someone
- do: something
- do: something
become: false
become_user: someone
- do: something
become: false
The 1st is ran as user someone, the 2nd as root. The 3rd and 4th tasks have become explicitly disabled, so they will be ran as the user who executed the playbook.
As I understand it become_user is something similar to su , and become means something like sudo su or "perform all commands as a sudo user".
The default become_method is sudo, so sudo do something or sudo -u <become_user> do something
Fineprint: Of course "do: something" is pseudocode. Put your actual Ansible module there.
| Ansible | 38,290,143 | 80 |
What is the best way to chmod +x a file with Ansible?
Converting the following script to ansible format.
mv /tmp/metadata.sh /usr/local/bin/meta.sh
chmod +x /usr/local/bin/meta.sh
This is what I have so far:
- name: move /tmp/metadata.sh to /usr/local/bin/metadata.sh
command: mv /tmp/metadata.sh /usr/local/bin/metadata.sh
Ansible has a mode parameter in the file module exactly for this purpose.
To add execute permission for everyone (i.e. chmod a+x on command line):
- name: Changing perm of "/foo/bar.sh", adding "+x"
file: dest=/foo/bar.sh mode=a+x
Symbolic modes are supported since version 1.8, on a prior version you need to use the octal bits.
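To cover both lines of the original script in one play, a sketch (the creates guard keeps the mv from re-running once the file is in place):
- name: Move the script into place
  command: mv /tmp/metadata.sh /usr/local/bin/meta.sh
  args:
    creates: /usr/local/bin/meta.sh

- name: Make it executable
  file:
    dest: /usr/local/bin/meta.sh
    mode: a+x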
| Ansible | 40,505,772 | 80 |
I am learning Ansible. I have a playbook to clean up resources, and I want the playbook to ignore every error and keep going till the end, and then fail at the end if there were errors.
I can ignore errors with
ignore_errors: yes
If it was one task, I could do something like ( from ansible error catching)
- name: this command prints FAILED when it fails
command: /usr/bin/example-command -x -y -z
register: command_result
ignore_errors: True
- name: fail the play if the previous command did not succeed
fail: msg="the command failed"
when: "'FAILED' in command_result.stderr"
How do I fail at the end ? I have several tasks, what would my "When" condition be?
| Use Fail module.
Use ignore_errors with every task that you need to ignore in case of errors.
Set a flag (say, result = false) whenever there is a failure in any task execution
At the end of the playbook, check if flag is set, and depending on that, fail the execution
- fail: msg="The execution has failed because of errors."
when: flag == "failed"
Update:
Use register to store the result of a task like you have shown in your example. Then, use a task like this:
- name: Set flag
  set_fact: flag=failed
when: "'FAILED' in command_result.stderr"
| Ansible | 38,876,487 | 79 |
In Ansible 2.1, I have a role being called by a playbook that needs access to a host file variable. Any thoughts on how to access it?
I am trying to access the ansible_ssh_host in the test1 section of the following inventory host file:
[test1]
test-1 ansible_ssh_host=abc.def.ghi.jkl ansible_ssh_port=1212
[test2]
test2-1 ansible_ssh_host=abc.def.ghi.mno ansible_ssh_port=1212
[test3]
test3-1 ansible_ssh_host=abc.def.ghi.pqr ansible_ssh_port=1212
test3-2 ansible_ssh_host=abc.def.ghi.stu ansible_ssh_port=1212
[all:children]
test1
test2
test3
I have tried accessing the role in the following fashions:
{{ hostvars.ansible_ssh_host }}
and
{{ hostvars.test1.ansible_ssh_host }}
I get this error:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "'ansible.vars.hostvars.HostVars object' has no attribute 'ansible'"}
| You are on the right track about hostvars.
This magic variable is used to access information about other hosts.
hostvars is a hash with inventory hostnames as keys.
To access fields of each host, use hostvars['test-1'], hostvars['test2-1'], etc.
ansible_ssh_host is deprecated in favor of ansible_host since 2.0.
So you should first remove "_ssh" from inventory hosts arguments (i.e. to become "ansible_user", "ansible_host", and "ansible_port"), then in your role call it with:
{{ hostvars['your_host_name'].ansible_host }}
| Ansible | 40,027,847 | 79 |
I have a playbook which should configure an app on a specified IP, and then connect to this app to configure stuff inside.
I've got a problem: I need to restart the app after I've changed anything in the app config, and if I do not restart the app, the connection to it fails (no connection, because the app knows nothing about the new config with the new IP address I'm trying to access).
My current playbook:
tasks:
- name: Configure app
template: src=app.conf.j2 dest=/etc/app.conf
notify: restart app
- name: Change data in app
configure_app: host={{new_ip}} data={{data}}
handlers:
- name: restart app
service: name=app state=restarted
I need to force the handler to run if configure_app changed before executing 'Change data in app'.
| If you want to force the handler to run in between the two tasks instead of at the end of the play, you need to put this between the two tasks:
- meta: flush_handlers
Example taken from the Ansible documentation:
tasks:
- shell: some tasks go here
- meta: flush_handlers
- shell: some other tasks
Note that this will cause all pending handlers to run at that point, not just that specific one.
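Applied to the playbook in the question:
tasks:
  - name: Configure app
    template: src=app.conf.j2 dest=/etc/app.conf
    notify: restart app

  - meta: flush_handlers

  - name: Change data in app
    configure_app: host={{new_ip}} data={{data}}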
| Ansible | 34,018,862 | 78 |
Recently I created a new role called spd in my existing project. While other playbooks work fine in this setup, this newly created one fails. Please point me to what is going wrong here.
ansible/roles
spd
tasks
templates
defaults
deploy-spd.yml
- hosts:
roles:
- spd
inventory file
[kube-master]
kubernetes-master-1 ansible_host=10.20.0.225 ansible_user=centos ansible_become=true
kubernetes-master-2 ansible_host=10.20.0.226 ansible_user=centos ansible_become=true
kubernetes-master-3 ansible_host=10.20.0.227 ansible_user=centos ansible_become=true
Failure
bash-4.3# ansible-playbook -i inventory/inventory deploy-test-ms.yml --ask-vault-pass
Vault password:
PLAY [kube-master] *************************************************************
TASK [setup] *******************************************************************
Thursday 16 March 2017 13:32:05 +0000 (0:00:00.026) 0:00:00.026 ********
fatal: [kubernetes-master-1]: FAILED! => {"failed": true, "msg": "to use the 'ssh' connection type with passwords, you must install the sshpass program"}
fatal: [kubernetes-master-2]: FAILED! => {"failed": true, "msg": "to use the 'ssh' connection type with passwords, you must install the sshpass program"}
fatal: [kubernetes-master-3]: FAILED! => {"failed": true, "msg": "to use the 'ssh' connection type with passwords, you must install the sshpass program"}
PLAY RECAP *********************************************************************
kubernetes-master-1 : ok=0 changed=0 unreachable=0 failed=1
kubernetes-master-2 : ok=0 changed=0 unreachable=0 failed=1
kubernetes-master-3 : ok=0 changed=0 unreachable=0 failed=1
UPDATE:
**With failed script**
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<10.20.0.227> ESTABLISH SSH CONNECTION FOR USER: centos
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<172.23.169.137> ESTABLISH SSH CONNECTION FOR USER: centos
<10.20.0.225> ESTABLISH SSH CONNECTION FOR USER: centos
<10.20.0.226> ESTABLISH SSH CONNECTION FOR USER: centos
**With successful script**
Thursday 16 March 2017 14:03:49 +0000 (0:00:00.066) 0:00:00.066 ********
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<10.20.0.237> ESTABLISH SSH CONNECTION FOR USER: centos
<10.20.0.237> SSH: EXEC ssh -F ./ssh.cfg -o ControlMaster=auto -o ControlPersist=30m -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=centos -o ConnectTimeout=30 -o 'ControlPath=~/.ssh/ansible-%r@%h:%p' 10.20.0.237 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1489673029.48-15997231643297
4 `" && echo ansible-tmp-1489673029.48-159972316432974="` echo $HOME/.ansible/tmp/ansible-tmp-1489673029.48-159972316432974 `" ) && sleep 0'"'"''
<10.20.0.237> PUT /tmp/tmpnHJPbc TO /home/centos/.ansible/tmp/ansible-tmp-1489673029.48-159972316432974/setup.py
<10.20.0.237> SSH: EXEC scp -F ./ssh.cfg -o ControlMaster=auto -o ControlPersist=30m -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=centos -o ConnectTimeout=30 -o 'ControlPath=~/.ssh/ansible-%r@%h:%p' /tmp/tmpnHJPbc '[10.20.0.237]:/home/centos/.ansible/tmp/ansible-tmp-1489673029.48-159972316432974/setup.py'
<10.20.0.237> ESTABLISH SSH CONNECTION FOR USER: centos
<10.20.0.237> SSH: EXEC ssh -F ./ssh.cfg -o ControlMaster=auto -o ControlPersist=30m -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=centos -o ConnectTimeout=30 -o 'ControlPath=~/.ssh/ansible-%r@%h:%p' 10.20.0.237 '/bin/sh -c '"'"'chmod u+x /home/centos/.ansible/tmp/ansible-tmp-1489673029.48-159972316432974/ /home/cento
s/.ansible/tmp/ansible-tmp-1489673029.48-159972316432974/setup.py && sleep 0'"'"''
| It is the host machine which needs the sshpass program installed. Again, this error message of:
ERROR! to use the 'ssh' connection type with passwords, you must install the sshpass program
Applies to the HOST (provisioner) not the GUEST (machine(s) being provisioned). Thus install sshpass on the provisioner.
Install sshpass on Ubuntu 16.04 or similar
apt-get install sshpass
Install sshpass on Mac OS
brew install hudochenkov/sshpass/sshpass
Source: How to install sshpass on Mac?
| Ansible | 42,835,626 | 78 |
In my Ansible play I am restarting a database and then trying to do some operations on it. The restart command returns as soon as the restart is started, not when the db is up. The next command tries to connect to the database. That command may fail when the db is not up.
I want to retry my second command a few times. If the last retry fails, I want to fail my play.
When I do retries as follows
retries: 3
delay: 5
Then retries are not executed at all, because the first command execution fails the whole play. I could add ignore_errors: yes, but that way the play will pass even if all retries failed. Is there an easy way to retry failures until I have success, but fail when the last retry has no success?
| I don't understand your claim that the "first command execution fails whole play". It wouldn't make sense if Ansible behaved this way.
The following task:
- command: /usr/bin/false
retries: 3
delay: 3
register: result
until: result.rc == 0
produces:
TASK [command] ******************************************************************************************
FAILED - RETRYING: command (3 retries left).
FAILED - RETRYING: command (2 retries left).
FAILED - RETRYING: command (1 retries left).
fatal: [localhost]: FAILED! => {"attempts": 3, "changed": true, "cmd": ["/usr/bin/false"], "delta": "0:00:00.003883", "end": "2017-05-23 21:39:51.669623", "failed": true, "rc": 1, "start": "2017-05-23 21:39:51.665740", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
which seems to be exactly what you want.
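For the database case in the question, the wait_for module is often a cleaner fit than retrying a shell command (a sketch; port 5432 is an assumption, adjust to your database):
- name: Wait for the database to accept connections
  wait_for:
    port: 5432    # assumed port
    timeout: 60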
| Ansible | 44,134,642 | 77 |
I am trying to get started with Ansible to provision my Vagrantbox, but I can’t figure out how to deal with host files.
According to the documentation they should be stored in /etc/ansible/hosts, but I can’t find this on my system (Mac OS X).
I have also seen examples where the host.ini file is situated in the document root adjacent to the Vagrantfile.
| While Ansible will try /etc/ansible/hosts by default, there are several ways to tell ansible where to look for an alternate inventory file :
use the -i command line switch and pass your inventory file path
add inventory = path_to_hostfile in the [defaults] section of your ~/.ansible.cfg configuration file
use export ANSIBLE_HOSTS=path_to_hostfile as suggested by DomaNitro in his answer
Now you don't mention if you want to use the ansible provisioner available in vagrant, or if you want to provision your vagrant host manually.
Let's go for the Vagrant ansible provisioner first:
Create a directory (e.g. test), and create a Vagrant file inside :
Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "precise64-v1.2"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.define :webapp do |webapp|
webapp.vm.hostname = "webapp.local"
webapp.vm.network :private_network, ip: "192.168.123.2"
webapp.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--memory", 200, "--name", "vagrant-docs", "--natdnshostresolver1", "on"]
end
end
#
  # Provisioning
#
config.vm.provision :ansible do |ansible|
ansible.playbook = "provision.yml"
ansible.inventory_path = "hosts"
ansible.sudo = true
#
# Use anible.tags if you want to restrict what `vagrant provision`
# Here is a list of possible tags
# ansible.tags = "foo bar"
#
# Use ansible.verbose to see detailled output for ansible runs
# ansible.verbose = 'vvv'
#
# Customize your stuff here
ansible.extra_vars = {
some_var: 42,
foo: "bar",
}
end
end
Now when you run vagrant up (or vagrant provision), Vagrant's ansible provisioner will look for a file named hosts in the same directory as the Vagrantfile, and will try to apply the provision.yml playbook.
You can also run it manually, without resorting to Vagrant's ansible provisioner:
ansible-playbook -i hosts provision.yml --ask-pass --sudo
Note that Vagrant+Virtualbox+Ansible trio does not always get along well. There are some versions combinations that are problematic. Try to upgrade to the latests versions if you experience issues (especially regarding network).
{shameless_plug} You can find an more extensive example mixing vagrant and ansible here {/shameless_plug}
Good luck !
| Ansible | 21,958,727 | 76 |
I would like to use a system fact for a host, multiplied by a number/percentage, as a base for a variable.
vars:
ramsize: '"{{ ansible_memtotal_mb }}" * .80'
| You're really close! I use calculations to set some default java memory sizes, which is similar to what you are doing. Here's an example:
{{ (ansible_memtotal_mb*0.8-700)|int|abs }}
That shows a couple of things- first, it's using jinja math, so do the calculations inside the {{ jinja }}. Second, int and abs do what you'd expect- ensure the result is an unsigned integer.
In your case, the correct code would be:
vars:
ramsize: "{{ ansible_memtotal_mb * 0.8 }}"
| Ansible | 33,505,521 | 76 |
Recently, in our company, we decided to use Ansible for deployment and continuous integration. But when I started using Ansible I didn't find modules for building Java projects with Maven, or modules for running JUnit tests, or JMeter tests.
So, I'm in a doubtful state: it may be I'm using Ansible in a wrong way.
When I looked at Jenkins, it can do things like build, run tests, deploy. The missing thing in Hudson is creating/deleting an instance in cloud environments like AWS.
So, in general, for what purposes do we need to use Ansible/Jenkins? For CI do I need to use a combination of Ansible and Jenkins?
Please throw some light on correct usage of Ansible.
| First, Jenkins and Hudson are basically the same project. I'll refer to it as Jenkins below. See How to choose between Hudson and Jenkins?, Hudson vs Jenkins in 2012, and What is the most notable difference between Jenkins and Hudson from a user perpective? for more.
Second, Ansible isn't meant to be a continuous integration engine. It (generally) doesn't poll git repos and run builds that fail in a sane way.
When can I simply use Jenkins?
If your machine environment and deployment process is very straightforward (such as Heroku or iron that is configured outside of your team), Jenkins may be enough. You can write a custom script that does a deploy as the final build step (or a chained step).
When can I simply use Ansible?
If you only need to "deploy" without needing to build/test, Ansible might be enough. For instance, you can run a deploy from the commandline or using Ansible Tower. This is great for small projects, static sites, etc.
How do they work together?
A good combination is to use Jenkins to build, test, and save artifacts. Add a step to call Ansible or Ansible Tower to handle the actual deployment process. That allows Ansible to handle machine configuration and lets Jenkins handle the CI process.
What are the alternatives to Jenkins?
I strongly recommend Thoughtworks Go (not to be confused with Go the language) instead of Jenkins. Others include CruiseControl, TravisCI, and Integrity.
| Ansible | 25,842,718 | 75 |
Here is my problem: I need to use one variable 'target_host' and then append '_host' to its value to get another variable name whose value I need.
If you look at my playbook. Task nbr 1,2,3 fetch the value of variable however nbr 4 is not able to do what I expect. Is there any other way to achieve the same in ansible?
---
- name: "Play to for dynamic groups"
hosts: local
vars:
- target_host: smtp
- smtp_host: smtp.max.com
tasks:
- name: testing
debug: msg={{ target_host }}
- name: testing
debug: msg={{ smtp_host }}
- name: testing
debug: msg={{ target_host }}_host
- name: testing
debug: msg={{ {{ target_host }}_host }}
Output:
TASK: [testing] ***************************************************************
ok: [127.0.0.1] => {
"msg": "smtp"
}
TASK: [testing] ***************************************************************
ok: [127.0.0.1] => {
"msg": "smtp.max.com"
}
TASK: [testing] ***************************************************************
ok: [127.0.0.1] => {
"msg": "smtp_host"
}
TASK: [testing] ***************************************************************
ok: [127.0.0.1] => {
"msg": "{{{{target_host}}_host}}"
}
| If you have a variable like
vars:
myvar: xxx
xxx_var: anothervalue
the working Ansible syntax:
- debug: msg={{ vars[myvar + '_var'] }}
will give you the analogue of:
- debug: msg={{ xxx_var }}
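On Ansible 2.5 and later, the vars lookup is an equivalent option:
- debug:
    msg: "{{ lookup('vars', myvar + '_var') }}"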
| Ansible | 29,276,198 | 74 |
In my playbook, I need to create a symbolic link for a repo.
With command (shell) it may work like this:
########## Create symbolic link
- name: Create symbolic link
shell : ln -s "{{SOURCE_FOLDER}}" SYMLINK
args :
chdir : "/opt/application/i99/"
when:
- ansible_host in groups['ihm']
-> like this my symbolic link is created directly inside i99 repo /
SYMLINK -> SOURCE_FOLDER
But while doing it with the Ansible file module, like this:
########## Create symbolic link
- name: Create symbolic link
file:
src: "/opt/application/i99/{{SOURCE_FOLDER}}/"
dest: "/opt/application/i99/SYMLINK"
state: link
when:
- ansible_host in groups['ihm']
My output is this :
SYMLINK -> /opt/application/i99/SOURCE_FOLDER
As I don't want that it points to the whole path, and I need to obtain the first format:
SYMLINK -> SOURCE_FOLDER
How can I do it?
| Simply:
- name: Create symbolic link
file:
src: "{{SOURCE_FOLDER}}"
dest: "/opt/application/i99/SYMLINK"
state: link
As you can see in the manual for the file module:
src Will accept absolute, relative and nonexisting paths. Relative paths are not expanded.
| Ansible | 48,560,311 | 74 |
I'm trying to execute ansible2 commands...
When I do:
ansible-playbook -vvv -i my/inventory my/playbook.yml
I get:
Unexpected Exception: name 'basestring' is not defined
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 85, in <module>
sys.exit(cli.run())
File "/usr/local/lib/python3.4/site-packages/ansible/cli/playbook.py", line 150, in run
results = pbex.run()
File "/usr/local/lib/python3.4/site-packages/ansible/executor/playbook_executor.py", line 87, in run
self._tqm.load_callbacks()
File "/usr/local/lib/python3.4/site-packages/ansible/executor/task_queue_manager.py", line 149, in load_callbacks
elif isinstance(self._stdout_callback, basestring):
NameError: name 'basestring' is not defined
Here is ansible --version:
ansible 2.0.0.2
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
And here is python --version
Python 3.4.3
| Ansible below version 2.5 requires Python 2.6 or 2.7 on the control host: Control Node Requirements
basestring is no longer available in Python 3. From What’s New In Python 3.0:
The builtin basestring abstract type was removed. Use str instead. The str and bytes types don’t have functionality enough in common to warrant a shared base class. The 2to3 tool (see below) replaces every occurrence of basestring with str.
So the solution is to either upgrade Ansible or downgrade Python.
| Ansible | 34,803,467 | 73 |
I'm writing an Ansible playbook and have a task which will always fail in check mode:
hosts: ...
tasks:
- set_fact: filename="{{ansible_date_time.iso8601}}"
- file: state=touch name={{filename}}
- file: state=link src={{filename}} dest=latest
In check mode, the file will not be created so the link task will always fail. Is there a way to mark such a task to be skipped when running in check mode? Something like:
- file: state=link src={{filename}} dest=latest
when: not check_mode
| Ansible 2.1 supports ansible_check_mode magic variable which is set to True in check mode (official docs). This means you will be able to do this:
- file:
state: link
src: '{{ filename }}'
dest: latest
when: not ansible_check_mode
or
- file:
state: link
src: '{{ filename }}'
dest: latest
ignore_errors: '{{ ansible_check_mode }}'
whichever you like more.
| Ansible | 28,729,567 | 72 |
I'm currently using Ansible 1.7.2. I have the following test playbook:
---
- hosts: localhost
tasks:
- name: set fact 1
set_fact: foo="[ 'zero' ]"
- name: set fact 2
set_fact: foo="{{ foo }} + [ 'one' ]"
- name: set fact 3
set_fact: foo="{{ foo }} + [ 'two', 'three' ]"
- name: set fact 4
set_fact: foo="{{ foo }} + [ '{{ item }}' ]"
with_items:
- four
- five
- six
- debug: var=foo
The first task sets a fact that's a list with one item in it. The subsequent tasks append to that list with more values. The first three tasks work as expected, but the last one doesn't. Here's the output when I run this:
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [set fact 1] ************************************************************
ok: [localhost]
TASK: [set fact 2] ************************************************************
ok: [localhost]
TASK: [set fact 3] ************************************************************
ok: [localhost]
TASK: [set fact 4] ************************************************************
ok: [localhost] => (item=four)
ok: [localhost] => (item=five)
ok: [localhost] => (item=six)
TASK: [debug var=foo] *********************************************************
ok: [localhost] => {
"foo": [
"zero",
"one",
"two",
"three",
"six"
]
}
PLAY RECAP ********************************************************************
localhost : ok=6 changed=0 unreachable=0 failed=0
Given the with_items in task 4 and the fact that the output shows the task properly iterated over the items in that list, I would have expected the result to contain all the numbers zero through six. But that last task seems to only be evaluating set_fact with the last item in the list. Is this possibly a bug in Ansible?
Edit: I also just tested this on ansible 1.8 and the output was identical.
There is a workaround which may help. You may "register" results for each set_fact iteration and then map those results to a list:
---
- hosts: localhost
tasks:
- name: set fact
set_fact: foo_item="{{ item }}"
with_items:
- four
- five
- six
register: foo_result
- name: make a list
set_fact: foo="{{ foo_result.results | map(attribute='ansible_facts.foo_item') | list }}"
- debug: var=foo
Output:
< TASK: debug var=foo >
---------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
ok: [localhost] => {
"var": {
"foo": [
"four",
"five",
"six"
]
}
}
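On modern Ansible (2.x) the direct loop-append pattern from the question also works, since facts are updated between iterations (hedged: the behavior discussed above is from the 1.x era the question targets):
- set_fact:
    foo: "{{ foo | default([]) + [item] }}"
  loop:
    - four
    - five
    - six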
| Ansible | 29,399,581 | 72 |
Ansible shows an error:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
What is wrong?
The exact transcript is:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in 'playbook.yml': line 10, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- name: My task name
^ here
| Reason #1
You are using an older version of Ansible which did not have the module you try to run.
How to check it?
Open the list of modules module documentation and find the documentation page for your module.
Read the header at the top of the page - it usually shows the Ansible version in which the module was introduced. For example:
New in version 2.2.
Ensure you are running the specified version of Ansible or later. Run:
ansible-playbook --version
And check the output. It should show something like:
ansible-playbook 2.4.1.0
Reason #2
You tried to write a role and put a playbook in my_role/tasks/main.yml.
The tasks/main.yml file should contain only a list of tasks. If you specified:
---
- name: Configure servers
hosts: my_hosts
tasks:
- name: My first task
my_module:
parameter1: value1
Ansible tries to find an action module named hosts and an action module named tasks. It doesn't, so it throws an error.
Solution: specify only a list of tasks in the tasks/main.yml file:
---
- name: My first task
my_module:
parameter1: value1
Reason #3
The action module name is misspelled.
This is pretty obvious, but overlooked. If you use an incorrect module name, for example users instead of user, Ansible will report "no action detected in task".
Ansible was designed as a highly extensible system. It does not have a limited set of modules which you can run and it cannot check "in advance" the spelling of each action module.
In fact you can write and then specify your own module named qLQn1BHxzirz and Ansible has to respect that. As it is an interpreted language, it "discovers" the error only when trying to execute the task.
Reason #4
You are trying to execute a module not distributed with Ansible.
The action module name is correct, but it is not a standard module distributed with Ansible.
If you are using a module provided by a third party - a vendor of software/hardware or another module shared publicly, you must first download the module and place it in appropriate directory.
You can place it either in modules subdirectory of the playbook or in a common path.
Ansible looks ANSIBLE_LIBRARY or the --module-path command line argument.
To check what paths are valid, run:
ansible-playbook --version
and check the value of:
configured module search path =
Ansible version 2.4 and later should provide a list of paths.
Reason #5
You really don't have any action inside the task.
The task must have some action module defined. The following example is not valid:
- name: My task
become: true
| Ansible | 47,159,193 | 72 |
How can I make Ansible execute a shell script if a (rpm) package is not installed? Is it somehow possible to leverage the yum module?
| I don't think the yum module would help in this case. It currently has 3 states: absent, present, and latest. Since it sounds like you don't want to actually install or remove the package (at least at this point) then you would need to do this in two manual steps. The first task would check to see if the package exists, then the second task would invoke a command based on the output of the first command.
If you use "rpm -q" to check if a package exists then the output would look like this for a package that exists:
# rpm -q httpd
httpd-2.2.15-15.el6.centos.1.x86_64
and like this if the package doesn't exist:
# rpm -q httpdfoo
package httpdfoo is not installed
So your ansible tasks would look something like this:
- name: Check if foo.rpm is installed
  command: rpm -q foo.rpm
  register: rpm_check
  ignore_errors: true   # rpm -q exits 1 when the package is missing, which would otherwise stop the play
- name: Execute script if foo.rpm is not installed
command: somescript
when: rpm_check.stdout.find('is not installed') != -1
The rpm command will also exit with a 0 if the package exists, or a 1 if the package isn't found, so another possibility is to use:
when: rpm_check.rc == 1
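On Ansible 2.5+, the package_facts module avoids parsing rpm output by hand (a sketch):
- name: Gather installed package facts
  package_facts:
    manager: rpm

- name: Execute script if foo is not installed
  command: somescript
  when: "'foo' not in ansible_facts.packages"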
| Ansible | 21,892,603 | 71 |
Let's imagine an inventory file like this:
node-01 ansible_ssh_host=192.168.100.101
node-02 ansible_ssh_host=192.168.100.102
node-03 ansible_ssh_host=192.168.100.103
node-04 ansible_ssh_host=192.168.100.104
node-05 ansible_ssh_host=192.168.100.105
[mainnodes]
node-[01:04]
In my playbook I now want to create some variables containing the IP addresses of the group mainnodes:
vars:
main_nodes_ips: "192.168.100.101,192.168.100.102,192.168.100.103,192.168.100.104"
main_nodes_ips_with_port: "192.168.100.101:3000,192.168.100.102:3000,192.168.100.103:3000,192.168.100.104:3000"
This is what I got so far:
vars:
main_nodes_ips: "{{groups['mainnodes']|join(',')}}"
main_nodes_ips_with_port: "{{groups['mainnodes']|join(':3000,')}}"
but that would use the host names instead of the IP addresses.
Any ideas how this could be done?
Update:
looking at the docs for a while, I think this would allow me to loop through all the ip adresses:
{% for host in groups['mainnodes'] %}
{{hostvars[host]['ansible_ssh_host']}}
{% endfor %}
But I just can't figure out how to create an array that holds all these IPs. So that I can use the |join() command on them.
Update2:
I just thought I had figured it out... but it turns out that you cannot use the {% %} syntax in the playbook... or can I?
Well in the vars section it didn't. :/
vars:
{% set main_nodes_ip_arr=[] %}
{% for host in groups['mesos-slave'] %}
{% if main_nodes_ip_arr.insert(loop.index,hostvars[host]['ansible_ssh_host']) %} {% endif %}
{% endfor %}
main_nodes_ips: "{{main_nodes_ip_arr|join(',')}}"
main_nodes_ips_with_port: "{{main_nodes_ip_arr|join(':3000,')}}"
| I find the magic map extract here.
main_nodes_ips: "{{ groups['mainnodes'] | map('extract', hostvars, ['ansible_host']) | join(',') }}"
main_nodes_ips_with_port: "{{ groups['mainnodes'] | map('extract', hostvars, ['ansible_host']) | join(':3000,') }}:3000"
An alternative (idea comes from here):
main_nodes_ips: "{{ groups['mainnodes'] | map('extract', hostvars, ['ansible_eth0', 'ipv4', 'address']) | join(',') }}"
(Suppose the interface is eth0)
| Ansible | 36,328,907 | 70 |
Is there a way to check playbook syntax and variables?
I'm trying to dry-run (--check), but for some reason it works really slowly. It looks like it tries to perform actions instead of just checking the syntax.
I want to catch errors like this ahead of time:
..."msg": "AnsibleUndefinedVariable: ERROR! 'application_name' is undefined"}
| This is expected behaviour according to the documentation:
When ansible-playbook is executed with --check it will not make any
changes on remote systems. Instead, any module instrumented to support
‘check mode’ (which contains most of the primary core modules, but it
is not required that all modules do this) will report what changes
they would have made rather than making them. Other modules that do
not support check mode will also take no action, but just will not
report what changes they might have made.
Old link (does not work anymore): http://docs.ansible.com/ansible/playbooks_checkmode.html
New link: https://docs.ansible.com/ansible/latest/user_guide/playbooks_checkmode.html#using-check-mode
If you would like to check the YAML syntax you can use syntax-check.
ansible-playbook rds_prod.yml --syntax-check

playbook: rds_prod.yml
Note that --syntax-check only validates the playbook's structure; undefined variables such as application_name are still only detected at runtime, when the expression is actually rendered.
| Ansible | 35,339,512 | 69 |
I'm currently building a role for installing PHP using ansible, and I'm having some difficulty merging dictionaries. I've tried several ways to do so, but I can't get it to work like I want it to:
# A vars file:
my_default_values:
key = value
my_values:
my_key = my_value
# In a playbook, I create a task to attempt merging the
# two dictionaries (which doesn't work):
- debug: msg="{{ item.key }} = {{ item.value }}"
with_dict: my_default_values + my_values
# I have also tried:
- debug: msg="{{ item.key }} = {{ item.value }}"
with_dict: my_default_values|union(my_values)
# I have /some/ success with using j2's update,
# but you can't use j2 syntax in "with_dict", it appears.
# This works:
- debug: msg="{{ my_default_values.update(my_values) }}"
# But this doesn't:
- debug: msg="{{ item.key }} = {{ item.value }}"
with_dict: my_default_values.update(my_values)
Is there a way to merge two dictionaries, so I can use it with "with_dict"?
| In Ansible 2.0, there is a Jinja filter, combine, for this:
- debug: msg="{{ item.key }} = {{ item.value }}"
with_dict: "{{ my_default_values | combine(my_values) }}"
| Ansible | 25,422,771 | 68 |
How would I save a registered Variable to a file? I took this from the tutorial:
- hosts: web_servers
tasks:
- shell: /usr/bin/foo
register: foo_result
ignore_errors: True
- shell: /usr/bin/bar
when: foo_result.rc == 5
How would I save foo_result variable to a file e.g. foo_result.log using ansible?
| Thanks to tmoschou for adding this comment to an outdated accepted answer:
As of Ansible 2.10, The documentation for ansible.builtin.copy says:
If you need variable interpolation in copied files, use the
ansible.builtin.template module. Using a variable in the content field will
result in unpredictable output.
For more details see this and an explanation
Original answer:
You can use the copy module, with the parameter content=.
I gave the exact same answer here: Write variable to a file in Ansible
In your case, it looks like you want this variable written to a local logfile, so you could combine it with the local_action notation:
- local_action: copy content={{ foo_result }} dest=/path/to/destination/file
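On newer Ansible, a sketch of the template route the docs recommend (the template file name is an assumption; it would contain the line {{ foo_result.stdout }}):
- name: Write the registered result to a local log file
  template:
    src: foo_result.log.j2          # hypothetical template rendering foo_result
    dest: /path/to/destination/foo_result.log
  delegate_to: localhost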
| Ansible | 26,732,241 | 68 |
I have a task that creates a group.
- name: add user to docker group
user: name=USERNAME groups=docker append=yes
sudo: true
In another playbook I need to run a command that relies on having the new group permission. Unfortunately this does not work because the new group is only loaded after I logout and login again.
I have tried some stuff like:
su -l USERNAME
or
newgrp docker; newgrp
But nothing worked. Is there any change to force Ansible to reconnect to the host and does a relogin? A reboot would be the last option.
| You can use an (ansible.builtin.)meta: reset_connection task:
- name: Add user to docker group
ansible.builtin.user:
name: USERNAME
groups: docker
append: true
- name: Reset ssh connection to allow user changes to affect ansible user
ansible.builtin.meta:
reset_connection
Note that you can not use a variable to only run the task when the ansible.builtin.user task did a change as “reset_connection task does not support when conditional”, see #27565.
The reset_connection meta task was added in Ansible 2.3, but remained a bit buggy until excluding v2.5.8, see #27520.
While the reset_connection task itself does not support when conditions, you still can use conditional task inclusion like this:
- name: Add user to docker group
ansible.builtin.user:
name: USERNAME
groups: docker
append: true
register: add_to_docker_group_result
- name: Include reset connection tasks
ansible.builtin.include_tasks: reset_connection.yaml
when: add_to_docker_group_result.changed == true
And the reset_connection.yaml file to include:
- name: Reset ssh connection
ansible.builtin.meta: reset_connection
| Ansible | 26,677,064 | 66 |
How should one go about defining a pre-task for role dependencies?
I currently have an apache role that has a user variable so in my own role in <role>/meta/main.yml I do something like:
---
dependencies:
- { role: apache, user: proxy }
The problem at this point is that I still don't have the user I specified, and when the role tries to start the Apache server under a non-existent user, I get an error.
I tried creating a task in <role>/tasks/main.yml like:
---
- user: name=proxy
But the user gets created only after running the apache task in dependencies (which is to be expected). So, is there a way to create a task that would create a user before running roles in dependencies?
| I use pre_tasks to run some tasks before the roles, thanks to Kashyap.
#!/usr/bin/env ansible-playbook
---
- hosts: all
become: true
pre_tasks:
- name: start tasks and send notification to HipChat
hipchat:
color: purple
token: "{{ hipchat_token }}"
room: "{{ hipchat_room }}"
msg: "[Start] Run 'foo/setup.yml' playbook on {{ ansible_nodename }}."
roles:
- chusiang.vim-and-vi-mode
vars:
...
tasks:
- name: include main task
include: tasks/main.yml
post_tasks:
- name: finish tasks and send notification to HipChat
hipchat:
color: green
token: "{{ hipchat_token }}"
room: "{{ hipchat_room }}"
msg: "[Finish] Run 'foo/setup.yml' playbook on {{ ansible_nodename }}."
# vim:ft=ansible :
| Ansible | 29,258,759 | 66 |
I have an Ansible playbook for deploying a Java app as an init.d daemon.
Being a beginner in both Ansible and Linux, I'm having trouble conditionally executing tasks on a host based on the host's status.
Namely, some hosts already have the service present and running, and there I want to stop it before doing anything else. Then there might be new hosts, which don't have the service yet. So I can't simply use service: name={{service_name}} state=stopped, because this will fail on new hosts.
How can I achieve this? Here's what I have so far:
- name: Check if Service Exists
shell: "if chkconfig --list | grep -q my_service; then echo true; else echo false; fi;"
register: service_exists
# This should only execute on hosts where the service is present
- name: Stop Service
service: name={{service_name}} state=stopped
when: service_exists
register: service_stopped
# This too
- name: Remove Old App Folder
command: rm -rf {{app_target_folder}}
when: service_exists
# This should be executed on all hosts, but only after the service has stopped, if it was present
- name: Unpack App Archive
unarchive: src=../target/{{app_tar_name}} dest=/opt
| See the service_facts module, new in Ansible 2.5.
- name: Populate service facts
service_facts:
- debug:
msg: Docker installed!
when: "'docker' in services"
| Ansible | 30,328,506 | 66 |
I generate files with Ansible on a remote host, and after this generation I would like to read these files in another task.
I can't find any module to read a remote file with Ansible (lookup seems to work only on the local host).
Do you know of a module like this?
Thanks
EDIT:
Here is my use case:
I generate SSH keys and add them to GitHub. These keys are defined by an object in the var files, so I loop like this to generate them:
tasks:
- name: Create ssh key
user:
name: "{{sshConfigFile.user}}"
generate_ssh_key: yes
ssh_key_file: ".ssh/{{item.value.file}}"
state: present
with_dict: "{{sshConfiguration}}"
It works fine, but how do I read these keys to send them to GitHub via the API?
| Either run with the --diff flag (outputs a diff when the destination file changes) ..
ansible-playbook --diff server.yaml
or slurp it up ..
- name: Slurp hosts file
slurp:
src: /etc/hosts
register: slurpfile
- debug: msg="{{ slurpfile['content'] | b64decode }}"
| Ansible | 34,722,761 | 66 |
I have been developing an Ansible playbook for a couple of weeks, so my experience with such technology is relatively short. My strategy includes a custom ansible_ssh_user for provisioning hosts throughout the inventory; however, such a user will need its own SSH key pair, which involves some sort of plan for holding/storing its corresponding private key. In a production environment, this playbook would be cloned/pulled and run inside a certain playbook node whose role is to provision the rest of the infrastructure.
At first, I was thinking of just putting that private key inside the playbook's git repository, but I am having second thoughts about it, mostly for obvious security reasons, hence the reason I need to consult you about this matter.
With this set on the table, here are the follow-up questions:
In an Ansible-based development environment, is it sane/reasonable to hold a private SSH key in source control?
Would this practice be advisable only for development environments, whereas another local git branch inside the playbook node would then be used to hold the actual production SSH private key?
Would it be better to address this scenario via Ansible Vault instead? I have never used it before, so I cannot yet tell whether this would be a proper use case for it.
In your experience, what would be your approach to this in a production environment? What would be considered best practice in this particular scenario?
| It's a bad idea to store any kind of plaintext secret in revision control, SSH private keys included. Instead, use ansible-vault to store the private key.
ansible-vault can operate on any file type. Just encrypt the file with
ansible-vault encrypt /path/to/local/private_key
then install the key:
- name: Install a private SSH key
vars:
source_key: /path/to/local/private_key
dest_key: /path/to/remote/private_key
tasks:
- name: Ensure .ssh directory exists.
file:
dest: "{{ dest_key | dirname }}"
mode: 0700
owner: user
state: directory
- name: Install ssh key
copy:
src: "{{ source_key }}"
dest: "{{ dest_key }}"
mode: 0600
owner: user
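When running the playbook, supply the vault password so the encrypted key can be decrypted; for example (the playbook name is illustrative):
ansible-playbook site.yml --ask-vault-pass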
| Ansible | 29,392,369 | 65 |
I need to create a new variable from the contents of other variables. Currently I'm using something like this:
- command: echo "{{ var1 }}-{{ var2 }}-{{ var3 }}"
register: newvar
The problem is:
Using {{ var1 }}...{{ varN }} produces overly long strings and very ugly code.
Using {{ newvar.stdout }} is a bit better but still confusing.
The set_fact module caches the fact between runs, which isn't appropriate for me.
Is there any other solution?
| Since strings are lists of characters in Python, we can concatenate strings the same way we concatenate lists (with the + sign):
{{ var1 + '-' + var2 + '-' + var3 }}
If you want to pipe the resulting string to some filter, make sure you enclose the bits in parentheses:
e.g. To concatenate our 3 vars, and get a sha512 hash:
{{ (var1 + var2 + var3) | hash('sha512') }}
Note: this works on Ansible 2.3. I haven't tested it on earlier versions.
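Since this is plain Jinja2, the expression can also be used inline wherever templating is accepted, with no intermediate variable needed — a quick check with debug:
- debug:
    msg: "{{ var1 + '-' + var2 + '-' + var3 }}"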
| Ansible | 31,186,874 | 65 |
I ran into a configuration problem when coding an Ansible playbook for SSH private key files. In static Ansible inventories, I can define combinations of host servers, IP addresses, and related SSH private keys - but I have no idea how to define those with dynamic inventories.
For example:
---
- hosts: tag_Name_server1
gather_facts: no
roles:
- role1
- hosts: tag_Name_server2
gather_facts: no
roles:
- roles2
I use the below command to call that playbook:
ansible-playbook test.yml -i ec2.py --private-key ~/.ssh/SSHKEY.pem
My questions are:
How can I define ~/.ssh/SSHKEY.pem in Ansible files rather than on the command line?
Is there a parameter in playbooks (like gather_facts) to define which private keys should be used which hosts?
If there is no way to define private keys in files, what should be called on the command line when different keys are used for different hosts in the same inventory?
| TL;DR: Specify key file in group variable file, since 'tag_Name_server1' is a group.
Note: I'm assuming you're using the EC2 external inventory script. If you're using some other dynamic inventory approach, you might need to tweak this solution.
This is an issue I've been struggling with, on and off, for months, and I've finally found a solution, thanks to Brian Coca's suggestion here. The trick is to use Ansible's group variable mechanisms to automatically pass along the correct SSH key file for the machine you're working with.
The EC2 inventory script automatically sets up various groups that you can use to refer to hosts. You're using this in your playbook: in the first play, you're telling Ansible to apply 'role1' to the entire 'tag_Name_server1' group. We want to direct Ansible to use a specific SSH key for any host in the 'tag_Name_server1' group, which is where group variable files come in.
Assuming that your playbook is located in the 'my-playbooks' directory, create files for each group under the 'group_vars' directory:
my-playbooks
|-- test.yml
+-- group_vars
|-- tag_Name_server1.yml
+-- tag_Name_server2.yml
Now, any time you refer to these groups in a playbook, Ansible will check the appropriate files, and load any variables you've defined there.
Within each group var file, we can specify the key file to use for connecting to hosts in the group:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: /path/to/ssh/key/server1.pem
Now, when you run your playbook, it should automatically pick up the right keys!
Using environment vars for portability
I often run playbooks on many different servers (local, remote build server, etc.), so I like to parameterize things. Rather than using a fixed path, I have an environment variable called SSH_KEYDIR that points to the directory where the SSH keys are stored.
In this case, my group vars files look like this, instead:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/server1.pem"
Further Improvements
There's probably a bunch of neat ways this could be improved. For one thing, you still need to manually specify which key to use for each group. Since the EC2 inventory script includes details about the keypair used for each server, there's probably a way to get the key name directly from the script itself. In that case, you could supply the directory the keys are located in (as above), and have it choose the correct keys based on the inventory data.
| Ansible | 33,795,607 | 65 |
I'm trying to learn how to use Ansible facts as variables, and I don't get it. When I run...
$ ansible localhost -m setup
...it lists all of the facts of my system. I selected one at random to try and use it, ansible_facts.ansible_date_time.date, but I can't figure out HOW to use it. When I run...
$ ansible localhost -m setup -a "filter=ansible_date_time"
localhost | success >> {
"ansible_facts": {
"ansible_date_time": {
"date": "2015-07-09",
"day": "09",
"epoch": "1436460014",
"hour": "10",
"iso8601": "2015-07-09T16:40:14Z",
"iso8601_micro": "2015-07-09T16:40:14.795637Z",
"minute": "40",
"month": "07",
"second": "14",
"time": "10:40:14",
"tz": "MDT",
"tz_offset": "-0600",
"weekday": "Thursday",
"year": "2015"
}
},
"changed": false
}
So, it's CLEARLY there. But when I run...
$ ansible localhost -a "echo {{ ansible_facts.ansible_date_time.date }}"
localhost | FAILED => One or more undefined variables: 'ansible_facts' is undefined
$ ansible localhost -a "echo {{ ansible_date_time.date }}"
localhost | FAILED => One or more undefined variables: 'ansible_date_time' is undefined
$ ansible localhost -a "echo {{ date }}"
localhost | FAILED => One or more undefined variables: 'date' is undefined
What am I not getting here? How do I use Facts as variables?
| The command ansible localhost -m setup basically says "run the setup module against localhost", and the setup module gathers the facts that you see in the output.
When you run the echo command these facts don't exist, since the setup module wasn't run. A better method for testing things like this is to use ansible-playbook to run a playbook that looks something like this:
- hosts: localhost
tasks:
- debug: var=ansible_date_time
- debug: msg="the current date is {{ ansible_date_time.date }}"
Because this runs as a playbook, facts for localhost are gathered before the tasks are run. The output of the above playbook will be something like this:
PLAY [localhost] **************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [debug var=ansible_date_time] *******************************************
ok: [localhost] => {
"ansible_date_time": {
"date": "2015-07-09",
"day": "09",
"epoch": "1436461166",
"hour": "16",
"iso8601": "2015-07-09T16:59:26Z",
"iso8601_micro": "2015-07-09T16:59:26.896629Z",
"minute": "59",
"month": "07",
"second": "26",
"time": "16:59:26",
"tz": "UTC",
"tz_offset": "+0000",
"weekday": "Thursday",
"year": "2015"
}
}
TASK: [debug msg="the current date is {{ ansible_date_time.date }}"] **********
ok: [localhost] => {
"msg": "the current date is 2015-07-09"
}
PLAY RECAP ********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
| Ansible | 31,323,604 | 64 |
I'm trying to use the result of the Ansible find module, which returns the list of files it finds in a specific folder.
The problem is that when I iterate over the result, I do not have the file names; I only have their full paths (including the name).
Is there an easy way to use the find_result items to provide the file name in the second command below?
- name: get files
find:
paths: /home/me
file_type: "file"
register: find_result
- name: Execute docker secret create
shell: docker secret create <file_name> {{ item.path }}
run_once: true
with_items: "{{ find_result.files }}"
| Use the basename filter:
{{ item.path | basename }}
There are also dirname, realpath, relpath filters.
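Applied to the task in the question, that might look like this (sketch):
- name: Execute docker secret create
  shell: docker secret create {{ item.path | basename }} {{ item.path }}
  run_once: true
  with_items: "{{ find_result.files }}"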
| Ansible | 45,564,899 | 64 |
I need something like (ansible inventory file):
[example]
127.0.0.1 timezone="Europe/Amsterdam" locales="en_US","nl_NL"
However, ansible does not recognize 'locales' as a list.
| You can pass a list or object like this:
[example]
127.0.0.1 timezone="Europe/Amsterdam" locales='["en_US", "nl_NL"]'
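The quoted value is parsed into a real list, so you can loop over it like any other variable — a quick check (sketch):
- debug:
    msg: "configuring locale {{ item }}"
  with_items: "{{ locales }}"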
| Ansible | 18,572,092 | 63 |
The playbook looks like:
- hosts: all
tasks:
- name: "run on all hosts,1"
shell: something1
- name: "run on all hosts,2"
shell: something2
- name: "run on one host, any host would do"
shell: this_command_should_run_on_one_host
- name: "run on all hosts,3"
shell: something3
I know that with the command-line option --limit, I can limit the run to one host.
Is it possible to do it in the playbook?
| For any host (with defaults it will match the first on the list):
- name: "run on first found host"
shell: this_command_should_run_on_one_host
run_once: true
For a specific host:
- name: "run on that_one_host host"
shell: this_command_should_run_on_one_host
when: ansible_hostname == 'that_one_host'
Or inventory_hostname (hostname as defined in the Ansible inventory) instead of ansible_hostname (hostname as defined on the target machine), depending on which name you want to use.
| Ansible | 47,342,724 | 63 |
I received the following data from the setup module:
"ansible_nodename": "3d734bc2a391",
"ansible_os_family": "RedHat",
"ansible_pkg_mgr": "yum",
"ansible_processor": [
"AuthenticAMD",
"AMD PRO A10-8700B R6, 10 Compute Cores 4C+6G"
],
"ansible_processor_cores": 1,
"ansible_processor_count": 1,
"ansible_processor_threads_per_core": 1,
I want to retrieve the 1st value of ansible_processor and use it in a Jinja2 template.
If I use {{ ansible_processor }}, it's giving me both values:
"AuthenticAMD",
"AMD PRO A10-8700B R6, 10 Compute Cores 4C+6G"
But I want only the first one.
| To get the first item of the list:
- debug:
msg: "First item: {{ ansible_processor[0] }}"
Or:
- debug:
msg: "First item: {{ ansible_processor | first }}"
| Ansible | 41,610,207 | 62 |
I want to use Ansible as part of another Python program. In that program I have a list of hosts with their user/password.
Is there a way to pass the user/password of the SSH connection to the Ansible ad-hoc command, or to write it in a file in an encrypted way?
Or do I understand it all wrong, and the only way to do it is with SSH certificates?
| The docs say you can specify the password via the command line:
-k, --ask-pass.
ask for connection password
Ansible can also store the password in the ansible_password variable on a per-host basis.
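For example (the inventory path and credentials are placeholders; note that storing plaintext passwords in an inventory is discouraged — prefer -k or Ansible Vault):
ansible all -i inventory.ini -m ping -u deploy -k
or, per host in the inventory:
web1 ansible_user=deploy ansible_password=secret123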
| Ansible | 37,004,686 | 61 |
I have some Ansible tasks that perform unfortunately long operations - things like running a synchronization operation with an S3 folder. It's not always clear if they're progressing or just stuck (or the SSH connection has died), so it would be nice to have some sort of progress output displayed. If the command's stdout/stderr were directly displayed, I'd see that, but Ansible captures the output.
Piping output back is a difficult problem for Ansible to solve in its current form. But are there any Ansible tricks I can use to provide some sort of indication that things are still moving?
Current ticket is https://github.com/ansible/ansible/issues/4870
| I came across this problem today on OSX, where I was running a docker shell command which took a long time to build and there was no output whilst it built. It was very frustrating to not understand whether the command had hung or was just progressing slowly.
I decided to pipe the output (and error) of the shell command to a port, which could then be listened to via netcat in a separate terminal.
myplaybook.yml
- name: run some long-running task and pipe to a port
shell: myLongRunningApp > /dev/tcp/localhost/4000 2>&1
And in a separate terminal window:
$ nc -lk 4000
Output from my
long
running
app will appear here
Note that I pipe the error output to the same port; I could as easily pipe to a different port.
Also, I ended up setting a variable called nc_port which will allow for changing the port in case that port is in use. The ansible task then looks like:
shell: myLongRunningApp > /dev/tcp/localhost/{{nc_port}} 2>&1
Note that the command myLongRunningApp is being executed on localhost (i.e. that's the host set in the inventory) which is why I listen to localhost with nc.
| Ansible | 41,194,021 | 61 |
So after reading the Ansible docs, I found out that handlers are only fired when tasks report changes, so for example:
some tasks ...
notify: nginx_restart
# our handler
- name: nginx_restart
vs
some tasks ...
register: nginx_restart
# do this after nginx_restart changes
when: nginx_restart|changed
Is there any difference between these 2 methods? When should I use each of them?
For me, register seems to have more functionality here, unless I am missing something...
| There are some differences and which is better depends on the situation.
Handlers will only be visible in the output if they have actually been executed. If they were not notified, there will be no skipped entries in Ansible's output. Tasks always have output, no matter whether they were skipped or executed with or without a change (except when they are excluded via tags/skip-tags).
Handlers can be called from any role. This comes in handy if you have more complex roles which depend on each other. Let's say you have a role to manage iptables, but which rules you define actually depends on other roles (e.g. a database role, a redis role, etc.). Each role can add its rules to a config file, and at the end you notify the iptables role to reload iptables if anything changed.
Handlers by default get executed at the end of the playbook, while tasks get executed immediately where they are defined. This way you could configure all your applications, and at the end the service restart for all changed apps will be triggered per handler. This can be dangerous though: in case your playbook fails after a handler has been notified, the handler will not actually be called. If you run the playbook again, the triggering task may no longer have a changed state, therefore not notifying the handler. This results in Ansible actually not being idempotent. Since Ansible 1.9.1 you can call Ansible with the --force-handlers option or define force_handlers = True in your ansible.cfg to fire all notified handlers even after the playbook failed. (See docs)
If you need your handlers to be fired at a specific point (for example you configured your system to use an internal DNS and now want to resolve a host through this DNS) you can flush all handlers by defining a task like:
- meta: flush_handlers
A handler would be called only once no matter how many times it was notified. Imagine you have a service that depends on multiple config files (for example bind/named: rev, zone, root.db, rndc.key, named.conf) and you want to restart named if any of these files changed. With handlers you simply would notify from every single task that managed those files. Otherwise you need to register 5 useless vars, and then check them all in your restart task.
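A minimal sketch of that bind/named scenario with a single handler (file names are illustrative):
tasks:
  - name: Deploy zone file
    template: src=db.example.j2 dest=/var/named/db.example
    notify: restart named
  - name: Deploy named.conf
    template: src=named.conf.j2 dest=/etc/named.conf
    notify: restart named
handlers:
  - name: restart named
    service: name=named state=restarted
No matter how many of those templates report a change, the restart runs once at the end.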
Personally I prefer handlers. They appear much cleaner than dealing with register. Tasks triggered via register were only safer before Ansible 1.9.1.
| Ansible | 33,931,610 | 60 |
All I could find was this from the docs:
Additionally, inventory_hostname is the name of the hostname as configured in Ansible’s inventory host file. This can be useful for when you don’t want to rely on the discovered hostname ansible_hostname or for other mysterious reasons. If you have a long FQDN, inventory_hostname_short also contains the part up to the first period, without the rest of the domain.
Is there any actual difference between inventory_hostname and ansible_hostname variables in Ansible? If so, then which one should I use and when?
|
inventory_hostname - As configured in the ansible inventory file (eg: /etc/ansible/hosts). It can be an IP address or a name that can be resolved by the DNS
ansible_hostname - As discovered by ansible. Ansible logs into the host via ssh and gathers some facts. As part of the fact, it also discovers its hostname which is stored in ansible_hostname.
Which one should you use?
hostvars is a dictionary which has an entry for each inventory host. If you want to access host information, you need to use the inventory_hostname. If you want to use/print the name of the host as configured on the host, you should use ansible_hostname since most likely the IP will be used in the inventory file.
Important: To use ansible_hostname, you need to gather facts:
gather_facts: true
Otherwise, you will get a message that ansible_hostname is not defined.
"ansible_hostname": "VARIABLE IS NOT DEFINED!"
Try this with one host to understand the differences
tasks:
- debug: var=inventory_hostname
- debug: var=ansible_hostname
- debug: var=hostvars
| Ansible | 45,908,067 | 60 |
Is it possible to run an Ansible playbook that looks like this (it is an example from this site: http://docs.ansible.com/playbooks_roles.html)
- name: this is a play at the top level of a file
hosts: all
remote_user: root
tasks:
- name: say hi
tags: foo
shell: echo "hi..."
- include: load_balancers.yml
- include: webservers.yml
- include: dbservers.yml
in multithread mode?
I want to run the three "includes" at the same time (they deploy to different hosts anyway), like in this diagram:
http://www.gliffy.com/go/publish/5267618
Is it possible?
| As of Ansible 2.0 there seems to be an option called strategy on a playbook. When setting the strategy to free, the playbook runs tasks on each host without waiting for the others. See http://docs.ansible.com/ansible/playbooks_strategies.html.
It looks something like this (taken from the above link):
- hosts: all
strategy: free
tasks:
...
Please note that I didn't check this and I'm very new to Ansible. I was just curious about doing what you described and happened to come across this strategy thing.
EDIT:
It seems like this is not exactly what you're trying to do. Maybe "async tasks" is more appropriate as described here: http://docs.ansible.com/ansible/playbooks_async.html.
This includes specifying async and poll on a task. The following is taken from the 2nd link I mentioned:
- name: simulate long running op, allow to run for 45 sec, fire and forget
command: /bin/sleep 15
async: 45
poll: 0
I guess you can specify longer async times if your task is lengthy. You can probably define your three concurrent tasks this way.
| Ansible | 21,158,689 | 59 |
I am getting this error in my nginx-error.log file:
2014/02/17 03:42:20 [crit] 5455#0: *1 connect() to unix:/tmp/uwsgi.sock failed (13: Permission denied) while connecting to upstream, client: xx.xx.x.xxx, server: localhost, request: "GET /users HTTP/1.1", upstream: "uwsgi://unix:/tmp/uwsgi.sock:", host: "EC2.amazonaws.com"
The browser also shows a 502 Bad Gateway error. The output of a curl is the same: the Bad Gateway HTML.
I've tried to fix it by changing permissions for /tmp/uwsgi.sock to 777. That didn't work. I also added myself to the www-data group (a couple questions that looked similar suggested that). Also, no dice.
Here is my nginx.conf file:
nginx.conf
worker_processes 1;
worker_rlimit_nofile 8192;
events {
worker_connections 3000;
}
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
I am running a Flask application with Nginx and uWSGI, just to be thorough in my explanation. If anyone has any ideas, I would really appreciate them.
EDIT
I have been asked to provide my uwsgi config file. So, I never personally wrote my nginx or my uwsgi file. I followed the guide here which sets everything up using ansible-playbook. The nginx.conf file was generated automatically, but there was nothing in /etc/uwsgi except a README file in both apps-enabled and apps-available folders. Do I need to create my own config file for uwsgi? I was under the impression that ansible took care of all of those things.
I believe that ansible-playbook figured out my uwsgi configuration since when I run this command
uwsgi -s /tmp/uwsgi.sock -w my_app:app
it starts up and outputs this:
*** Starting uWSGI 2.0.1 (64bit) on [Mon Feb 17 20:03:08 2014] ***
compiled with version: 4.7.3 on 10 February 2014 18:26:16
os: Linux-3.11.0-15-generic #25-Ubuntu SMP Thu Jan 30 17:22:01 UTC 2014
nodename: ip-10-9-xxx-xxx
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /home/username/Project
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 4548
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
Python version: 2.7.5+ (default, Sep 19 2013, 13:52:09) [GCC 4.8.1]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x1f60260
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72760 bytes (71 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 3 seconds on interpreter 0x1f60260 pid: 26790 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 26790, cores: 1)
| The permission issue occurs because every time uwsgi starts, it resets the ownership of /tmp/uwsgi.sock to the user running uwsgi and its permissions to 755.
The correct way to solve the problem is to make uwsgi change the ownership and/or permission of /tmp/uwsgi.sock such that nginx can write to this socket. Therefore, there are three possible solutions.
Run uwsgi as the www-data user so that this user owns the socket file created by it.
uwsgi -s /tmp/uwsgi.sock -w my_app:app --uid www-data --gid www-data
Change the ownership of the socket file so that www-data owns it.
uwsgi -s /tmp/uwsgi.sock -w my_app:app --chown-socket=www-data:www-data
Change the permissions of the socket file, so that www-data can write to it.
uwsgi -s /tmp/uwsgi.sock -w my_app:app --chmod-socket=666
The first two commands need to be run as the root user; the third does not.
The first command leaves uwsgi running as the www-data user. The second and third commands leave uwsgi running as the actual user that ran the command.
The first and second commands allow only the www-data user to write to the socket. The third command allows any user to write to the socket.
I prefer the first approach because it does not leave uwsgi running as root and does not make the socket file world-writable.
| Ansible | 21,820,444 | 59 |
I'm running Ansible playbook and it works fine on one machine.
On a new machine when I try for the first time, I get the following error.
17:04:34 PLAY [appservers] *************************************************************
17:04:34
17:04:34 GATHERING FACTS ***************************************************************
17:04:34 fatal: [server02.cit.product-ref.dev] => {'msg': "FAILED: (22, 'Invalid argument')", 'failed': True}
17:04:34 fatal: [server01.cit.product-ref.dev] => {'msg': "FAILED: (22, 'Invalid argument')", 'failed': True}
17:04:34
17:04:34 TASK: [common | remove old ansible-tmp-*] *************************************
17:04:34 FATAL: no hosts matched or all hosts have already failed -- aborting
17:04:34
17:04:34
17:04:34 PLAY RECAP ********************************************************************
17:04:34 to retry, use: --limit @/var/lib/jenkins/site.retry
17:04:34
17:04:34 server01.cit.product-ref.dev : ok=0 changed=0 unreachable=1 failed=0
17:04:34 server02.cit.product-ref.dev : ok=0 changed=0 unreachable=1 failed=0
17:04:34
17:04:34 Build step 'Execute shell' marked build as failure
17:04:34 Finished: FAILURE
This error can be resolved if I first go to the source machine (from where I'm running the Ansible playbook), manually SSH to the target machine (as the given user), and answer "yes" to create the known_hosts entry.
Now, if I run the same Ansible playbook a second time, it works without an error.
Therefore, how can I suppress the prompt SSH gives when making the known_hosts entry for the first time for a given user (the known_hosts file in the ~/.ssh folder)?
I found I can do this if I use the following config entries in the ~/.ssh/config file.
~/.ssh/config
# For vapp virtual machines
Host *
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
User kobaloki
LogLevel ERROR
i.e. if I place the above code in the user's ~/.ssh/config file of a remote machine and try an Ansible playbook for the first time, I won't be prompted to enter "yes" and the playbook will run successfully (without requiring the user to manually create a known_hosts file entry from the source machine to the target/remote machine).
My questions:
1. What security issues should I take care of if I go the ~/.ssh/config way?
2. How can I pass the settings (what's in the config file) as parameters/options to Ansible on the command line so that it will run the first time on a new machine (without prompting, and without depending on the known_hosts file entry on the source machine for the target machine)?
| The ansible docs have a section on this. Quoting:
Ansible has host key checking enabled by default.
If a host is reinstalled and has a different key in ‘known_hosts’,
this will result in an error message until corrected. If a host is not
initially in ‘known_hosts’ this will result in prompting for
confirmation of the key, which results in an interactive experience if
using Ansible, from say, cron. You might not want this.
If you understand the implications and wish to disable this behavior,
you can do so by editing /etc/ansible/ansible.cfg or ~/.ansible.cfg:
[defaults]
host_key_checking = False
Alternatively this can be set by the ANSIBLE_HOST_KEY_CHECKING
environment variable:
$ export ANSIBLE_HOST_KEY_CHECKING=False
Also note that host key checking in paramiko mode is reasonably slow, therefore switching to ‘ssh’ is also recommended when using this feature.
| Ansible | 30,226,113 | 59 |
In Ansible, I have a list of strings that I want to join with newline characters to create a string, that when written to a file, becomes a series of lines. However, when I use the join() filter, it works on the inner list, the characters in the strings, and not on the outer list, the strings themselves. Here's my sample code:
# Usage: ansible-playbook tst3.yaml --limit <GRP>
---
- hosts: all
remote_user: root
tasks:
- name: Create the list
set_fact:
my_item: "{{ item }}"
with_items:
- "One fish"
- "Two fish"
- "Red fish"
- "Blue fish"
register: my_item_result
- name: Extract items and turn into a list
set_fact:
my_list: "{{ my_item_result.results | map(attribute='ansible_facts.my_item') | list }}"
- name: Examine the list
debug:
msg: "{{ my_list }}"
- name: Concatenate the public keys
set_fact:
my_joined_list: "{{ item | join('\n') }}"
with_items:
- "{{ my_list }}"
- name: Examine the joined string
debug:
msg: "{{ my_joined_list }}"
I want to get output that looks like:
One fish
Two fish
Red fish
Blue Fish
What I get instead is:
TASK: [Examine the joined string] *********************************************
ok: [hana-np-11.cisco.com] => {
"msg": "B\nl\nu\ne\n \nf\ni\ns\nh"
}
ok: [hana-np-12.cisco.com] => {
"msg": "B\nl\nu\ne\n \nf\ni\ns\nh"
}
ok: [hana-np-13.cisco.com] => {
"msg": "B\nl\nu\ne\n \nf\ni\ns\nh"
}
ok: [hana-np-14.cisco.com] => {
"msg": "B\nl\nu\ne\n \nf\ni\ns\nh"
}
ok: [hana-np-15.cisco.com] => {
"msg": "B\nl\nu\ne\n \nf\ni\ns\nh"
}
How do I properly concatenate a list of strings with the newline character?
| Solution
join filter works on lists, so apply it to your list:
- name: Concatenate the public keys
set_fact:
my_joined_list: "{{ my_list | join('\n') }}"
Explanation
While my_list in your example is a list, when you use with_items, in each iteration item is a string. Strings are treated as lists of characters, thus join splits them.
It’s like in any language: when you have a loop for i in (one, two, three) and refer to i inside the loop, you get only one value for each iteration, not the whole set.
Remarks
Don’t use debug module, but copy with content to have\n rendered as newline.
The way you create a list is pretty cumbersome. All you need is (quotation marks are also not necessary):
- name: Create the list
set_fact:
my_list:
- "One fish"
- "Two fish"
- "Red fish"
- "Blue fish"
| Ansible | 47,244,834 | 59 |
I've seen the question asked in a roundabout sort of way, but not conclusively answered. What I want to do is straightforward: I want to copy a file index.php to the remote host at /var/www/index.php, but only if it doesn't already exist.
I've tried using creates and only_if but I don't think these are intended for the purpose I want here. Can anyone supply some examples of how I would go about this?
| Assuming index.php exists in the role's files subdirectory:
- copy:
src: index.php
dest: /var/www/index.php
force: no
The decisive property is force. As the documentation explains, the default is yes, which will replace the remote file when contents are different than the source. If no, the file will only be transferred if the destination does not exist.
| Ansible | 21,646,033 | 58 |
So I figured I should start using Ansible Galaxy when possible, instead of writing my own roles. I just installed my first role and it was installed to /etc/local/ansible/roles (I am on OSX). Now I wonder: how do you install these roles where you actually need them? Do I just copy the role to where I need it, or is there an Ansible way of doing it?
| Yes, you would copy them according to a sample project structure:
site.yml
webservers.yml
fooservers.yml
kubernetes.yaml
roles/
common/
files/
templates/
tasks/
handlers/
vars/
meta/
webservers/
files/
templates/
tasks/
handlers/
vars/
meta/
kubernetes/
files/
templates/
tasks/
handlers/
vars/
meta/
or you can just run ansible-galaxy with the -p ROLES_PATH or --roles-path=ROLES_PATH option to install it under /your/project/root
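For example (the role name is a placeholder):
ansible-galaxy install -p ./roles username.rolename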
You can also use the /etc/local/ansible directory as your project root if you'd like to.
Additionally, you can get help by running the command ansible-galaxy install --help
| Ansible | 22,201,306 | 58 |
I am using ansible to script a deployment for an API. I would like this to work sequentially through each host in my inventory file so that I can fully deploy to one machine at a time.
With the out-of-the-box behaviour, each task in my playbook is executed for each host in the inventory file before moving on to the next task.
How can I change this behaviour to execute all tasks for a host before starting on the next host? Ideally I would like to only have one playbook.
Thanks
| Have a closer look at Rolling Updates:
What you are searching for is
- hosts: webservers
serial: 1
tasks:
- name: ...
| Ansible | 27,315,469 | 58 |
I'm trying to check if the version supplied is a valid supported version. I've set the list of acceptable versions in a variable, and I want to fail out of the task if the supplied version is not in the list. However, I'm unsure of how to do that.
#/role/vars/main.yml
---
acceptable_versions: [2, 3, 4]
and
#/role/tasks/main.yml
---
- fail:
msg: "unsupported version"
with_items: "{{acceptable_versions}}"
when: "{{item}} != {{version}}"
- name: continue with rest of tasks...
Above is sort of what I want to do, but I haven't been able to figure out if there's a one line way to construct a "list contains" call for the fail module.
| You do not need {{}} in when conditions. What you are searching for is:
- fail: msg="unsupported version"
when: version not in acceptable_versions
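An equivalent with the assert module, which also reports success explicitly, might be (fail_msg requires Ansible 2.7+; older versions use msg):
- assert:
    that: version in acceptable_versions
    fail_msg: "unsupported version"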
| Ansible | 28,080,145 | 58 |
I have a bunch of servers that have four physical drives on them (/dev/sda, sdb, sdc, and sdd). sda has the OS installed on it.
I need to format each drive except sda. I need to check if each drive has data on it. If it does, then I shouldn't format it.
# This will get all physical disks (sda, sdb, sdc, etc) and assign them to disk_var
- name: Get disks
set_fact: disk_var="{{hostvars[inventory_hostname]["ansible_devices"].keys()|list}}"
- name: Check if the disk is partitioned and also ignore sda
stat: path=/dev/{{item}}1
with_items: disk_var
when: item != 'sda'
register: base_secondary_partition_{{item}}
- name: Create GPT partition table
command: /sbin/parted -s /dev/{{item}} mklabel gpt
with_items: disk_var
when: item != 'sda' and base_secondary_partition_{{item}}.stat.exists == false
There are clearly more steps involved in formatting these drives, but it fails at the last task when creating the GPT partition table.
Here's what it looks like when it runs. You'll see that it fails at the last task:
TASK: [role | Get disks] ******************************************************
ok: [server1.com]
TASK: [role | Check if the disk is partitioned] *******************************
skipping: [server1.com] => (item=sda)
ok: [server1.com] => (item=sdd)
ok: [server1.com] => (item=sdb)
ok: [server1.com] => (item=sdc)
TASK: [role | Create GPT partition table] *************************************
fatal: [server1.com] => error while evaluating conditional: base_secondary_partition_sdd.stat.exists == false
FATAL: all hosts have already failed -- aborting
Any idea how I can check the conditional base_secondary_partition_{{item}}.stat.exists? I need to make sure that if there's data on the drive, it will not format it.
| You do not need to salt the registered variable name with the item. When you register the result of a loop (e.g. with_items), the registered value will contain a key results which holds a list of all results of the loop. (See docs)
Instead of looping over your original device list, you can then loop over the registered results of the first task:
- name: Check if the disk is partitioned and also ignore sda
stat: path=/dev/{{item}}1
with_items: "{{ disk_var }}"
when: item != 'sda'
register: device_stat
- name: Create GPT partition table
command: /sbin/parted -s /dev/{{ item.item }} mklabel gpt
with_items: "{{ device_stat.results }}"
when:
- item is not skipped
- item.stat.exists == false
The condition item is not skipped ensures that elements which were skipped in the original loop (sda) will not be processed.
While that might be a solution to your problem, your question is very interesting. There seems to be no eval feature in Jinja2. While you can concatenate strings, you cannot use the resulting string as a variable name to get to its value...
| Ansible | 32,214,529 | 58 |
I am new to Ansible and I am trying to implement it. I tried all the possible ways present on the Internet and also all questions related to it, but still I can't resolve the error. How can I fix it?
I installed Ansible on my MacBook Pro. I created a VM whose IP address is 10.4.1.141; the host IP address is 10.4.1.140.
I tried to connect to my VM from the host via SSH. It connected with the following command:
ssh rajatg@10.4.1.141
And I got the shell access. This means my SSH connection is working fine.
Now I tried the following command for Ansible:
ansible all -m ping
And the content of /etc/ansible/hosts is 10.4.1.141.
Then it shows the following error:
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to rerun the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Then I tried creating the config file in .ssh/ folder on the host machine, but the error is still the same.
The content of the config file is:
IdentityFile ~/.ssh/id_rsa
which is the path to my private key.
Then I ran the same command ansible all -m ping and got the same error again.
When I tried another command,
ansible all -m ping -u user --ask-pass
Then it asked for the SSH password. I gave it (I am very sure the password is correct), but I got this error:
10.4.1.141 | FAILED => FAILED: Authentication failed.
This is the log using -vvvv:
<10.4.1.141> ESTABLISH CONNECTION FOR USER: rajatg
<10.4.1.141> REMOTE_MODULE ping
<10.4.1.141> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/rajatg/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 10.4.1.141 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && echo $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007'
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
I am still not able to figure out what the problem is. Asking here is my last resort after doing all my research. This is the link I referred to.
| I fixed the issue. The problem was in my /etc/ansible/hosts file.
The content written in /etc/ansible/hosts was 10.4.1.141. But when I changed it to rajatg@10.4.1.141, then the issue got fixed.
| Ansible | 33,280,244 | 58 |